by Marth
## ⚙️ How it works

1. The workflow starts from a manual trigger or a form submission containing project details.
2. It extracts the key input data: client name, email, project type, deadline, and brand folder (optional).
3. A Google Drive folder is automatically created inside a designated parent folder.
4. A shareable link for the newly created folder is generated.
5. A personalized email containing the project details and folder link is composed and sent to the client via Gmail.

## 🛠️ Set up steps

- **Google Drive Setup**: Connect your Google Drive credentials in n8n and set the parent folder ID under which all project folders should be created.
- **Gmail Setup**: Connect a Gmail account with the required access, then customize the subject and message template in the Gmail node.
- **Input Data Preparation**: Ensure the following input fields are provided: `client_name`, `contact_email`, `project_type`, `deadline`, `brand_drive_folder` (optional).
- **Test & Deploy**: Use mock data or a test trigger to validate the workflow (see the sample payload below). Once confirmed, deploy it with the actual trigger (e.g. webhook or form submission).
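For testing, a minimal mock payload could look like the snippet below. The field names match the inputs listed above; the example values and the test webhook URL are placeholders, so adjust them to your own trigger.

```javascript
// Sample test payload for the inputs listed above.
// Values and the webhook URL are placeholders for illustration only.
const testInput = {
  client_name: "Acme GmbH",
  contact_email: "jane.doe@example.com",
  project_type: "Brand redesign",
  deadline: "2025-09-30",
  brand_drive_folder: "" // optional, leave empty if not used
};

// If the workflow is triggered by a webhook, the mock data could be sent like this:
fetch("https://your-n8n-instance.com/webhook-test/new-project", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(testInput)
})
  .then((res) => res.json())
  .then(console.log);
```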
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. It connects Discord conversations with custom automations, APIs, and AI services through n8n.

Full guide: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

## Discord Bot Summary

### Overview

The Discord bot listens for mentions, forwards questions to an n8n workflow, processes the responses, and replies in Discord. This workflow is intended for Discord users who want to offer AI interactions in their channels.

### What do you need?

You need a Discord account as well as a Google Cloud Project.

### Key Features

1. **Listens for Mentions**: The bot monitors Discord channels for messages that mention it. **Optional configuration**: it can be set to respond only in a specific channel.
2. **Forwards Questions to n8n**: When a user mentions the bot and asks a question, the bot extracts the question and sends it, along with channel and user information, to an n8n webhook URL.
3. **Processes Data in n8n**: The n8n workflow receives the question and can interact with AI services (e.g., to generate responses), access databases or external APIs, and perform custom logic. n8n formats the response and sends it back to the bot.
4. **Replies to Discord with n8n's Response**: The bot receives the response from n8n and replies to the user's message in the Discord channel. **Long responses**: replies exceeding Discord's 2000-character limit are chunked into multiple messages.
5. **Error Handling**: Covers issues with n8n communication and response formatting, and manages cases where no question is asked or an invalid response is received from n8n.
6. **Typing Indicator**: While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.
7. **Status Update**: For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing the request.

## Step-by-Step Setup Guide (as per the GitHub instructions)

### Key Takeaways

- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

### 1. n8n: Create & Expose Your Webhook

**New Workflow**: Log into your n8n instance, click Create Workflow (➕), and name it, e.g. "Discord Bot Handler".

**Webhook Trigger**: Add a node (➕) → search for Webhook and set: Authentication: None (or your choice); HTTP Method: POST; Path: e.g. `/discord-bot`. Click Execute Node to activate it.

**Copy Webhook URL**: After execution, copy the Production Webhook URL. You'll paste this into your bot's `.env`.

**Build Your Logic**: Chain additional nodes (AI, database lookups, etc.) as required.

**Format the JSON Response**: Insert a Function node before the end:

```js
return {
  json: {
    answer: "Your processed reply"
  }
};
```

**Respond to Webhook**: Add Respond to Webhook as the final node and point it at your Function node's output (with the `answer` field).

**Activate**: Toggle Active in the top-right and Save.

### 2. Discord Developer Portal: App & Bot

**New Application**: Visit the Discord Developer Portal, click New Application, and name it. Go to Bot → Add Bot.
**Enable Intents & Permissions**: Under Privileged Gateway Intents, toggle Message Content Intent. Under Bot Permissions, check: Read Messages/View Channels, Send Messages, Read Message History.

**Grab Your Token**: In Bot → click Copy (or Reset Token). Store it securely.

**Invite Link (OAuth2 URL)**: Go to OAuth2 → URL Generator, select the scopes `bot` and `applications.commands`, and under Bot Permissions select the same permissions as above. Copy the generated URL, open it in your browser, and invite your bot.

### 3. Server Prep: Node.js & Project Setup

**Install Node.js v20.x**

```bash
sudo apt purge nodejs npm
sudo apt autoremove
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
node -v   # Expect v20.x.x
npm -v    # Expect 10.x.x
```

**Project Folder**

```bash
mkdir discord-bot
cd discord-bot
```

**Initialize & Dependencies**

```bash
npm init -y
npm install discord.js axios dotenv
```

### 4. Bot Code & Configuration

**Environment Variables**: Create `.env` (`nano .env`) and populate it:

```bash
DISCORD_BOT_TOKEN=your_bot_token
N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
# Optional: restrict to one channel
TARGET_CHANNEL_ID=123456789012345678
```

**Bot Script**: Create `index.js` (`nano index.js`) and implement the following (a minimal sketch appears at the end of this guide):

- Import `dotenv`, `discord.js`, and `axios`.
- Set up the client with the MessageContent intent.
- On `messageCreate`:
  - Ignore bots and non-mentions.
  - (Optional) Filter by channel ID.
  - Extract and validate the user's question.
  - Send "typing…" every 5 s; after 20 s, send a status update if still processing.
  - POST to your n8n webhook with `question`, `channelId`, `userId`, and `userName`.
  - Parse the various response shapes to find `answer`.
  - If `answer.length ≤ 2000`, call `message.reply(answer)`; otherwise split into ~1900-character chunks at sentence/paragraph breaks and send them sequentially.
  - On errors, clear intervals, log details, and reply with an error message.
- Login: `client.login(process.env.DISCORD_BOT_TOKEN);`

### 5. Deployment: Keep It Alive with PM2

**Install PM2**

```bash
npm install -g pm2
```

**Start & Monitor**

```bash
pm2 start index.js --name discord-bot
pm2 status
pm2 logs discord-bot
```

**Auto-Start on Boot**

```bash
pm2 startup
# Follow the printed command, e.g.:
# sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user
pm2 save
```

### 6. Test & Troubleshoot

**Functional Test**: In your Discord server, send `@YourBot What's the weather like?` and expect a reply from your n8n workflow.

**Common Pitfalls**

- No reply → check `pm2 logs discord-bot`.
- Intent errors → verify Message Content Intent in the Developer Portal.
- Webhook failures → ensure the workflow is active and the URL is correct.
- Formatting issues → confirm your Function node returns `json.answer`.

**Inspect Raw Data**: Search your logs for `Complete response from n8n:` to debug payload shapes.
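For reference, here is a minimal sketch of what `index.js` could look like, assuming discord.js v14 and the environment variable names above. It is not the repository's exact code, and the error handling and chunking are simplified; consult the linked GitHub guide for the full implementation.

```javascript
// Minimal illustrative sketch (not the full bot from the GitHub guide).
require("dotenv").config();
const { Client, GatewayIntentBits } = require("discord.js");
const axios = require("axios");

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent, // requires the Message Content Intent
  ],
});

client.on("messageCreate", async (message) => {
  if (message.author.bot || !message.mentions.has(client.user)) return;
  if (process.env.TARGET_CHANNEL_ID && message.channel.id !== process.env.TARGET_CHANNEL_ID) return;

  // Strip the mention to get the raw question.
  const question = message.content.replace(/<@!?\d+>/g, "").trim();
  if (!question) return message.reply("Please ask a question after mentioning me.");

  // Keep the typing indicator alive while n8n processes the request.
  const typing = setInterval(() => message.channel.sendTyping().catch(() => {}), 5000);

  try {
    const { data } = await axios.post(process.env.N8N_WEBHOOK_URL, {
      question,
      channelId: message.channel.id,
      userId: message.author.id,
      userName: message.author.username,
    });

    // Handle a couple of common response shapes from n8n.
    const answer = data.answer || data.output || JSON.stringify(data);

    // Respect Discord's 2000-character limit by chunking long replies.
    for (let i = 0; i < answer.length; i += 1900) {
      await message.reply(answer.slice(i, i + 1900));
    }
  } catch (err) {
    console.error("Error talking to n8n:", err.message);
    await message.reply("Sorry, something went wrong while processing your request.");
  } finally {
    clearInterval(typing);
  }
});

client.login(process.env.DISCORD_BOT_TOKEN);
```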
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive marketing automation and booking system that combines Excel-based lead management with voice-powered customer interactions. The system uses VAPI for voice communication and Excel/Google Sheets for data management, making it ideal for restaurants seeking to automate marketing campaigns and streamline booking processes through intelligent voice AI technology.

## Good to know

- Voice processing requires an active VAPI subscription with per-minute billing.
- Excel operations are handled in real time with immediate data synchronization.
- The system can handle multiple simultaneous voice calls and lead processing.
- All customer data is stored securely in Excel with proper formatting and validation.
- Marketing campaigns can be scheduled and automated based on lead data.

## How it works

### Lead Management & Marketing Automation Workflow

1. **New Lead Trigger**: Excel triggers capture new leads when customers are added to the lead management spreadsheet.
2. **Lead Preparation**: The system processes and formats lead data, extracting relevant details such as name, phone, preferences, and booking history (see the sketch at the end of this description).
3. **Campaign Loop**: An automated loop processes multiple leads for batch marketing campaigns.
4. **Voice Marketing Call**: VAPI initiates personalized voice calls to leads with tailored restaurant offers and booking invitations.
5. **Response Tracking**: All call results and lead responses are logged back to Excel for campaign analysis.

### Booking & Order Processing Workflow

1. **Voice Response Capture**: A VAPI webhook triggers when customers respond to marketing calls or make direct booking requests.
2. **Response Storage**: Customer responses and booking preferences are immediately saved to Excel sheets.
3. **Information Extraction**: The system processes natural-language responses to extract booking details (party size, preferred times, special requests).
4. **Calendar Integration**: Booking information is automatically scheduled in restaurant management systems.
5. **Confirmation Loop**: Automated follow-up voice messages confirm bookings and provide additional restaurant information.

## Excel Sheet Structure

### Lead Management Sheet

| Column | Description |
|--------|-------------|
| lead_id | Unique identifier for each lead |
| customer_name | Customer's full name |
| phone_number | Primary contact number |
| email | Customer email address |
| last_visit_date | Date of last restaurant visit |
| preferred_cuisine | Customer's food preferences |
| party_size_typical | Usual number of guests |
| preferred_time_slot | Preferred dining times |
| marketing_consent | Permission for marketing calls |
| lead_source | How the customer was acquired |
| lead_status | Current status (new, contacted, converted, inactive) |
| last_contact_date | Date of last marketing contact |
| notes | Additional customer information |
| created_at | Lead creation timestamp |

### Booking Responses Sheet

| Column | Description |
|--------|-------------|
| response_id | Unique response identifier |
| customer_name | Customer's name from the call |
| phone_number | Contact number used for the call |
| booking_requested | Whether the customer wants to book |
| party_size | Number of guests requested |
| preferred_date | Requested booking date |
| preferred_time | Requested time slot |
| special_requests | Dietary restrictions or special occasions |
| call_duration | Length of the VAPI call |
| call_outcome | Result of the marketing call |
| follow_up_needed | Whether additional contact is required |
| booking_confirmed | Final booking confirmation status |
| created_at | Response timestamp |

### Campaign Tracking Sheet
| Column | Description |
|--------|-------------|
| campaign_id | Unique campaign identifier |
| campaign_name | Descriptive campaign title |
| target_audience | Lead segments targeted |
| total_leads | Number of leads contacted |
| successful_calls | Calls that connected |
| bookings_generated | Number of bookings from the campaign |
| conversion_rate | Percentage of leads converted |
| campaign_cost | Total VAPI usage cost |
| roi | Return on investment |
| start_date | Campaign launch date |
| end_date | Campaign completion date |
| status | Campaign status (active, completed, paused) |

## How to use

1. **Setup**: Import the workflow into your n8n instance and configure VAPI credentials.
2. **Excel Configuration**: Set up Excel/Google Sheets with the sheet structure provided above.
3. **Lead Import**: Populate the Lead Management sheet with customer data from your sources.
4. **Campaign Setup**: Configure the marketing message templates in the VAPI nodes to match your restaurant's branding.
5. **Testing**: Test voice commands such as "I'd like to book a table for tonight" or "What are your specials?"
6. **Automation**: Enable the triggers to automatically process new leads and schedule marketing campaigns.
7. **Monitoring**: Track campaign performance through the Campaign Tracking sheet and adjust strategies accordingly.

The system can handle multiple concurrent voice calls and scales with your restaurant's marketing needs.

## Requirements

- **VAPI account** for voice processing and natural language understanding
- **Excel/Google Sheets** for storing lead, booking, and campaign data
- **n8n instance** with Excel/Sheets and VAPI integrations enabled
- **Valid phone numbers** for lead contact, and compliance with local calling regulations

## Customising this workflow

- **Multi-location support**: Adapt the voice AI automation for restaurant chains with location-specific offers.
- **Seasonal campaigns**: Try popular use cases such as holiday promotions, special-event marketing, or loyalty program outreach.
- **Integration options**: Extend the workflow with CRM integration, SMS follow-ups, and social media campaign coordination.
- **Advanced analytics**: Add nodes for detailed campaign performance analysis and customer segmentation.
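As a rough illustration of the Lead Preparation step above, an n8n Code node could normalize each spreadsheet row into the fields the campaign loop needs. The column names come from the Lead Management sheet; the output shape and the greeting text are assumptions for this sketch and do not show VAPI's actual call API.

```javascript
// Illustrative n8n Code node ("Run Once for All Items").
// Maps Lead Management sheet rows into a simplified structure for the campaign loop.
// The output field names and message text are assumptions, not VAPI's API schema.
return items
  .filter((item) => item.json.marketing_consent === true || item.json.marketing_consent === "yes")
  .map((item) => {
    const lead = item.json;
    return {
      json: {
        leadId: lead.lead_id,
        name: lead.customer_name,
        phone: lead.phone_number,
        preferredTime: lead.preferred_time_slot,
        // Personalized opener the voice agent could read out.
        message: `Hi ${lead.customer_name}, we have a special ${lead.preferred_cuisine || "chef's"} menu this week. Would you like to book a table?`,
      },
    };
  });
```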
by OneClick IT Consultancy P Limited
## Automate Customer Feedback Analysis with Google Sheets, WhatsApp, and Email

### Introduction: Drowning in Data, Starving for Insight?

Imagine this: your team launches a new feature. Feedback starts pouring in: emails, support tickets, social media mentions, and survey responses. You know gold is buried in there, but manually reading, tagging, and summarising hundreds, maybe thousands, of comments? It takes days, maybe weeks. By the time you have a clear picture, the moment might have passed. Sounds exhausting, right?

What if you could have an AI assistant tirelessly working 24/7, instantly analysing every piece of feedback the moment it arrives? This isn't science fiction anymore. AI-powered automation can transform this slow, manual chore into a real-time insight engine, giving you the pulse of your customer base almost instantly. Let's explore how.

### What's the Goal? Understanding the Workflow Objective

The core challenge is transforming raw, unstructured customer feedback into actionable intelligence quickly and efficiently.

**The Problem:**

- **Manual overload**: Sifting through vast amounts of feedback manually is incredibly time-consuming and prone to human error or bias.
- **Delayed insights**: The lag between receiving feedback and understanding it means missed opportunities and slow responses to critical issues.
- **Inconsistent analysis**: Different team members might interpret or categorize feedback differently, leading to unreliable trend spotting.

**The AI Solution:**

- **Automated data collection**: Connects directly to feedback sources (surveys, social media, review sites, helpdesks).
- **AI-powered analysis**: Uses large language models (LLMs) like GPT-4 or Claude to analyze sentiment, extract key topics, and summarize comments.
- **Intelligent categorization**: Automatically tags feedback based on predefined or dynamically identified themes (e.g., "bug report", "feature request", "pricing issue").
- **Real-time reporting**: Pushes structured insights into dashboards or databases, or triggers notifications for immediate awareness.

**Outcome:** You move from reactive problem-solving based on stale data to proactive, strategic decisions driven by a near real-time understanding of customer sentiment and needs.

### Why Does It Matter? Achieving 100X Productivity and Efficiency

Automating feedback isn't just about saving time; it's about scaling your ability to listen and respond smarter, not harder. When you leverage AI, the gains aren't incremental - they're exponential. Here's why this is a game changer:

- **Blazing speed**: Analyse feedback 100x faster (or more) than manual methods. Insights appear in minutes or hours, not days or weeks.
- **Unhuman scalability**: Process virtually unlimited volumes of feedback without needing to scale your human team proportionally. AI doesn't get tired or bored.
- **Consistent accuracy**: AI applies analysis rules consistently, reducing human bias and ensuring reliable categorisation and sentiment scoring over time.
- **Proactive trend spotting**: Identify emerging issues or popular requests much earlier by analysing aggregated data automatically. Spot patterns humans might miss.
- **Free up your team**: Let your talented team focus on acting on insights (improving products, fixing issues, engaging customers) instead of drowning in data entry.

### How It Works: AI Automation Step by Step

Getting this set up is more straightforward than you might think, especially with tools like n8n acting as the central hub.
**1. Automated Feedback Triggering**

- **CRM/Website Event Node**: Trigger feedback requests after purchases (eCommerce), support ticket resolution, or feature usage (SaaS).
- **Time-Based Node**: Schedule recurring NPS surveys and customer health check-ups.
- **Chat App Node (WhatsApp/Telegram/Messenger)**: Send conversational feedback prompts, e.g. "How was your recent experience with [specific interaction]?"

**2. Multi-Channel Feedback Collection**

- **Email Node (SendGrid/Mailchimp)**: Send personalized feedback requests and embed 1-5 rating widgets.
- **SMS Node (Twilio)**: Short mobile surveys, e.g. "Reply 1-5: How satisfied were you with your purchase?"
- **Webhook Node**: Capture in-app feedback and process chatbot responses.
- **Social Media Node**: Monitor Twitter/X and Instagram mentions and analyze comments for unsolicited feedback.

**3. AI-Powered Real-Time Analysis**

- **OpenAI/ChatGPT Node (Sentiment Analysis)**: Prompt: "Analyze sentiment (positive/neutral/negative) and key themes from: [customer feedback]". Output fields: sentiment score (1-5), urgency flag (high/medium/low), key topics (billing, support, product, etc.). A sketch of how these fields might be normalized appears at the end of this section.
- **Translation Node (optional)**: Convert multilingual feedback into a consistent language.

**4. Instant AI Response System**

- **Conditional Node (routing logic)**: Positive feedback → send a thank-you plus a referral ask; neutral feedback → follow-up question for details; negative feedback → escalate to the human team.
- **AI Response Generator Node**: Prompt: "Create a personalized response to [feedback type] about [topic] with sentiment [score]". Adjust the tone (professional/friendly/empathetic).
- **Escalation Node**: Route critical issues to the support team with full context.

**5. Automated Insights & Alerts**

- **Dashboard Node**: Real-time sentiment tracking and emerging issue detection.
- **Alert Node (Slack/Teams/Email)**: Notify teams of negative trends, e.g. "3+ complaints about checkout flow in the past hour!"
- **Report Node**: Auto-generate weekly/monthly summaries, e.g. "Top 5 customer pain points this week".
- **Product Board Integration**: Auto-create feature requests and prioritize them based on feedback volume.

### Tools of the Trade: AI & Automation Tech Stack

You don't need a massive, complex tech stack. Focus on a few core, powerful tools:

- **n8n**: The workflow automation platform. This is the 'glue' that connects everything and orchestrates the process without needing deep coding knowledge. Honestly, it's incredibly versatile.
- **OpenAI (GPT-4/GPT-4o)**: State-of-the-art LLM for high-quality text analysis, summarization, and classification. Great for complex understanding.
- **Anthropic (Claude 3 Sonnet/Opus)**: Another top-tier LLM, known for strong performance in analysis and handling large contexts. Often a great alternative or complement to GPT models.
- **Feedback source APIs**: Connectors for where your feedback lives (e.g., Typeform, SurveyMonkey, Twitter API, Zendesk API, Google Play/App Store review APIs).
- **Data storage/destination**: Where the processed insights go (e.g., Google Sheets, Airtable, Notion, a PostgreSQL database, BigQuery).
- **(Optional) Visualization tool**: Tools like Metabase, Grafana, Looker Studio, or Power BI to create dashboards from your structured feedback data.

### What's the Cost? Estimated Budget

Let's talk investment. You're mainly looking at:

- **Setup costs**: Primarily your time (or a consultant's) to design and build the initial workflow in n8n. Depending on complexity, this could range from a few hours to a few days. No major software licenses are usually needed upfront if you use self-hosted n8n or start with free/low-tier cloud plans.
- **AI API calls**: You pay per usage to OpenAI/Anthropic.
  Costs depend heavily on volume but can start from $20-$50/month for moderate usage and scale up. Newer models are getting more cost-effective.
- **n8n hosting**: Free if self-hosted (requires a server), or tiered cloud pricing starting around $20/month.
- **Feedback source APIs**: Some platforms may have API access costs or rate limits on free tiers.

**Total estimated monthly cost**: For many businesses, ongoing costs range from $50 to $500+ per month, highly dependent on feedback volume and AI model choice.

The return on investment (ROI) is typically rapid. Consider the hours saved from manual analysis, the value of faster issue resolution, preventing churn, and the benefits of making product decisions based on real-time data. It often pays for itself very quickly.

### Who Benefits? Target Users and Industries

This automated feedback loop isn't niche; it's valuable across many sectors and roles.

**Top industries:**

- **SaaS (Software as a Service)**: Understanding user friction, feature requests, and bug reports.
- **E-commerce & retail**: Analyzing product reviews, post-purchase surveys, and support chats.
- **Hospitality & travel**: Processing guest reviews and survey feedback.
- **Mobile apps**: Monitoring app store reviews and in-app feedback.
- **Financial services**: Gauging customer satisfaction with services and identifying pain points.

**Key roles:**

- **Product managers**: Prioritizing features, understanding user needs, tracking launch reception.
- **Customer experience (CX) / success managers**: Monitoring customer health, identifying churn risks, and improving support processes.
- **Marketing teams**: Understanding brand perception, campaign feedback, and the voice of the customer.
- **Support leads**: Identifying recurring issues, measuring support quality, spotting training needs.

This approach works for businesses of all sizes, from startups wanting to stay lean and agile to large enterprises needing to manage massive feedback volumes.

### How to use this workflow

Importing a workflow in n8n is a straightforward process that lets you reuse pre-built or shared workflows to save time. Below is a step-by-step guide based on the official documentation and community resources.

**1. Obtain the workflow JSON**

- **Source the workflow**: Workflows are typically shared as JSON files or code snippets. You might receive them from the n8n community (e.g., the n8n.io workflows page), a colleague or tutorial (e.g., a .json file or copied JSON code), or an export from another n8n instance.
- **Format**: Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or as text copied to your clipboard.

**2. Access the n8n workflow editor**

- **Log in to n8n**: Open your n8n instance (n8n Cloud or self-hosted) and navigate to the Workflows tab in the dashboard.
- **Open a new workflow**: Click Add Workflow to create a blank workflow, or open an existing workflow if you want to merge the imported one.

**3. Import the workflow**

- **Option 1: Import via JSON code (clipboard)**: In the n8n editor, click the three dots (⋯) in the top-right corner to open the menu, select Import from Clipboard, paste the workflow JSON into the text box, and click Import.
- **Option 2: Import via JSON file**: In the n8n editor, click the three dots (⋯) in the top-right corner, select Import from File, choose the .json file from your computer, and click Open to import the workflow.
Note: If the workflow includes nodes for apps requiring credentials (e.g., Google Sheets), you’ll need to configure those credentials separately after importing.
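To make the sentiment-analysis step above concrete, here is a minimal sketch of an n8n Code node that could sit after the OpenAI node and normalize the model's reply into the fields named earlier (sentiment score, urgency flag, key topics). The property names and the assumption that the model answers with JSON text are illustrative, not part of the original template.

```javascript
// Illustrative n8n Code node: normalize the LLM's reply into structured fields.
// Assumes the upstream OpenAI node was prompted to answer with a JSON object;
// the property names below are assumptions for this sketch.
const raw = $input.first().json.message?.content ?? $input.first().json.text ?? "{}";

let parsed;
try {
  parsed = JSON.parse(raw);
} catch (e) {
  // Fall back to a neutral record if the model did not return valid JSON.
  parsed = { sentiment_score: 3, urgency: "low", topics: [] };
}

return [
  {
    json: {
      sentiment_score: Number(parsed.sentiment_score) || 3, // 1-5
      urgency: ["high", "medium", "low"].includes(parsed.urgency) ? parsed.urgency : "low",
      topics: Array.isArray(parsed.topics) ? parsed.topics : [],
      analyzed_at: new Date().toISOString(),
    },
  },
];
```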
by Dhruv Dalsaniya
## Description

This n8n workflow automates a Discord bot that fetches messages from a specified channel and sends AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow leverages Redis for memory persistence, so conversation history is maintained even if the workflow restarts, providing a seamless user experience.

## How It Works

- The bot listens for new messages in a specified Discord channel.
- It sends the messages to an AI model for response generation.
- The AI-generated reply is posted as a thread under the original message.
- The bot runs on an Ubuntu server and is managed with PM2 for uptime stability.
- The Discord bot (a Python script) acts as the bridge, capturing messages from Discord and sending them to the n8n webhook.
- The n8n workflow then processes these messages, interacts with the AI model, and sends the AI's response back to Discord via the bot.

## Prerequisites to host the bot

- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history so the AI can maintain context across multiple interactions. This is crucial for coherent, continuous conversations.

## Set Up Steps

**1️⃣ Create a Discord Bot**

1. Go to the Discord Developer Portal.
2. Click "New Application", enter a name, and create it.
3. Navigate to Bot > Reset Token, then copy the Bot Token.
4. Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
5. Under OAuth2 > URL Generator, select the bot scope and the required permissions.
6. Copy the generated URL, open it in a browser, select your server, and click Authorize.
**2️⃣ Deploy the Bot on Pella**

Create a new folder `discord-bot` and navigate into it.

Create and configure an `.env` file to store your bot token and the n8n webhook URL (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

Create a file `main.py`, copy the bot script below into it, and save it:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot Configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True          # Enable message events
intents.guilds = True
intents.message_content = True   # Required to read message content

client = discord.Client(intents=intents)


@client.event
async def on_ready():
    print(f'Logged in as {client.user}')


@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore the bot's own messages

    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            fetched_message = await message.channel.fetch_message(message.id)  # Ensure correct fetching
            payload = {
                "channel_id": str(fetched_message.channel.id),   # Ensure it's a string
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),    # Ensure proper formatting
                "message_id": str(fetched_message.id),           # Ensure the ID is a string
                "user_id": str(fetched_message.author.id)        # Ensure the user ID is also a string
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)

            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")


client.run(TOKEN)
```

Create `requirements.txt` and copy the following (note that `requests` is also required by the script above):

```
discord
requests
python-dotenv
```

**3️⃣ Follow the video to set up the bot so it runs 24/7**

Tutorial: https://www.youtube.com/watch?v=rNnK3XlUtYU

Note: the free plan expires after 24 hours, so please opt for a paid plan in Pella to keep your bot running.

**4️⃣ n8n Workflow Configuration**

The n8n workflow consists of the following nodes:

- **Get Discord Messages (Webhook)**: The entry point for messages from the Discord bot. It receives the `channel_id`, `chat_message`, `timestamp`, `message_id`, and `user_id` from Discord when a new message is posted in a configured channel. Its webhook path is `/getmessage` and it expects a POST request.
- **Chat Agent (Langchain Agent)**: Processes the incoming Discord message (`chat_message`). It is configured as a conversational agent, integrating the language model and memory to generate an appropriate response, with a prompt that keeps the reply concise (under 1800 characters).
- **OpenAI -4o-mini (Langchain Language Model)**: Connects to the OpenAI API and uses the `gpt-4o-mini-2024-07-18` model to generate AI responses. It is the core AI component of the workflow.
- **Message History (Redis Chat Memory)**: Manages the conversation history using Redis. It stores and retrieves chat messages so the Chat Agent maintains context for each user based on their `user_id`. This is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool)**: Provides a calculator tool the AI agent can use when a mathematical calculation is required, extending its capabilities beyond text generation.
- **Response from AI (Discord)**: Sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (`message_id`) in the specified `channel_id`.
- **Sticky Note nodes (Sticky Note1 to Sticky Note5, plus Sticky Note)**: Informational nodes within the workflow that provide instructions, code snippets for the Discord bot, and setup guidance, covering the `.env` file, `requirements.txt`, the Python bot code, and general recommendations for channel configuration and adding tools.

**5️⃣ Setting up Redis**

1. Choose a Redis hosting provider: you can use a cloud provider such as Redis Labs or Aiven, or set up your own Redis instance on a VPS.
2. Obtain the Redis connection details: once your instance is set up, you will need the host, port, and password (if applicable).
3. Configure the n8n Redis nodes: in your n8n workflow, configure the "Message History" node with your Redis connection details. Ensure the Redis credential ✅ redis-for-n8n is properly set up with your Redis instance details (host, port, password).

**6️⃣ Customizing the Template**

- **AI model**: You can easily swap the "OpenAI -4o-mini" node for any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Ensure the new language model node is connected to the `ai_languageModel` input of the "Chat Agent" node.
- **Agent prompt**: Modify the `text` parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust the response length.
- **Additional tools**: The "Calculator" node is one example of an AI tool. You can add more Langchain tool nodes (e.g., search, data lookup) and connect them to the `ai_tool` input of the "Chat Agent" node to extend the AI's capabilities. See "Sticky Note5" in the workflow for a reminder.
- **Channel filtering**: Adjust the `LISTEN_CHANNELS` list in your Discord bot's `main.py` to include or exclude the Discord channel IDs where the bot should listen for messages.
- **Thread management**: The "Response from AI" node can be modified to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (`message_reference`).

**7️⃣ Testing Instructions**

1. Start the Discord bot: ensure your `main.py` script is running on Pella.
2. Activate the n8n workflow: make sure it is active and listening for webhooks.
3. Send a message in Discord: go to one of the `LISTEN_CHANNELS` in your server and send a message.
4. Verify the response: the bot should capture the message, send it to n8n, receive an AI-generated response, and post it as a thread under your original message.
5. Check Redis: verify that conversation history is being stored and updated correctly in your Redis instance. Look for keys related to user IDs.

✅ Now your bot is running in the background! 🚀
by Zacharia Kimotho
This workflow is designed to generate prompts for AI agents and store them in Airtable. It starts by receiving a chat message, processes it to create a structured prompt, categorizes the prompt, and finally stores it in Airtable.

## Setup Instructions

### Prerequisites

- **AI model** (e.g. Gemini, OpenAI)
- **Airtable base and table** (or another storage tool)

### Step-by-Step Guide

1. **Clone the workflow**: Copy the provided workflow JSON and import it into your n8n instance.
2. **Configure credentials**: Set up the Google Gemini (PaLM) API credentials and the Airtable Personal Access Token credentials.
3. **Map the Airtable base and table**: Create a copy of the Prompt Library in Airtable, then map the Airtable base and table in the Airtable node.
4. **Customize the prompt template**: Edit the 'Create prompt' node to customize the prompt template as needed.

### Configuration Options

- **Prompt template**: Customize the prompt template in the 'Create prompt' node to fit your specific use case.
- **Airtable mapping**: Ensure the Airtable base and table are correctly mapped in the Airtable node.

## Use Case Examples

This workflow is particularly useful when you want to automate the generation and management of AI agent prompts. For example:

- **Rapid prototyping of AI agents**: Quickly generate and test different prompts for AI agents in various applications.
- **Content creation**: Generate prompts for AI models that create blog posts, articles, or social media content.
- **Customer service automation**: Develop prompts for AI-powered chatbots that handle customer inquiries and support requests.
- **Educational tools**: Create prompts for AI tutors or learning assistants.

**Industries/Professionals:**

- **Software development**: Developers building AI-powered applications.
- **Marketing**: Marketers automating content creation and social media management.
- **Customer service**: Customer service managers implementing AI-driven chatbots.
- **Education**: Educators creating AI-based learning tools.

**Practical Value:**

- **Time savings**: Automates the prompt generation process, saving significant time and effort.
- **Improved prompt quality**: Leverages Google Gemini and structured prompt-engineering principles to generate more effective prompts.
- **Centralized prompt management**: Stores prompts in Airtable for easy access, organization, and reuse.

## Running and Troubleshooting

**Running the workflow**

1. Activate the workflow in n8n.
2. Send a chat message to the webhook URL configured in the "When chat message received" node to trigger the workflow.
3. Monitor the workflow execution in the n8n editor.

**Monitoring execution**

Check the execution log in n8n to see the data flowing through each node and identify any errors.

**Checking for successful completion**

Verify that a new record is created in your Airtable base with the generated prompt, name, and category, and confirm that the "Return results" node sends back confirmation of the prompt in the chat interface.
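For orientation, the record handed to the Airtable node typically carries the three values mentioned above. A minimal sketch, assuming column names like those in the Prompt Library copy (the names and example values are placeholders, so match them to your own base):

```javascript
// Illustrative shape of the data passed to the Airtable node.
// Column names depend on your copy of the Prompt Library base; these are assumptions.
const promptRecord = {
  Name: "Cold-outreach email writer",
  Prompt: "You are an expert sales copywriter. Write a concise, personalized cold email to {{lead_name}} at {{company}}...",
  Category: "Marketing",
};

// In an n8n Code node, returning it like this lets the Airtable node map the columns:
return [{ json: promptRecord }];
```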
**Troubleshooting tips**

- **API issues**: Ensure the AI model and Airtable credentials are correctly configured.
- **Data mapping**: Verify that the Airtable base and table are correctly mapped.
- **Prompt template**: Check the prompt template for any errors or inconsistencies.
- **Error: 400 Bad Request in the Google Gemini nodes**: Caused by an invalid API key or insufficient permissions. Double-check your Google Gemini API key and ensure the API is enabled for your project.
- **Airtable node fails to create a record**: Caused by invalid Airtable credentials, an incorrect Base ID or Table ID, or mismatched column names. Verify your Airtable API key, Base ID, Table ID, and column names, and ensure the data types in n8n match the data types in your Airtable columns.

Follow me on LinkedIn for more.
by Ankur Pata
## ✨ What It Does

Mello is a Claude-powered Slack assistant that helps you stay on top of unread messages across all your channels. It:

- Summarizes conversations contextually using Claude AI.
- Generates reply suggestions and sends them as private (ephemeral) Slack messages.
- Lets you respond instantly with one-click AI-suggested replies.

Perfect for busy teams, founders, and anyone looking to reduce Slack noise and save hours each week.

## 🔧 Setup Instructions

1. **Create a Slack App**
   - Go to Slack API → Your Apps.
   - Click Create New App and set it up for your workspace.
   - Under OAuth & Permissions, add the Bot Token Scopes `commands`, `chat:write`, `channels:history`, `users:read` and the User Token Scopes `channels:history`, `chat:write`.
   - Enable Interactivity and point the Request URL to your n8n webhook (e.g. `/slash-summarize`).
2. **Add the Claude API**
   - Get an API key from Claude (Anthropic).
   - In n8n, set up the Claude API credential (or switch to OpenAI).
3. **Import This Workflow**
   - Go to your n8n instance, click Import, and paste this template.
   - Update any placeholders (Slack app, Claude key, webhook URLs).
   - Follow the inline sticky notes for guidance.
4. **Test It**
   - Type `/summarize` in any Slack channel.
   - Mello will fetch unread messages, summarize them, and show reply buttons in a private message.

⏱ Setup time: ~10 minutes

## 🛠 Workflow Highlights

- Slash command trigger (`/summarize`)
- Slack API integration to fetch messages
- Claude AI for contextual summaries
- Reply suggestions with smart buttons
- Private Slack delivery (ephemeral messages); a short sketch of how ephemeral delivery works appears at the end of this description
- Designed to be easily extended (e.g. add support for OpenAI or custom storage)

## 🔒 Note

This is a lite preview of the full Mello workflow. ✅ The full version includes:

- Slack reply buttons with thread context
- Full OAuth flow with token storage
- MongoDB integration
- Custom Claude/OpenAI configuration
- Hosted version with onboarding, branding & support

💡 Want access to the complete version? 📩 Email nina@baloon.dev
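For orientation, ephemeral delivery for a slash command like `/summarize` is typically done by posting back to the `response_url` that Slack includes in the slash-command payload. A minimal sketch follows; the summary text is a placeholder, and the packaged workflow may implement this step differently.

```javascript
// Illustrative sketch: reply privately (ephemerally) to a /summarize slash command.
// Slack sends the slash-command payload (including response_url) to the n8n webhook.
async function sendEphemeralSummary(responseUrl, summaryText) {
  await fetch(responseUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      response_type: "ephemeral", // visible only to the user who ran the command
      text: summaryText,
    }),
  });
}

// Example usage with a placeholder summary:
// sendEphemeralSummary(payload.response_url, "*#general*: 12 unread messages, mostly about the Q3 launch.");
```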
by Mohan Gopal
## Overview

This release introduces a Voice-Enabled Tour Recommendation System that leverages n8n, the ElevenLabs Voice Agent, OpenAI GPT-4o, and the Pinecone vector DB to deliver personalized travel itineraries based on spoken input. Users speak their preferences to the ElevenLabs voice agent, which then triggers an n8n workflow that returns a tailored tour plan.

## Features

- Voice interaction with an AI-powered travel agent via ElevenLabs
- ChatGPT-4o for contextual understanding and generation
- Dynamic query handling with vector-based search using Pinecone
- Fast response generation via an n8n webhook
- Modular agent memory and role design for scalable enhancement

## Pre-requisites

- n8n account with workflow creation access
- ElevenLabs account with agent and webhook setup
- OpenAI API key (GPT-4o access)
- Pinecone account for the vector database
- A list of vectorized tour packages built with this n8n embedder: https://creators.n8n.io/workflows/5085

## Setup Instructions

**Step 1: Configure the Voice Agent Webhook in ElevenLabs**

- Use the POST method.
- Webhook URL: https://...
- Break the voice input down into: destination, type of tour, number of days, number of passengers (see the example request body at the end of this description).

**Step 2: Set Up the AI Agent Prompt in ElevenLabs**

Use a conversational style with summaries, clarifying questions, and affirmations. Example prompt: "You use a natural speech style and periodically summarize... Your goal is to help callers create a personalized tour plan."

**Step 3: Select the LLM**

- LLM: GPT-4o Mini
- Memory window: up to 5 contexts

**Step 4: Integrate Tools**

- Use the custom tool: n8n
- ID: tool_xxxxxx
- Tool description: "Generates travel plan once the details are collected"

**Step 5: Build the n8n Workflow**

- Trigger: Webhook (POST)
- Process the user input with the Tour Recommendation AI Agent
- Use the OpenAI Chat Model (GPT-4o) for reasoning
- Query the Pinecone Vector Store using the Tour Builder Q&A node
- Respond with a structured Itinerary Plan via the webhook response

## How to use

1. Execute the n8n workflow (the webhook waits for the voice trigger from ElevenLabs).
2. Start the ElevenLabs Voice Agent.
3. Request a tour plan for any destination, giving the details of your tour preferences.
4. Wait for the Voice Agent to respond with tour package suggestions after fetching the tour details from the n8n workflow.
5. Close the conversation.

| Area | Improvement |
| ------------------ | ----------------------------------------------------- |
| 🔉 Voice UX | Natural-sounding travel agent using ElevenLabs |
| 💡 Personalization | ChatGPT-4o adapts based on travel style & preferences |
| 📚 Knowledge Base | Pinecone-powered vector retrieval of real tour data |
| 🔁 Reusability | Modular workflow with reusable embedding tools |
| ⚙️ System Design | Separation of memory, logic, and data layers |

## Who is this for?

- **Travel agencies & DMCs**: Offer ultra-personalized packages based on customer queries and let AI do the matching.
- **Tour package aggregators**: Auto-curate and send matching packages from your catalog, with no manual searching needed.
- **Content & marketing teams**: Craft customized tour recommendations for email campaigns and newsletters.
- **Tech-enabled travel startups**: Embed this intelligence in your workflows, CRMs, or chatbots to delight customers.
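To illustrate Step 1, the ElevenLabs tool call could POST a body like the one below to the n8n webhook. The field names, values, and URL are assumptions for this sketch; match them to whatever your agent's tool configuration actually sends.

```javascript
// Illustrative request the voice agent's tool might send to the n8n webhook.
// Field names and URL are placeholders. Align them with your ElevenLabs tool configuration.
const tourRequest = {
  destination: "Bali",
  tour_type: "family beach holiday",
  number_of_days: 5,
  number_of_passengers: 4,
};

fetch("https://your-n8n-instance.com/webhook/tour-recommendation", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(tourRequest),
})
  .then((res) => res.json())
  .then((plan) => console.log(plan));
```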
by JaredCo
This n8n workflow demonstrates how to transform natural language date and time expressions into structured data with 96%+ accuracy. Parse complex expressions like "early next July", "2 weeks after project launch", or "end of Q3" into precise datetime objects with confidence scoring, timezone intelligence, and business-rules validation for any automation workflow.

## Good to know

- Achieves 96%+ accuracy on complex natural language date expressions
- At the time of writing, this is the most advanced open-source date parser available
- Includes AI learning that improves over time with user corrections
- Supports 6 languages with auto-detection (English, Spanish, French, German, Italian, Portuguese)
- Sub-millisecond response times with intelligent caching
- Enterprise-grade, with business intelligence and timezone handling

## How it works

- **Natural language input**: Receives date expressions via webhook, form, email, or chat.
- **AI-powered parsing**: The date parser processes the text through 50+ custom rule patterns for complex expressions, multi-language auto-detection and smart translation, confidence scoring (0.0-1.0) for AI decision-making, and ambiguity detection with helpful suggestions.
- **Business intelligence**: Applies enterprise rules automatically: holiday calendar awareness (US + international), working-hours validation and warnings, business-day auto-adjustment, and timezone normalization (IANA format).
- **Smart scheduling**: Creates calendar events with structured datetime objects (start/end times), confidence metadata for workflow decisions, alternative interpretations for ambiguous inputs, and rich context for follow-up actions.
- **Integration ready**: Outputs connect seamlessly to Google Calendar, Outlook, Apple Calendar, CRM systems (HubSpot, Salesforce), project management tools (Notion, Asana), and communication platforms (Slack, Teams).

## How to use

1. The webhook trigger receives natural language date requests from any source.
2. Replace the MCP server URL with your deployed date parser endpoint.
3. Configure timezone preferences for your organization.
4. Customize business rules (working hours, holidays) in the parser settings.
5. Connect calendar integration nodes for automatic event creation.
6. Add notification workflows for scheduling confirmations.

## Use Cases

- **Meeting scheduling**: "Schedule our quarterly review for early Q3"
- **Project management**: "Set deadline 2 weeks after product launch"
- **Event planning**: "Book venue for the weekend before Labor Day"
- **Personal assistant**: "Remind me about dentist appointment next Tuesday morning"
- **International teams**: "Team standup tomorrow morning" (auto-timezone conversion)
- **Seasonal planning**: "Launch campaign in late spring 2025"

## Requirements

- Natural Language Date Parser MCP server (provided code)
- Webhook endpoint or form trigger
- Calendar integration (Google Calendar, Outlook, etc.)
- Optional: Slack/Teams for notifications
- Optional: Database for learning pattern storage

## Customizing this workflow

- **Multi-language support**: Enable auto-detection for global teams.
- **Business rules**: Configure company holidays and working hours.
- **Learning system**: Enable AI learning from user corrections.
- **Integration depth**: Connect to your existing calendar and CRM systems.
- **Confidence thresholds**: Set minimum confidence levels for auto-scheduling (see the sketch at the end of this description).
- **Ambiguity handling**: Route unclear dates to human review or clarification requests.

## Sample Input/Output

Input examples:

- "early next July"
- "2 weeks after Thanksgiving"
- "next Wednesday evening"
- "Q3 2025"
- "mañana por la mañana" (Spanish)
- "first thing Monday"

Rich output:

```json
{
  "parsed": [{
    "start": "2025-07-01T00:00:00Z",
    "end": "2025-07-10T23:59:59Z",
    "timezone": "America/New_York"
  }],
  "confidence": 0.95,
  "method": "custom_rules",
  "business_insights": [{
    "type": "business_warning",
    "message": "Selected date range includes July 4th holiday"
  }],
  "predictions": [{
    "type": "time_preference",
    "suggestion": "You usually schedule meetings at 10 AM"
  }],
  "ambiguities": [],
  "alternatives": [{
    "interpretation": "Early July 2026",
    "confidence": 0.15
  }],
  "performance": {
    "cache_hit": true,
    "response_time": "0.8ms"
  }
}
```

## Why This Workflow is Unique

- **World-class accuracy**: 96%+ success rate on complex expressions
- **AI learning**: Improves over time with user feedback
- **Global ready**: Multi-language and timezone intelligence
- **Business smart**: Enterprise rules and holiday awareness
- **Performance optimized**: Sub-millisecond cached responses
- **Context aware**: Provides confidence scores and alternatives for AI decision-making

Transform your scheduling workflows from rigid form inputs to natural, conversational date requests that your users will love!
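As one way to act on the confidence and ambiguity fields shown above, an n8n Code or IF node could gate auto-scheduling. A minimal sketch, assuming the parser's response has already been attached to the item as `json`; the 0.8 threshold is an arbitrary example, not a recommendation from the template.

```javascript
// Illustrative routing logic for the parser output shown above.
// The threshold and field access are assumptions for this sketch.
const result = $input.first().json;

const needsReview =
  result.confidence < 0.8 ||                                   // low parser confidence
  (result.ambiguities && result.ambiguities.length > 0);       // unresolved ambiguity

return [
  {
    json: {
      route: needsReview ? "human_review" : "auto_schedule",
      start: result.parsed?.[0]?.start,
      end: result.parsed?.[0]?.end,
      timezone: result.parsed?.[0]?.timezone,
      confidence: result.confidence,
    },
  },
];
```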
by David Olusola
## AI Lead Capture System - Complete Setup Guide

### Prerequisites

- n8n instance (cloud or self-hosted)
- Google AI Studio account (free tier available)
- Google account for Sheets integration
- Website with chat widget capability

### Phase 1: Core Infrastructure Setup

**Step 1: Set Up Google AI Studio**

1. Go to Google AI Studio.
2. Create an account or sign in with Google.
3. Navigate to "Get API Key".
4. Create a new API key for your project.
5. Copy and securely store the API key.

Free tier limits: 15 requests/minute, 1 million tokens/month.

**Step 2: Configure Google Sheets**

1. Create a new Google Sheet for lead storage.
2. Add the column headers (exact names): Full Name, Company Name, Email Address, Phone Number, Project Intent/Needs, Project Timeline, Budget Range, Preferred Communication Channel, How they heard about DAEX AI.
3. Copy the Google Sheet ID from the URL (between /d/ and /edit).
4. Ensure the sheet is accessible to your Google account.

**Step 3: Import the n8n Workflow**

1. Open your n8n instance and create a new workflow.
2. Click the "..." menu → Import from JSON.
3. Paste the provided workflow JSON; the workflow will appear with all nodes connected.

### Phase 2: Credential Configuration

**Step 4: Set Up the Google Gemini API**

1. In n8n, go to Credentials → Add Credential.
2. Search for "Google PaLM API".
3. Enter your API key from Step 1 and test the connection.
4. Link it to the "Google Gemini Chat Model" node.

**Step 5: Configure Google Sheets Access**

1. Go to Credentials → Add Credential and select "Google Sheets OAuth2 API".
2. Follow the OAuth flow to authorize your Google account.
3. Test the connection with your sheet and link it to the "Google Sheets" node.

### Phase 3: Workflow Customization

**Step 6: Update Company Information**

1. Open the AI Agent node.
2. In the system message, replace all mentions of the company name and description, service offerings and specializations, the FAQ knowledge base, and typical project timelines and pricing ranges.
3. Adjust the conversation tone to match your brand voice.

**Step 7: Configure Lead Qualification Fields**

1. In the AI Agent system message, modify the required information list: add/remove qualification questions, adjust budget ranges for your services, customize timeline options, and update communication channel preferences.
2. In the Google Sheets node, update the column mappings if you changed fields.

**Step 8: Set Up the Sheet Integration**

1. Open the Google Sheets node and click the Document ID dropdown.
2. Select your lead capture sheet.
3. Verify that all column mappings match your sheet headers and test with sample data.

### Phase 4: Website Integration

**Step 9: Get the Webhook URL**

1. Open the Webhook node in n8n and copy the webhook URL (it starts with your n8n domain).
2. Note: the URL format is https://your-n8n-domain.com/webhook/[unique-id].

**Step 10: Connect Your Chat Widget**

Choose your integration method.

**Option A: Direct JavaScript Integration**

```javascript
// Add to your website
function sendMessage(message, sessionId) {
  fetch('YOUR_WEBHOOK_URL', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: message,
      sessionId: sessionId || 'visitor-' + Date.now()
    })
  })
    .then(response => response.json())
    .then(data => {
      // Display AI response in your chat widget
      displayMessage(data.message);
    });
}
```

**Option B: Chat Platform Webhook**

1. Open your chat platform settings (Intercom, Crisp, etc.).
2. Find the webhook/integration section.
3. Add a webhook URL pointing to your n8n endpoint.
4. Configure it to send the message content and session data.

**Option C: Zapier/Make.com Integration**

1. Create a new Zap/Scenario.
2. Trigger: a new chat message from your platform.
3. Action: HTTP POST to your n8n webhook.
4. Map the message content and session ID.

### Phase 5: Testing & Optimization

**Step 11: Test the Complete Flow**

1. Send a test message through your chat widget.
2. Verify the AI responds appropriately and that conversation context is maintained.
3. Confirm lead data appears in Google Sheets.
4. Test with various conversation scenarios.

**Step 12: Monitor Performance**

- Check n8n execution logs for errors.
- Monitor Google Sheets for data quality.
- Review conversation logs for improvement opportunities.
- Track response times and conversion rates.

**Step 13: Fine-Tune Conversations**

- Analyze real conversation logs.
- Update system prompts based on common questions.
- Add new FAQ knowledge to the AI agent.
- Adjust qualification questions based on lead quality.
- Optimize for your specific customer patterns.

### Phase 6: Advanced Features (Optional)

**Step 14: Add Lead Scoring**

1. Create a new column in Google Sheets for "Lead Score".
2. Update the AI agent to calculate scores based on budget range (higher budget = higher score), timeline urgency (sooner = higher score), and project complexity (complex = higher score).
3. Add conditional formatting in Google Sheets to highlight high-value leads.

**Step 15: Set Up Notifications**

1. Add an email notification node after Google Sheets.
2. Configure it to send alerts for high-priority leads, including lead details and a conversation summary.
3. Set up different notification rules for different lead scores.

**Step 16: Analytics Dashboard**

Connect Google Sheets to Google Data Studio or a similar tool and create a dashboard showing daily lead volume, conversion rates by source, average qualification time, lead quality scores, and the revenue pipeline from captured leads.

### Troubleshooting Common Issues

**AI not responding**

- Check the Google Gemini API key validity.
- Verify the API quota is not exceeded.
- Review n8n execution logs for errors.

**Data not saving to Sheets**

- Confirm Google Sheets permissions.
- Check that column names match.
- Verify the sheet ID is correct.

**Chat widget not connecting**

- Test the webhook URL directly with curl/Postman.
- Verify the JSON format matches the expected structure.
- Check CORS settings for browser-based integrations.

**Conversation context lost**

- Ensure the sessionId is unique per visitor.
- Check the memory node configuration.
- Verify the sessionId is passed consistently.
by Anurag
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Description

This workflow automates document processing and structured table extraction using the Nanonets API. You submit a PDF file via an n8n form trigger or webhook; the workflow then forwards the document to Nanonets, waits for asynchronous parsing to finish, retrieves the results (including header fields and line items/tables), and returns the output as an Excel file. Ideal for automating invoice, receipt, or order data extraction for downstream business use.

## How It Works

1. A document is uploaded (via n8n form or webhook).
2. The PDF is sent to the Nanonets Workflow API for parsing.
3. The workflow waits until processing is complete.
4. The parsed results are fetched.
5. Both top-level fields and any table rows/line items are extracted and restructured (see the sketch at the end of this description).
6. The data is exported to Excel format and delivered to the requester.

## Setup Steps

1. **Nanonets account**: Register for a Nanonets account and set up a workflow for your specific document type (e.g., invoice, receipt).
2. **Credentials in n8n**: Add HTTP Basic Auth credentials in n8n for the Nanonets API (never store credentials directly in node parameters).
3. **Webhook/form configuration**:
   - Option 1: Configure and enable the included n8n Form Trigger node for document uploads.
   - Option 2: Use the included Webhook node to accept external POSTs with a PDF file.
4. **Adjust the workflow**: Update the HTTP nodes to use your credential profile and insert your Nanonets Workflow ID in all relevant nodes.
5. **Test the workflow**: Enable the workflow and try it with a sample document.

## Features

- Accepts documents via the n8n Form Trigger or a direct webhook POST.
- Securely sends files to Nanonets for document parsing (credentials stored in the n8n credentials manager).
- Automatically waits for async processing, polling Nanonets until results are ready.
- Extracts both header data and all table/line items into a tabular format.
- Exports results as an Excel file download.
- Modular nodes allow easy customization or extension.

## Prerequisites

- **Nanonets account** with a workflow configured for your document type.
- **n8n instance** with HTTP Request, Webhook/Form, Code, and Excel/Spreadsheet nodes enabled.
- **Valid HTTP Basic Auth credentials** saved in n8n for API access.

## Example Use Cases

| Scenario | Benefit |
|-----------------------|--------------------------------------------------|
| Invoice Processing | Automated extraction of line items and totals |
| Receipt Digitization | Parse amounts and charges for expense reports |
| Purchase Orders | Convert scanned POs into structured Excel sheets |

## Notes

- Set up credentials in the n8n credentials manager; do not store API keys directly in nodes.
- All configuration and endpoints are explained with inline sticky notes in the workflow editor.
- Easily adaptable to other document types or similar APIs: just modify the endpoints and result mapping.
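To illustrate step 5 of the flow (restructuring header fields and line items into rows), a Code node along these lines could flatten the parsed result before the Excel export. The shape of `parsedResult` and its property names are placeholders, not Nanonets' actual response schema; adapt the mapping to the fields your Nanonets workflow returns.

```javascript
// Illustrative flattening step. Property names are placeholders, not the real Nanonets schema.
const parsedResult = $input.first().json;

// Hypothetical shape: top-level header fields plus an array of line items.
const header = parsedResult.header_fields ?? {};   // e.g. { invoice_number, invoice_date, total }
const lineItems = parsedResult.line_items ?? [];   // e.g. [{ description, quantity, unit_price, amount }]

// Produce one output item per line item, repeating the header columns,
// so the spreadsheet node can write a flat table.
return lineItems.map((row) => ({
  json: {
    ...header,
    ...row,
  },
}));
```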
by Ludovic Bablon
## Who is this template for?

This workflow template is built for SEO specialists and digital marketers looking to uncover keyword opportunities effortlessly. It uses Google's autocomplete magic to help you spot what's trending.

## How it works

Just give it a keyword. The workflow then queries Google and collects all autocomplete suggestions by appending every letter from A to Z to your keyword (see the sketch at the end of this description for the idea behind the query construction).

Output example with the keyword "n8n":

You can sort these keywords and give them to an LLM to produce entity-enriched text.

## Setup instructions

It works right out of the box. 🛠️ However, you may want to tweak the output format to better fit your use case.

**Exporting the keywords**

You can easily add a node to export the keywords in various ways:

- via a webhook
- by email
- as a file (e.g., saved to Google Drive)
- directly to a website

**Adapting the language**

Autocomplete results depend on the selected language. You can change the `&hl=en` parameter in the Google Autocomplete node: replace the "en" part with the language code of your choice. Examples:

- `&hl=fr` → French
- `&hl=es` → Spanish
- `&hl=de` → German
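For reference, the A-to-Z expansion described above could be generated as shown below. This sketch assumes the commonly used suggest endpoint (`suggestqueries.google.com/complete/search` with `client=firefox`), which returns a JSON array of suggestions; the workflow's own HTTP node may use a slightly different URL or client parameter.

```javascript
// Illustrative A-Z expansion against a public Google suggest endpoint.
// The endpoint and client parameter are assumptions; adjust to match the workflow's HTTP node.
async function collectSuggestions(keyword, lang = "en") {
  const letters = "abcdefghijklmnopqrstuvwxyz".split("");
  const all = new Set();

  for (const letter of letters) {
    const q = encodeURIComponent(`${keyword} ${letter}`);
    const url = `https://suggestqueries.google.com/complete/search?client=firefox&hl=${lang}&q=${q}`;
    const res = await fetch(url);
    const [, suggestions] = await res.json(); // response shape: [query, [suggestion, ...]]
    suggestions.forEach((s) => all.add(s));
  }

  return [...all];
}

// collectSuggestions("n8n", "en").then(console.log);
```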