by Jimleuk
This n8n workflow demonstrates how to manage your Qdrant vector store when you need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.

Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.

How it works
A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file create, file changed and file deleted events.
- When a file is created, its contents are uploaded to the vector store.
- When a file is updated, its previous records are replaced (see the sketch after this entry).
- When a file is deleted, the corresponding records are also removed from the vector store.
A simple Question and Answer Chatbot is set up to answer any questions about the bank statements in the system.

Requirements
- A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
- A Qdrant instance to store the records.

Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
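For illustration, here is a minimal sketch of that replace-on-update step using the qdrant-client Python library. It assumes each point stores its source path under a file_path payload key (a hypothetical key name) and that chunking and embedding happen elsewhere.

```python
# Minimal sketch: replace a file's records in Qdrant when the file changes.
# Assumes each point stores the source path under a "file_path" payload key
# (hypothetical key name) and that embeddings are computed elsewhere.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")
COLLECTION = "bank_statements"  # assumed collection name

def replace_file_records(file_path: str, new_points: list[models.PointStruct]) -> None:
    # Drop every point previously created from this file...
    client.delete(
        collection_name=COLLECTION,
        points_selector=models.FilterSelector(
            filter=models.Filter(
                must=[models.FieldCondition(key="file_path", match=models.MatchValue(value=file_path))]
            )
        ),
    )
    # ...then insert the freshly embedded chunks.
    client.upsert(collection_name=COLLECTION, points=new_points)
```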
by Jimleuk
This n8n workflow demonstrates a simple multi-agent setup for the task of competitor research. It showcases how using the HTTP Request tool can reduce the number of nodes needed to achieve a workflow like this.

How it works
- For this template, a source company is defined by the user and sent to Exa.ai to find competitors (a sketch of this call follows this entry).
- Each competitor is then funnelled through 3 AI agents that go out onto the internet and retrieve specific datapoints about the competitor: company overview, product offering and customer reviews.
- Once the agents are finished, the results are compiled into a report which is then inserted into a Notion database.
Check out an example output here: https://jimleuk.notion.site/2d1c3c726e8e42f3aecec6338fd24333?v=de020fa196f34cdeb676daaeae44e110&pvs=4

Requirements
- An OpenAI account for the LLM.
- An Exa.ai account for access to their AI search engine.
- A SerpAPI account for Google search.
- A Firecrawl.dev account for webscraping.
- A Notion.com account for the database to save final reports.

Customising the workflow
- Add additional agents to gather more datapoints such as SEO keywords and metrics.
- Not using Notion? Feel free to swap this out for your own database.
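As a rough illustration of the competitor lookup the HTTP Request tool performs, here is a hedged Python sketch against Exa.ai. The endpoint path and body fields are assumptions about Exa's find-similar API; check Exa's current API reference before relying on them.

```python
# Hedged sketch of the "find competitors" call the HTTP Request node makes.
# The endpoint path and body fields below are assumptions about Exa.ai's
# find-similar API; verify against Exa's current API reference before use.
import os
import requests

EXA_API_KEY = os.environ["EXA_API_KEY"]

def find_competitors(company_url: str, num_results: int = 10) -> list[dict]:
    response = requests.post(
        "https://api.exa.ai/findSimilar",          # assumed endpoint
        headers={"x-api-key": EXA_API_KEY},
        json={"url": company_url, "numResults": num_results},
        timeout=30,
    )
    response.raise_for_status()
    # Each result is expected to carry a title and url for a similar company.
    return response.json().get("results", [])

# Example: competitors = find_competitors("https://n8n.io")
```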
by Jimleuk
This n8n template is one of a 3-part series exploring use cases for clustering vector embeddings:
- Survey Insights
- Customer Insights
- Community Insights
This template demonstrates the Customer Insights scenario, where Trustpilot reviews can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, marketers can save days and even weeks of work breaking down their own or competitor reviews and identifying frequently mentioned positives and negatives.
Sample Output: https://docs.google.com/spreadsheets/d/e/2PACX-1vQ6ipJnXWXgr5wlUJnhioNpeYrxaIpsRYZCwN3C-fFXumkbh9TAsA_JzE0kbv7DcGAVIP7az0L46_2P/pubhtml

How it works
- Trustpilot reviews are scraped for a particular company using the HTTP Request node.
- Reviews are then inserted into a Qdrant collection, carefully tagged with the question and Trustpilot metadata.
- Reviews are fetched and put through a clustering algorithm using the Python Code node. The Qdrant points are returned in clustered groups (see the sketch after this entry).
- Each group is looped over to fetch the payloads of its points, which are fed to the AI agent to summarise and generate insights from.
- The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the marketer.

Requirements
- A Qdrant vector store for storing embeddings.
- An OpenAI account for embeddings and the LLM.

Customising the Template
- Adjust the clustering parameters to whatever makes sense for your data.
- Consider expanding the date range of reviews for insights over common intervals: 3 months, 6 months and YTD.
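Here is a minimal sketch of the clustering step the Python Code node performs, assuming point IDs and embedding vectors have already been fetched from Qdrant. The choice of KMeans and the cluster count are illustrative assumptions, not the template's exact parameters.

```python
# Minimal sketch of the clustering step done in the Python Code node, assuming
# point IDs and embedding vectors have already been fetched from Qdrant.
# KMeans and the cluster count are illustrative assumptions.
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

def cluster_points(point_ids: list[str], vectors: list[list[float]], n_clusters: int = 8) -> dict[int, list[str]]:
    X = np.array(vectors)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=42).fit_predict(X)
    groups: dict[int, list[str]] = defaultdict(list)
    for point_id, label in zip(point_ids, labels):
        groups[int(label)].append(point_id)
    # Each group of point IDs is then looped over to fetch payloads for the AI agent.
    return dict(groups)
```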
by Ayoub
Who is this for?
This workflow is ideal for developers, content creators, or customer support teams looking to automate text-to-speech conversion using OpenAI.

What problem does this solve?
It automates the process of converting text inputs into speech, reducing manual effort and enhancing productivity.

What this workflow does
This workflow triggers when a text input is received via a webhook, converts it into audio using the OpenAI API, and sends the generated speech back through a webhook response. A sketch of the underlying API call follows this entry.

Setup
- Ensure you have an OpenAI API key (you can get one from the OpenAI website).
- Set up the webhook URL and parameters.
- Configure the OpenAI node with your API key (Create New Credentials).
- Set up the Respond to Webhook node.
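For reference, here is a minimal sketch of the text-to-speech call the OpenAI node makes, written with the official openai Python client outside n8n. The model and voice names are commonly documented defaults and may need adjusting for your account.

```python
# Minimal sketch of the text-to-speech call behind this workflow, using the
# official openai Python client outside n8n. Model and voice names are the
# commonly documented defaults; adjust to whatever your account supports.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def text_to_speech(text: str, out_path: str = "speech.mp3") -> str:
    response = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    with open(out_path, "wb") as f:
        f.write(response.content)  # raw MP3 bytes returned by the API
    return out_path

# Example: text_to_speech("Your order has shipped and should arrive on Friday.")
```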
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities.
In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (i.e. images) in an AI request is supported by n8n's Basic LLM node.

How it works
This template is intended to run as 2 parts: first to generate the base screenshots, and next to run the visual regression test which captures fresh screenshots.
- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify, but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, we use a scheduled trigger to fire sometime in the future, which reuses our web scraping service to generate fresh screenshots of our desired webpages.
- Next, we re-download our base screenshots in parallel, and with both old and new captures, we pass these to our LLM node. In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images (a sketch of the equivalent multi-image request follows this entry).
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note, results will vary depending on which LLM you use.
- A final report can be generated using the LLM's output and is uploaded to Linear.

Requirements
- An Apify.com API key for the web screenshotting service.
- Google Drive and Sheets access to store the list of webpages and captures.

Customising this workflow
- Have your own preferred web screenshotting service? Feel free to swap out Apify with your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting up captures into smaller images instead.
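For reference, here is a hedged sketch of an equivalent multi-image request made outside n8n, passing both screenshots as base64 data URLs to a multimodal model. The prompt text and model name are illustrative assumptions.

```python
# Hedged sketch of an equivalent multi-image comparison request made outside
# n8n, passing both screenshots as base64 data URLs to a multimodal model.
# The prompt text and model name are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()

def compare_screenshots(base_path: str, fresh_path: str) -> str:
    def as_data_url(path: str) -> str:
        with open(path, "rb") as f:
            return "data:image/png;base64," + base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare these two screenshots of the same page and list any visual regressions."},
                {"type": "image_url", "image_url": {"url": as_data_url(base_path)}},
                {"type": "image_url", "image_url": {"url": as_data_url(fresh_path)}},
            ],
        }],
    )
    return response.choices[0].message.content
```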
by Polina Medvedieva
This n8n workflow template lets you easily generate comprehensive FAQ (Frequently Asked Questions) content for multiple services (or any items or pages you need to add FAQs to). Simply provide the Google Sheets document containing the items to scrape, and the workflow automatically creates detailed, AI-enhanced FAQ documents.

How it works
- The workflow reads data from a Google Sheets document containing information about different services and categories (again, in your case, whatever objects you need).
- For each service and category, it generates a set of standard questions and answers covering setup, permissions, integrations, use cases, and pricing benefits.
- An AI model (OpenAI's GPT) is used to enhance or complete some of the answers, making the content more comprehensive and natural-sounding.
- The workflow formats the Q&A pairs, combining AI-generated content with predefined answers where applicable.
- It creates a text file (JSON) for each service or category, containing the formatted Q&A pairs.
- The generated files are saved to specific folders in Google Drive, organized by the type of integration (native, credential-only, non-native) or category.
- After processing each service or category, it updates the status in the original Google Sheets document to mark it as completed.

Ideal for
- Marketing teams: rapidly create comprehensive FAQ documents for multiple products or services.
- Customer support: generate consistent and detailed answers for common customer queries.
- Product managers: easily maintain up-to-date documentation as products evolve.
- Content creators: streamline the process of creating informative content about various offerings.

Accounts required
- Google account (for Google Sheets and Google Drive)
- OpenAI API account (for AI-enhanced content generation)
- n8n.io account (for workflow execution)

Set up instructions
- Set up the required credentials for Google Sheets, Google Drive, and OpenAI when you first open the workflow.
- Prepare your Google Sheets document with the service/category information. Here's an example Google Sheet.
- Fill the "Define Sheets" node with your sheets.
- Adjust the folder IDs in the "Prepare Job" node to match your Google Drive structure.
- Configure the OpenAI model settings in the "OpenAI Chat Model" node if needed.
- Test the workflow with a small subset of data before running it on your entire dataset.
- Adjust the questions asked in the "Create your Q&A templates" section.
- After testing, activate your workflow for automated FAQ generation.

🙏 Big, big kudos to Jim Le for his ideas, input and support when building this workflow. Your approach to AI workflows is always super helpful!
by Mario
Purpose
Use a lightweight Voice Interface, for you and your entire organization, to interact with an AI Supervisor: a personal AI Assistant which has access to your custom workflows. You can also connect the supervisor to your already existing Agents.

Demo & Explanation

How it works
- After recording a message in the Vagent App, it gets transcribed and sent, in combination with a session ID, to the registered webhook.
- The Main Agent acts as a router. It interprets the message using the stored chat history (bound to the session ID) and chooses which tool to use to perform the required action. Tools on this level are workflows which contain subordinated Agents.
- Since the Main Agent interprets the original message, the raw input is passed to the Tools/Sub-Agents as a separate parameter.
- Within the Sub-Agents the actual processing takes place. Each of those has its separate chat memory (with a suffix added to the main session ID) to achieve a clear separation of concerns.
- Depending on the required action, an HTTP Request Tool is called. The result is formatted in Markdown and returned to the Main Agent with an additional short prompt, so it does not get interpreted by the Main Agent. Drafts are separated from a short message by added indentation (angle brackets). If some information is missing, no tool is called just yet; instead a message is returned back to the user.
- The Main Agent then outputs the result from the called Sub-Agent. If a draft is included, it gets separated from the spoken output.
- Finally the formatted output is returned as the response to the webhook. The message is split into a spoken and a text version, which enables the App to avoid reading out loud unnecessary information like drafts in this example.
See the full documentation of Vagent: https://vagent.io/docs

Setup
- Import this workflow into your n8n instance.
- Follow the instructions given in the sticky notes on the canvas.
- Set up your credentials. OpenAI can be replaced by another LLM in the workflow, but is required for the App to work. Google Calendar and Notion are required for all scenarios to work.
- Copy the Webhook URL from the Webhook node of the main workflow.
- Download the Vagent App from https://vagent.io
- In the settings, paste your OpenAI API Token, the Webhook URL and the password defined for Header Auth.
Now you can use the App to interact with the Multi-Agent using your voice by tapping the Mic symbol in the App to record your message. To use the chat trigger (for testing) properly, temporarily disable the nodes after the Tools Agent.
by Mark Shcherbakov
Video Guide
I prepared a detailed guide that shows the whole process of building an AI bot, from the simplest version to the most complex, in a template.

Who is this for?
This workflow is ideal for developers, chatbot enthusiasts, and businesses looking to build a dynamic Telegram bot with memory capabilities. The bot leverages OpenAI's assistant to interact with users and stores user data in Supabase for personalized conversations.

What problem does this workflow solve?
Many simple chatbots lack context awareness and user memory. This workflow solves that by integrating Supabase to keep track of user sessions (via user records that store each user's telegram_id and OpenAI thread ID), so the bot can reference past interactions.

What this workflow does
This Telegram bot template connects with OpenAI to answer user queries while storing and retrieving user information from a Supabase database. The memory component ensures that the bot can reference past interactions, making it suitable for use cases such as customer support, virtual assistants, or any application where context retention is crucial.
1. Receive New Message: The bot listens for incoming messages from users in Telegram.
2. Check User in Database: The workflow checks if the user is already in the Supabase database using their telegram_id.
3. Create New User (if necessary): If the user does not exist, a new record is created in Supabase with the telegram_id and a unique record ID.
4. Start or Continue Conversation with OpenAI: Based on the user's context, the bot either creates a new thread or continues an existing one using the stored openai_thread_id.
5. Merge Data: User-specific data and conversation context are merged.
6. Send and Receive Messages: The message is sent to OpenAI, and the response is received and processed.
7. Reply to User: The bot sends OpenAI's response back to the user in Telegram.
A sketch of the check-or-create step is shown after this entry.

Setup
Create a Telegram Bot using the BotFather and obtain the bot token.
Set up Supabase: Create a new project and generate an API key. Create a new table named telegram_users:
create table public.telegram_users (
  id uuid not null default gen_random_uuid (),
  date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
  telegram_id bigint null,
  openai_thread_id text null,
  constraint telegram_users_pkey primary key (id)
) tablespace pg_default;
OpenAI Setup: Create an OpenAI assistant and obtain the assistant ID. Customize your assistant's personality or use cases according to your requirements.
Environment Configuration in n8n: Configure the Telegram, Supabase, and OpenAI nodes with the appropriate credentials. Set up triggers for receiving messages and handling conversation logic. Set up the OpenAI assistant ID in the "OPENAI - Run assistant" node.
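Below is a minimal sketch of the check-or-create step using the supabase-py client outside n8n. The table and column names match the SQL above; the environment variable names are assumptions.

```python
# Minimal sketch of the "check user / create user" step, done with the
# supabase-py client instead of n8n nodes. Table and column names match the
# SQL above; the environment variable names are assumptions.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def get_or_create_user(telegram_id: int) -> dict:
    # Look the user up by their Telegram ID.
    result = supabase.table("telegram_users").select("*").eq("telegram_id", telegram_id).execute()
    if result.data:
        return result.data[0]
    # Not found: insert a new row; openai_thread_id is filled in once a thread is created.
    inserted = supabase.table("telegram_users").insert({"telegram_id": telegram_id}).execute()
    return inserted.data[0]
```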
by Victor Gold
The Telegram Bot Starter template workflow + n8n AI Agent Chatbot provides a foundational setup for creating powerful Telegram bots with n8n. It handles incoming messages, photos, files, and voice notes, making it an excellent starting point for developers looking to create bots for customer engagement, support, or interactive services. Sign up to n8n now → and try it!

Key Features:
- Dynamic Message Handling: Respond to text messages, photos, files, and more (a routing sketch follows this entry).
- Modular Design: Easily integrate additional workflows such as user registration, payment modules, or custom commands.
- Error Handling: Ensure the bot gracefully manages errors and user inputs.
- Extensibility: This workflow is the base for building any Telegram bot. Additional modules, such as a user registration module, payment integration, and user profile management, are available for easy connection to expand the bot's functionality.
✍🏻 Use the Telegram user registration workflow →
💵 Use the Telegram Payment, Invoicing and Refund Workflow for Stars →

Who Can Use This Workflow?
- Developers looking for a quick way to build and customize Telegram bots.
- Businesses and service providers who need customer interaction automation.

Setup Instructions:
- Replace Telegram credentials with your own API credentials.
- Customize responses for different message types (text, photo, file).
- If integrating with external services (like Google Sheets), update the necessary credentials and links.

UPDATES:
🔥 Get the most up-to-date and expanded version →
- June 25: New! AI Agent + setup instructions. Simple setup instructions and examples are included inside the workflow as sticky notes.
- Sep 24: Improved message handler: updated logic to handle various types of messages using Switch (text, photo, file, voice, and callback). Payment processing: added new nodes for sending invoices and handling payments via Telegram.
- Aug 24: Changed processing of system events: "new user" and "user who blocked bot" events.

Please reach out to Victor if you need further assistance with your n8n workflows and automations! Sign up to n8n →, you have to try it!
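For comparison, here is a hedged sketch of the same message-type routing written with python-telegram-bot outside n8n (callback handling omitted for brevity). The handler replies are illustrative only; the template itself implements this branching with an n8n Switch node.

```python
# Hedged sketch of the message-type routing the workflow's Switch node performs,
# expressed with python-telegram-bot outside n8n. Replies are illustrative.
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

async def on_text(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    await update.message.reply_text(f"You said: {update.message.text}")

async def on_photo(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    await update.message.reply_text("Thanks for the photo!")

async def on_document(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    await update.message.reply_text(f"Received file: {update.message.document.file_name}")

async def on_voice(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    await update.message.reply_text("Got your voice note.")

app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, on_text))
app.add_handler(MessageHandler(filters.PHOTO, on_photo))
app.add_handler(MessageHandler(filters.Document.ALL, on_document))
app.add_handler(MessageHandler(filters.VOICE, on_voice))
app.run_polling()
```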
by Arnaud MARIE
Monthly Spotify Track Archiving and Playlist Classification

This n8n workflow allows you to automatically archive your monthly Spotify liked tracks in a Google Sheet, along with playlist details and descriptions. Based on this data, Claude 3.5 is used to classify each track into multiple playlists and add them in bulk.

Who is this template for?
This workflow template is perfect for Spotify users who want to systematically archive their listening history and organize their tracks into custom playlists.

What problem does this workflow solve?
It automates the monthly process of tracking, storing, and categorizing Spotify tracks into relevant playlists, helping users maintain well-organized music collections and keep a historical record of their listening habits.

Workflow Overview
- **Trigger Options**: Can be initiated manually or on a set schedule.
- **Spotify Playlists Retrieval**: Fetches the current playlists and filters them by owner.
- **Track Details Collection**: Retrieves information such as track ID and popularity from the user's library.
- **Audio Features Fetching**: Uses Spotify's API to get audio features for each track.
- **Data Merging**: Combines track information with their audio features.
- **Duplicate Checking**: Filters out tracks that have already been logged in Google Sheets.
- **Data Logging**: Archives new tracks into a Google Sheet.
- **AI Classification**: Uses an AI model to classify tracks into suitable playlists.
- **Playlist Updates**: Adds classified tracks to the corresponding playlists.

Setup Instructions
- Credentials Setup: Make sure you have valid Spotify OAuth2 and Google Sheets access credentials.
- Trigger Configuration: Choose between manual or scheduled triggers to start the workflow.
- Google Sheets Preparation: Set up a Google Sheet with the necessary structure for logging track details.
- Spotify Playlists Setup: Have a diverse range of playlists and exhaustive descriptions (see examples below) ready to accommodate different music genres and moods.

Customization Options
- **Adjust Playlist Conditions**: Modify the AI model's classification criteria to align with your personal music preferences.
- **Enhance Track Analysis**: Incorporate additional audio features or external data sources for more refined track categorization.
- **Personalize Data Logging**: Customize which track attributes to log in Google Sheets based on your archival preferences.
- **Configure Scheduling**: Set a preferred schedule for periodic track archiving, e.g., monthly or weekly.

Cost Estimate
For 300 tracks, the token usage amounts to approximately 60,000 tokens (58,000 for input and 2,000 for completion), costing around 20 cents with Claude 3.5 Sonnet (as of October 2024). A quick sanity check of this figure follows the table below.

Playlists' Description Examples

| Playlist Name | Playlist Description |
|---|---|
| Classique | Indulge in the timeless beauty of classical music with this refined playlist. From baroque to romantic periods, this collection showcases renowned compositions. |
| Poi | Find your flow with this dynamic playlist tailored for poi, staff, and ball juggling. Featuring rhythmic tracks that complement your movements. |
| Pro Sound | Boost your productivity and focus with this carefully selected mix of concentration-enhancing music. Ideal for work or study sessions. |
| ChillySleep | Drift off to dreamland with this soothing playlist of sleep-inducing tracks. Gentle melodies and ambient sounds create a peaceful atmosphere for restful sleep. |
| To Sing | Warm up your vocal cords and sing your heart out with karaoke-friendly tracks. Featuring popular songs, perfect for solo performances or group sing-alongs. |
| 1990s | Relive the diverse musical landscape of the 90s with this eclectic mix. From grunge to pop, hip-hop to electronic, this playlist showcases defining genres. |
| 1980s | Take a nostalgic trip back to the era of big hair and neon with this 80s playlist. Packed with iconic hits and forgotten gems, capturing the energy of the decade. |
| Groove Up | Elevate your mood and energy with this upbeat playlist. Featuring a mix of feel-good tracks across various genres to lift your spirits and get you moving. |
| Reggae & Dub | Relax and unwind with the laid-back vibes of reggae and dub. This playlist combines classic reggae tunes with deep, spacious dub tracks for a chilled-out vibe. |
| Psytrance | Embark on a mind-bending journey with this collection of psychedelic trance tracks. Ideal for late-night dance sessions or intense focus. |
| Cumbia | Sway to the infectious rhythms of Cumbia with this lively playlist. Blending traditional Latin American sounds with modern interpretations for a danceable mix. |
| Funky Groove | Get your body moving with this collection of funk and disco tracks. Featuring irresistible basslines and catchy rhythms, perfect for dance parties. |
| French Chanson | Experience the romance and charm of France with this mix of classic and modern French songs, capturing the essence of French musical culture. |
| Workout Motivation | Push your limits and power through your exercise routine with this high-energy playlist. From warm-up to cool-down, these tracks will keep you motivated. |
| Cinematic Instrumentals | Immerse yourself in a world of atmospheric sounds with this collection of cinematic instrumental tracks, perfect for focus, relaxation, or contemplation. |
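As a quick sanity check of the cost estimate above, assuming Claude 3.5 Sonnet's published rates of $3 per million input tokens and $15 per million output tokens (as of October 2024):

```python
# Sanity check of the cost estimate above. Rates are assumed Claude 3.5 Sonnet
# pricing as of October 2024: $3 per 1M input tokens, $15 per 1M output tokens.
input_tokens, output_tokens = 58_000, 2_000
cost = input_tokens * 3 / 1_000_000 + output_tokens * 15 / 1_000_000
print(f"${cost:.3f}")  # -> $0.204, i.e. roughly 20 cents for ~300 tracks
```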
by Yulia
This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite. The key difference: the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are made outside the AI Agent node, and the results are never passed back to the agent.
This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. This makes the process more secure and efficient, especially in cases where data confidentiality is crucial.

🚀 Setup
To get started with this workflow, you'll need to set up a free MySQL server and import your database (check Steps 1 and 2 in this tutorial). Of course, you can switch MySQL to another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections.
Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server. With this approach, we avoid the need to repeatedly connect to a remote db4free database and fetch the schema every time. As a result, we achieve greater processing speed and efficiency. A sketch of this one-time schema dump is shown after this entry.

🗣️ Chat with your data
- Start a chat: send a message in the chat window.
- The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
- The LangChain AI Agent receives the schema and your input and begins to work.
- The AI Agent generates SQL queries and brief comments based solely on the schema and the user's message.
- An IF node checks whether the AI Agent has generated a query:
  - Yes: the AI Agent passes the SQL query to the next MySQL node for execution.
  - No: you get a direct answer from the Agent without further action.
- The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand.
- Once formatted, you get both the Agent answer and the query result in the chat window.

🌟 Example queries
Try these sample queries to see the schema-driven AI Agent in action:
- Would you please list me all customers from Germany?
- What are the music genres in the database?
- What tables are available in the database?
- Please describe the relationships between tables. (In this example, the AI Agent does not need to create an SQL query.)
And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️

💭 The AI Agent memory node does not store the actual data, as we run SQL queries outside the agent. It contains the database schema, user questions and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
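Here is a minimal sketch of that one-time schema dump: connect to the MySQL chinook database, read each table's CREATE statement, and write everything to a local file. The connection details are placeholders for your own server.

```python
# Minimal sketch of the one-time schema dump the top part of the workflow
# performs: connect to the MySQL chinook database, read each table's CREATE
# statement and save everything to a local file. Connection details are
# placeholders for your own server.
import mysql.connector

conn = mysql.connector.connect(
    host="db4free.net", user="your_user", password="your_password", database="chinook"
)
cursor = conn.cursor()
cursor.execute("SHOW TABLES")
tables = [row[0] for row in cursor.fetchall()]

with open("chinook_schema.sql", "w") as schema_file:
    for table in tables:
        cursor.execute(f"SHOW CREATE TABLE `{table}`")
        # The second column of the result holds the full CREATE TABLE statement.
        schema_file.write(cursor.fetchone()[1] + ";\n\n")

cursor.close()
conn.close()
```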
by Darryn Balanco
This workflow is designed to automate task reminders by retrieving tasks from a Notion database and sending reminders to Slack users. It checks for incomplete tasks from a Notion database and sends a Slack message to the relevant users with the task details and due dates. The automation is scheduled to run every weekday at 9:00 AM, ensuring that users are always reminded of pending tasks.

Who is this for?
This workflow is ideal for teams or individuals who manage their tasks using Notion but rely on Slack for communication. It provides an automated solution for ensuring that tasks in Notion are followed up on, reducing the risk of missing deadlines.

What problem is this workflow solving?
Often, team members need to be reminded of tasks from various platforms. This workflow bridges the gap between task management in Notion and communication in Slack by automatically sending task reminders. It ensures that team members are informed of their pending tasks each morning, helping them stay organized and on top of their work.

What this workflow does
- Triggers every weekday at 9:00 AM: The workflow runs at 9:00 AM, Monday through Friday.
- Fetches tasks from Notion: It retrieves tasks from a Notion database.
- Filters incomplete tasks: The workflow filters tasks that are not marked as "Done."
- Fetches Slack users: It retrieves all Slack users to ensure that the reminders are sent to the correct user.
- Matches tasks to the correct user: It checks the Notion task assignee and matches it with the appropriate Slack user (a matching sketch follows this entry).
- Sends Slack reminders: Sends a Slack direct message to each user with their incomplete tasks and due dates.

Setup
- Connect Notion: You will need to connect your Notion account and specify the database containing tasks.
- Connect Slack: Authenticate with Slack using OAuth to allow the workflow to send messages on your behalf.
- Notion user email mapping: Ensure that the Notion users' email addresses are correctly mapped to their corresponding Notion user profiles.
- Slack user full name mapping: Ensure that the Slack users' full names are correctly mapped to their corresponding Slack user profiles.
- Adjust schedule: If needed, modify the schedule node to run at a different time or frequency.

How to customize this workflow
- **Change the database**: You can adjust the workflow to pull tasks from a different Notion database by modifying the "Get To Dos from Tasks Database" node.
- **Add more users**: The workflow currently supports two users, but you can expand it to support more by adding additional logic in the "Switch for Notion Users Emails" node.
- **Modify the message format**: The Slack message content can be customized further to include more task details or change the message format.

Workflow Summary
This workflow automates sending task reminders from a Notion database to Slack users. By running every weekday morning, it ensures that users receive timely reminders of their incomplete tasks, helping them stay organized and efficient.
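One way to implement the assignee-to-Slack-user matching outside n8n is sketched below with slack_sdk, under the assumption that the Notion assignee's email address is available from the task; the reminder text is illustrative.

```python
# Hedged sketch of the "match task assignee to Slack user and send a DM" step,
# done with slack_sdk outside n8n. It assumes the Notion assignee's email is
# already extracted from the task; the reminder text is illustrative.
import os
from slack_sdk import WebClient

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def send_task_reminder(assignee_email: str, task_name: str, due_date: str) -> None:
    # Find the Slack user whose account email matches the Notion assignee.
    user = slack.users_lookupByEmail(email=assignee_email)["user"]
    # Post a direct message to that user with the task details.
    slack.chat_postMessage(
        channel=user["id"],
        text=f"Reminder: '{task_name}' is still open and due on {due_date}.",
    )
```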