by n8n Team
This workflow sends an OpenAI GPT reply when an email is received from specific email recipients. It then saves the initial email and the GPT response to an automatically generated Google spreadsheet, and subsequent GPT responses are added to the same spreadsheet. Additionally, when feedback is given for any of the GPT responses, it is recorded in the spreadsheet, which can later be used to fine-tune the GPT model.

Prerequisites
- OpenAI credentials
- Google credentials

How it works
This is essentially a two-in-one workflow: it triggers from two different nodes and behaves differently depending on which trigger fired.

The flow triggered from the On email received node:
- Triggers on the On email received node.
- Extracts the email body from the email.
- Generates a response from the email body using the OpenAI node.
- Replies to the email sender using the Send reply to recipient node. A feedback link is included in the email body, which triggers the On feedback given node and is used to fine-tune the GPT model.
- Saves the email body and OpenAI response to a Google Sheet. If the sheet does not exist, it is created.

The flow triggered from the On feedback given node:
- Triggers when a feedback link is clicked in the emailed GPT response.
- Records the feedback, either positive or negative, for that specific GPT response in the Google Sheet.
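Outside n8n, the core of the reply step is a single chat-completion call. The snippet below is a minimal Python sketch of what the OpenAI node does with the extracted email body; the model name and prompt wording are assumptions for illustration, not taken from the workflow itself.

```python
# Minimal sketch: generate a GPT reply from an email body.
# Assumes the official `openai` Python package (v1.x) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def draft_reply(email_body: str) -> str:
    """Return a suggested reply for the incoming email body."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the workflow's OpenAI node may use another
        messages=[
            {"role": "system", "content": "You draft concise, polite email replies."},
            {"role": "user", "content": email_body},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("Hi, could you send me last month's invoice? Thanks!"))
```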
by Ludwig
How it works
This workflow enables companies to provide instant HR support by automating responses to employee queries about policies and benefits:
- Retrieves company policies, benefits, and HR documents from BambooHR.
- Uses AI to analyze and answer employee questions based on company records.
- Identifies the most relevant contact person for escalations.
- Seamlessly integrates with company systems to provide real-time HR assistance.

Set up steps
Estimated time: ~20 minutes
- Connect your BambooHR account to allow policy retrieval.
- Configure AI parameters and access control settings.
- (Optional) Set up the employee lookup tool for personalized responses.
- Test the chatbot to ensure accurate responses and seamless integration.

Benefits
This workflow is perfect for HR teams looking to enhance employee support while reducing manual inquiries. Outperform BambooHR's "Ask BambooHR" chatbot with:
1. Superior specificity of replies to general inquiries
2. More appropriate escalations when responding to sensitive employee concerns
by Iniyavan JC
How it works
This workflow creates a multi-talented AI assistant named Simran that interacts with users via Telegram. It can handle text and voice messages, understand the user's intent, and perform various tasks.

Step 1: Receive & Transcribe Input
The workflow triggers on any new Telegram message. If it's a voice message, it uses AssemblyAI to transcribe it into text; otherwise, it processes the incoming text directly.

Step 2: Understand User Intent
Using a Large Language Model (LLM), the workflow analyzes the user's message to determine their goal, categorizing it as general chat, a request to generate an image, a command to set a reminder, or a request to remember a specific piece of information.

Step 3: Fetch Context & Route
The assistant retrieves past conversation summaries from a MongoDB database to maintain context. Based on the user's intent, the workflow routes the task to the appropriate path.

Step 4: Execute the Task
- Chat: Generates a response using an AI agent whose personality can be toggled between a standard assistant and a "Girlfriend Mode." It also analyzes the user's mood to tailor the response.
- Generate Image: Creates a detailed prompt and uses an image generation API to create and send a picture.
- Set Reminder: Parses the natural language request, creates an event in Google Calendar and a task in Google Tasks, and sends a confirmation.
- Remember Info: Saves specific user-provided information to a dedicated memory collection in MongoDB.

Step 5: Respond and Save Memory
The final output (text, voice message, or image) is sent back to the user on Telegram. The workflow then summarizes the interaction and saves it to the database to ensure continuity in future conversations.

Set up steps
Estimated set up time: 20-30 minutes.
- Configure Credentials: You will need to add credentials for several services in your n8n instance:
  - Telegram (Bot API Token)
  - AssemblyAI (API Key)
  - MongoDB
  - Google (for Calendar, Tasks, Sheets, and Natural Language API)
  - A Large Language Model (the workflow uses Google Gemini but can be adapted)
  - An image generation service (the workflow uses the Together.xyz API)
- Set up External Services:
  - Ensure your MongoDB instance has two collections: user_memory and memory_auto.
  - Create a Google Sheet to manage the "Girlfriend Mode" status for different users.
  - Ensure edge-tts is installed on the machine running your n8n instance for the text-to-speech functionality.
- Customize Nodes: Review the nodes with hardcoded IDs, such as Google Tasks and Google Sheets, and update them with your specific Task List ID and Sheet ID.

The sticky notes inside the workflow provide more detailed instructions for specific nodes and segments.
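As a rough illustration of the intent-detection step (Step 2), the snippet below classifies a message into the four intents named above. It is a hedged Python sketch using the OpenAI API rather than the workflow's Gemini node; the prompt and label names are assumptions for illustration only.

```python
# Hedged sketch: classify a Telegram message into one of four intents.
# Assumes the `openai` Python package (v1.x); the workflow itself uses Google Gemini.
from openai import OpenAI

INTENTS = ["chat", "generate_image", "set_reminder", "remember_info"]
client = OpenAI()

def classify_intent(message: str) -> str:
    prompt = (
        "Classify the user's message into exactly one of these intents: "
        + ", ".join(INTENTS)
        + ". Reply with the intent name only.\n\nMessage: "
        + message
    )
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed stand-in for the Gemini model used in the workflow
        messages=[{"role": "user", "content": prompt}],
    )
    label = result.choices[0].message.content.strip().lower()
    return label if label in INTENTS else "chat"  # fall back to plain chat

print(classify_intent("Remind me to call the dentist tomorrow at 9am"))
```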
by Rod
Telegram Personal Assistant with Long-Term Memory & Note-Taking

This n8n workflow transforms your Telegram bot into a powerful personal assistant that handles voice, photo, and text messages. The assistant uses AI to interpret messages, save important details as long-term memories or notes in a Baserow database, and recall information for future interactions.

🌟 How It Works

Message Reception & Routing
- Telegram Integration: The workflow is triggered by incoming messages on your Telegram bot.
- Dynamic Routing: A switch node inspects the message to determine whether it's voice, text, or photo (with captions) and routes it for the appropriate processing.

Content Processing
- Voice Messages: Audio files are retrieved and sent to an AI transcription node to convert spoken words into text.
- Text Messages: Text is directly captured and prepared for analysis.
- Photos: If an image is received, the bot fetches the file (and caption, if provided) and uses an AI-powered image analysis node to extract relevant details.

AI-Powered Agent & Memory Management
The core AI agent (powered by GPT-4o-mini) processes the incoming message along with any previous conversation history stored in PostgreSQL memory buffers.
- Long-Term Memory: When a message contains personal or noteworthy information, the assistant uses a dedicated tool to save this data as a long-term memory in Baserow.
- Note-Taking: For specific instructions or reminders, the assistant saves concise notes in a separate Baserow table.
The AI agent follows defined rules to decide which details are saved as memories and which are saved as notes.

Response Generation
After processing the message and updating memory/notes as needed, the AI agent crafts a contextual and personalized response. The response is sent back to the user via Telegram, ensuring smooth and natural conversation flow.

🚀 Key Features
- **Multimodal Input:** Seamlessly handles voice, photo (with captions), and text messages.
- **Long-Term Memory & Note-Taking:** Uses a Baserow database to store personal details and notes, enhancing conversational context over time.
- **AI-Driven Contextual Responses:** Leverages an AI agent to generate personalized, context-aware replies based on current input and past interactions.
- **User Security & Validation:** Incorporates validation steps to verify the user's Telegram ID before processing, ensuring secure and personalized interactions.
- **Easy Baserow Setup:** Comes with a clear setup guide and sample configurations to quickly integrate Baserow for managing memories and notes.

🔧 Setup Guide

Telegram Bot Setup
- Create your bot via BotFather and obtain the Bot Token.
- Configure the Telegram webhook in n8n with your bot's token and URL.

Baserow Database Configuration
- Memory Table: Create a workspace titled "Memories and Notes". Set up a table (e.g., "Memory Table") with at least two fields: Memory (long text) and Date Added (US date format with time).
- Notes Table: Duplicate the Memory Table and rename it to "Notes Table". Change the first field's name from "Memory" to "Notes".

n8n Workflow Import & Configuration
- Import the workflow JSON into your n8n instance.
- Update credentials for Telegram, Baserow, OpenAI, and PostgreSQL (for memory buffering) as needed.
- Adjust node settings if you need to customize AI agent prompts or memory management rules.

Testing & Deployment
- Test your bot by sending various message types (text, voice, photo) to confirm that the workflow processes them correctly, updates Baserow, and returns the appropriate response.
- Monitor logs to ensure that memory and note entries are correctly stored and retrieved.

✨ Example Interactions
- **Voice Message Processing:** User sends a voice note requesting a reminder. Bot Response: "Thanks for your message! I've noted your reminder and saved it for future reference."
- **Photo with Caption:** User sends a photo with the caption "Save this recipe for dinner ideas." Bot Response: "Got it! I've saved this recipe along with the caption for you."
- **Text Message for Memory Saving:** User: "I love hiking on weekends." Bot Response: "Noted! I'll remember your interest in hiking."
- **Retrieving Information:** User asks: "What notes do I have?" Bot Response: "Here are your latest notes: [list of saved notes]."

🛠️ Resources & Next Steps
- **Telegram Bot Configuration:** Telegram BotFather Guide
- **n8n Documentation:** n8n Docs
- **Community Forums:** Join discussions and share your customizations!

This workflow not only streamlines message processing but also empowers users with a personal AI assistant that remembers details over time. Customize the rules and responses further to fit your unique requirements and enjoy a more engaging, intelligent conversation experience on Telegram!
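For reference, the "save a long-term memory" tool used by this assistant boils down to creating a row in a Baserow table over its REST API. The Python sketch below is a hedged illustration; the table ID, token, and field names are hypothetical placeholders, and the hosted-Baserow base URL is an assumption (self-hosted instances use their own domain).

```python
# Hedged sketch: store a "memory" row in Baserow via its REST API.
# Table ID, token, and field names are hypothetical placeholders.
from datetime import datetime, timezone
import requests

BASEROW_TOKEN = "YOUR_BASEROW_DATABASE_TOKEN"
TABLE_ID = 12345  # hypothetical Memory Table ID

def save_memory(text: str) -> dict:
    response = requests.post(
        f"https://api.baserow.io/api/database/rows/table/{TABLE_ID}/?user_field_names=true",
        headers={"Authorization": f"Token {BASEROW_TOKEN}"},
        json={
            "Memory": text,
            "Date Added": datetime.now(timezone.utc).isoformat(),
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

save_memory("User loves hiking on weekends.")
```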
by LukaszB
AI Twitter Content Machine – Write, Refine & Publish Tweets on Autopilot

This workflow is perfect for creators, solopreneurs, and personal brands who want to consistently publish bold, high-performing content on X (Twitter) — without writing a single line themselves. After a one-time setup, it automatically generates tweet ideas, writes in your voice, evaluates post quality, avoids duplicates, and publishes directly to Twitter. All approvals and rewrites are handled in a conversational loop powered by OpenAI, Discord, and Google Sheets. Whether you're building a personal brand or growing your startup audience, this tool will help you stay active, edgy, and relevant — with zero friction.

How it works
- Loads your brand brief from a sub-workflow.
- Generates a tweet idea aligned with your tone.
- Checks Google Sheets to ensure the idea hasn't been used.
- Writes the post.
- Evaluates it using a feedback sub-workflow — if the quality score is below 0.7, it rewrites the post.
- Refines tone and voice using a Rewriter Agent that mimics your past content (from a Google Sheet).
- Sends the final post to a Discord channel for manual approval.
- On approval, posts directly to Twitter (X) and logs it to Google Sheets (History and Examples tabs).

Set up steps
Detailed setup descriptions are kept in sticky notes inside the workflow.

Key benefits
- No burnout, no block – Stop spending energy thinking what to tweet. AI handles everything.
- Style-matching – Posts sound like you, not a generic robot. Based on your real writing.
- Fast & scalable – Publish once or five times a day — it's up to you.
- Avoid duplicates – Each idea is checked against your post history.
- Human-in-the-loop – You approve final posts via Discord. No rogue tweets.

Integrations required
- n8n
- OpenAI API
- Google Sheets
- Twitter (OAuth2)
- Discord (for approval)
- Notion (optional for brand brief storage)
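The evaluate-and-rewrite loop described above can be pictured as: score the draft, then regenerate while the score stays below 0.7. The Python sketch below illustrates that control flow only; the prompts, scoring scale, and model are assumptions and not taken from the workflow's sub-workflows.

```python
# Hedged sketch of the write -> score -> rewrite loop (threshold 0.7, per the description).
# Prompts and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content.strip()

def write_and_refine(idea: str, max_rounds: int = 3) -> str:
    draft = llm(f"Write a bold, concise tweet about: {idea}")
    for _ in range(max_rounds):
        # Naive parse of the judge's reply; a production flow would validate this.
        score = float(llm("Rate this tweet's quality from 0 to 1. Reply with a number only.\n" + draft))
        if score >= 0.7:  # quality gate from the workflow description
            break
        draft = llm("Rewrite this tweet to be punchier and clearer:\n" + draft)
    return draft

print(write_and_refine("why solo founders should automate content"))
```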
by Andrey
⚠️ DISCLAIMER: This workflow uses the HDW LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

Overview
This workflow automates the entire LinkedIn lead generation process, from finding prospects that match your Ideal Customer Profile (ICP) to sending personalized messages. It uses AI to analyze lead data, score potential clients, and prioritize your outreach efforts.

Key Features
- **AI-Driven Lead Generation**: Convert ICP descriptions into LinkedIn search parameters
- **Comprehensive Data Enrichment**: Analyze company websites, LinkedIn posts, and news
- **Intelligent Lead Scoring**: Prioritize leads based on AI analysis of intent signals
- **Automated Outreach**: Connect with prospects and send personalized messages

Requirements
- Self-hosted n8n instance with the HDW LinkedIn community node installed
- OpenAI API access (for GPT-4o)
- Google Sheets access
- HDW API key (available at app.horizondatawave.ai)
- LinkedIn account

Setup Instructions
1. Install Required Nodes: Ensure the HDW LinkedIn community node is installed on your n8n instance. Command: npm install n8n-nodes-hdw (or use this instruction)
2. Configure Credentials:
   - **OpenAI**: Add your OpenAI API key
   - **Google Sheets**: Set up Google account access
   - **HDW LinkedIn**: Configure your API key from horizondatawave.ai
3. Set Up Google Sheet: Create a new Google Sheet with the following columns (or copy the template): Name, URN, URL, Headline, Location, Current company, Industry, etc. The workflow will populate these columns automatically.
4. Customize Your ICP: Use chat to provide the AI Agent with your Ideal Customer Profile. Example: "Target marketing directors at SaaS companies with 50-200 employees"
5. Adjust Scoring Criteria: Modify the lead scoring prompt in the "Company Score Analysis" node to match your specific product/service, and tune the evaluation criteria based on your unique business needs.
6. Configure Message Templates: Update the HDW LinkedIn Send Message node with your custom message.

How It Works
1. ICP Translation: AI converts your ICP description into LinkedIn search parameters
2. Lead Discovery: Workflow searches LinkedIn using these parameters
3. Data Collection: Results are saved to Google Sheets
4. Enrichment: System collects additional data about each lead: company website analysis, the lead's LinkedIn posts, the company's LinkedIn posts, and recent company news
5. Intent Analysis: AI analyzes all data to identify buying signals
6. Lead Scoring: Leads are scored on a 1-10 scale based on likelihood of interest
7. Connection Requests: Top-scoring leads receive connection requests
8. Follow-Up: When connections are accepted, automated messages are sent

Customization
- **Search Parameters**: Adjust the AI Agent prompt to refine your target audience
- **Scoring Criteria**: Modify scoring prompts to highlight indicators relevant to your product
- **Message Content**: Update message templates for personalized outreach
- **Schedule**: Configure when connection requests and messages are sent

Rate Limits & Best Practices
- LinkedIn has connection request limits (approximately 100-200 per week)
- The workflow includes safeguards to avoid exceeding these limits
- Consider spacing your outreach for better response rates

Note: Always use automation tools responsibly and in accordance with LinkedIn's terms of service.
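As a rough picture of the lead-scoring step, the snippet below asks GPT-4o to return a 1-10 score from enriched lead data. It is a hedged Python sketch; the prompt wording and the structure of the lead record are assumptions, not the workflow's actual "Company Score Analysis" prompt.

```python
# Hedged sketch: score a lead 1-10 from enriched data using GPT-4o.
# The lead fields and prompt are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def score_lead(lead: dict) -> int:
    prompt = (
        "You score B2B leads for outreach priority. Given the data below, reply with "
        "a single integer from 1 (poor fit) to 10 (strong buying signals).\n\n"
        + json.dumps(lead, indent=2)
    )
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parse; a production flow would validate the model's reply.
    return int(out.choices[0].message.content.strip())

lead = {
    "headline": "Marketing Director at a 120-person SaaS company",
    "recent_posts": ["We're overhauling our outbound stack this quarter."],
    "company_news": ["Raised Series B to scale go-to-market."],
}
print(score_lead(lead))
```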
by Jimleuk
This n8n template builds a simple automation to ensure no JIRA issues stay unassigned for more than a week, preventing them from falling through the cracks. It uses AI to perform search tasks against a Supabase vector store. This can be one way to reduce the amount of manual work in managing the issue backlog for busy teams, with little effort.

How it works
This template contains 2 separate flows which run continuously via schedule triggers.

The first populates our Supabase vector store with issues resolved within the last day. This keeps the vector store up-to-date and relevant for the purpose of finding similar issues. It does this by pulling the latest resolved issues from JIRA and populating the Supabase vector store with carefully chosen metadata, which will come in handy later.

The second flow watches for stale, unassigned issues for the purpose of auto-assigning them to a relevant team member. It does this by comparing the stale issue against our vector store of resolved issues, with the goal of identifying which team member has the best context for the issue. In a busy team, this may net a few team members as possible candidates. Therefore, we introduce additional logic to count each team member's assigned, in-progress issues, so as not to overload our busiest members. The team member with the fewest assigned issues is presumed to have the most capacity and is therefore assigned. A comment is left in the issue to notify the team member that they've been auto-assigned due to the age of the issue.

How to use
- Modify the project and interval parameters to match those of your use-case and team members.
- Add additional criteria before assigning to a team member, e.g. department, as required.

Requirements
- OpenAI for LLM
- JIRA for Issue Management
- Supabase for Vector Store

Customising this workflow
- Not using JIRA or Supabase? The beauty of these AI templates is that the components are entirely interchangeable with competing services. Try Linear and Qdrant instead!
- The auto-assigning logic is simplified in this template. Expand the criteria as required for your team and organisation, e.g. it might be a good idea to pull in annual leave information from your HR system to avoid assigning issues to someone who is currently on holiday!
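To make the "least-loaded candidate" step concrete, the sketch below counts each candidate's in-progress issues with a JQL search and picks the person with the fewest. It is a hedged Python illustration against JIRA Cloud's REST API v3 search endpoint as documented at the time of writing; the site URL, credentials, and account IDs are placeholders.

```python
# Hedged sketch: choose the least-loaded candidate by counting their
# in-progress issues via JIRA Cloud's REST search endpoint (v3).
# Site URL, email, API token, and account IDs are placeholders.
import requests

JIRA_SITE = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "YOUR_API_TOKEN")

def open_issue_count(account_id: str) -> int:
    jql = f'assignee = "{account_id}" AND statusCategory = "In Progress"'
    resp = requests.get(
        f"{JIRA_SITE}/rest/api/3/search",
        params={"jql": jql, "maxResults": 0},  # only the total count is needed
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]

def pick_assignee(candidate_account_ids: list[str]) -> str:
    return min(candidate_account_ids, key=open_issue_count)

print(pick_assignee(["5b10a2844c20165700ede21g", "5b10ac8d82e05b22cc7d4ef5"]))
```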
by Davide
This workflow automates the creation and management of a Retrieval-Augmented Generation (RAG) system using Qdrant as a vector store and Google Drive as the document source. It enables full or incremental updates to documents in the Qdrant vector database and integrates with a chatbot using Google Gemini for question answering.

Benefits
- **Efficient RAG Setup**: Seamlessly integrates OpenAI, Qdrant, and Google Drive to create a scalable RAG pipeline.
- **Single File Update**: You can replace the vector representation of a single file without reprocessing the entire collection—ideal for maintaining document freshness.
- **Flexible File Source**: Works with Google Drive, allowing document management and updates from a familiar interface.

How It Works
This workflow is designed to create a Retrieval-Augmented Generation (RAG) system using Qdrant as a vector store and Google Drive as a document source. It consists of four main phases:
- **Collection Setup**: Creates or clears a Qdrant collection to store vectorized documents. Configures the collection with cosine distance metrics and other parameters.
- **Document Processing**: Retrieves files from a specified Google Drive folder. Downloads and processes each file (text extraction, chunking, and embedding using OpenAI). Stores the embeddings in Qdrant for vector search.
- **Single-File Update**: Allows updating or deleting a specific file in the Qdrant collection by referencing its Google Drive ID. Re-embeds the file and updates the vector store.
- **RAG Querying**: Uses a chat trigger to receive user questions. Retrieves relevant documents from Qdrant using vector similarity. Generates answers using Google Gemini as the language model.

Set Up Steps
1. Configure Qdrant: Replace QDRANTURL and COLLECTION in the "Create collection" and "Clear collection" HTTP nodes. Ensure Qdrant API credentials are correctly set in the credentials section.
2. Google Drive Integration: Specify the Google Drive folder ID in the "Get files" node. Ensure Google Drive OAuth credentials are configured.
3. OpenAI and Gemini Keys: Add OpenAI API credentials for embeddings (used in the "Embeddings OpenAI" nodes). Configure Google Gemini credentials for the chat model.
4. Single-File Update: Set the file_id in the "Edit Fields3" node to target a specific Google Drive file for updates.
5. Testing: Trigger the workflow manually to populate the Qdrant collection. Use the chat interface to test RAG responses.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
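For orientation, the "Create collection" HTTP node corresponds to a single call against Qdrant's REST API, and document vectors are then added with a points upsert. The Python sketch below shows both calls under stated assumptions: the URL, API key, collection name, and 1536-dimension vector size (typical of OpenAI embedding models) are placeholders you would adapt.

```python
# Hedged sketch: create a Qdrant collection (cosine distance) and upsert one point.
# QDRANT_URL, API key, collection name, and vector size are placeholders.
import requests

QDRANT_URL = "https://YOUR-QDRANT-INSTANCE:6333"
HEADERS = {"api-key": "YOUR_QDRANT_API_KEY"}
COLLECTION = "drive_documents"

# 1. Create (or recreate) the collection with cosine distance.
requests.put(
    f"{QDRANT_URL}/collections/{COLLECTION}",
    headers=HEADERS,
    json={"vectors": {"size": 1536, "distance": "Cosine"}},  # size must match your embedding model
    timeout=30,
).raise_for_status()

# 2. Upsert a document chunk with its Google Drive file ID in the payload,
#    so a single file can later be deleted or re-embedded by filtering on file_id.
requests.put(
    f"{QDRANT_URL}/collections/{COLLECTION}/points",
    headers=HEADERS,
    json={
        "points": [
            {
                "id": 1,
                "vector": [0.01] * 1536,  # stand-in for a real OpenAI embedding
                "payload": {"file_id": "GOOGLE_DRIVE_FILE_ID", "text": "chunk text..."},
            }
        ]
    },
    timeout=30,
).raise_for_status()
```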
by Tanay Agarwal
Who is this for?
This workflow is ideal for HR teams, startups, and enterprises that want to handle employee interactions through WhatsApp and automate responses using an LLM (OpenAI) and intelligent routing.

What problem is this workflow solving?
Managing WhatsApp messages manually can be time-consuming and error-prone. This workflow solves that by:
- Auto-classifying messages using an LLM
- Routing them to the right AI-powered agent
- Automating leave approvals, attendance, HR FAQs, complaints, and candidate shortlisting
- Delivering final responses interactively via WhatsApp

What this workflow does
1. WhatsApp Trigger captures incoming messages
2. LLM Classification analyzes message intent and outputs a category (1–5)
3. Switch Node routes the message to the correct agent:
   - 1 → Leave Agent
   - 2 → HR FAQ Chatbot
   - 3 → Attendance Agent
   - 4 → Complaint/Request Agent
   - 5 → Shortlisting Agent
4. Each agent performs specific tasks using tools like:
   - Google Sheets (fetch dept head emails, JD/applicants, logs)
   - Google Calendar (schedule meetings)
   - Vector Search (for policy embeddings)
   - OpenAI (transcription, classification, chatbot)
5. Final WhatsApp Response node sends updates and interactive options to the user

Setup
- Connect the WhatsApp API (e.g., via Twilio or WhatsApp Business Cloud API)
- Configure OpenAI credentials
- Set up Google Sheets with: employee data, JD and applicants info, policy documents (for embedding)
- Prepare Google Calendar access
- Create a vector store with embedded company policy docs

How to customize this workflow to your needs
- Update the LLM prompt to suit your company's categories or expand to more intents
- Replace sample sheets with your organization's actual data
- Train your own policy embeddings if needed
- Add/modify agents (e.g., Payroll Bot, IT Support Bot) by cloning an existing pattern
- Adjust the Switch Node if you add more classifications

With this modular and intelligent setup, you can turn your WhatsApp into a smart HR & operations assistant powered by AI, accessible 24/7.
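The classify-then-switch pattern above maps cleanly onto a dictionary dispatch. The Python sketch below is a hedged illustration of that routing logic only; the agent handlers are placeholder stubs, and the category numbering follows the list above.

```python
# Hedged sketch of the category -> agent routing performed by the Switch node.
# Agent handlers are placeholder stubs for illustration only.
from typing import Callable

AGENTS: dict[int, Callable[[str], str]] = {
    1: lambda msg: "Leave Agent: leave request logged.",
    2: lambda msg: "HR FAQ Chatbot: here is the relevant policy...",
    3: lambda msg: "Attendance Agent: attendance updated.",
    4: lambda msg: "Complaint Agent: complaint recorded and escalated.",
    5: lambda msg: "Shortlisting Agent: candidate shortlist prepared.",
}

def route(category: int, message: str) -> str:
    # Unknown categories fall back to the HR FAQ chatbot.
    handler = AGENTS.get(category, AGENTS[2])
    return handler(message)

print(route(1, "I'd like to take leave next Friday."))
```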
by Jimleuk
This n8n workflow automates the process of parsing and extracting data from PDF invoices. With this workflow, accounts and finance people can realise huge time and cost savings in their busy schedules.

Read the blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/

How it works
- This workflow watches an email inbox for incoming invoices from suppliers.
- It downloads the attached PDFs and processes them through a third-party service called LlamaParse. LlamaParse is specifically designed to handle and convert complex PDF data structures, such as tables, to markdown. Markdown is easy for LLM models to process, so the data extraction by our AI agent is more accurate and reliable.
- The workflow exports the extracted data from the AI agent to Google Sheets once the job completes.

Requirements
- The criteria of the email trigger must be configured to capture emails with attachments.
- The Gmail label "invoice synced" must be created before using this workflow.
- A LlamaIndex.ai account to use the LlamaParse service.
- An OpenAI account to use GPT for AI work.
- Google Sheets to save the output of the data extraction process, although this can be replaced to suit your needs.

Customizing this workflow
This workflow uses Gmail and Google Sheets, but these can easily be swapped out for equivalent services such as Outlook and Excel. Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
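Outside n8n, the LlamaParse step can be approximated with the llama-parse Python package, which uploads a PDF and returns markdown. The sketch below is a hedged example under stated assumptions: verify the parameter names against the current llama-parse documentation, and note that the extraction prompt and model here are illustrative rather than the workflow's own.

```python
# Hedged sketch: convert an invoice PDF to markdown with LlamaParse, then
# extract fields with GPT. Requires `pip install llama-parse openai`;
# verify parameter names against the current llama-parse docs.
from llama_parse import LlamaParse
from openai import OpenAI

parser = LlamaParse(api_key="YOUR_LLAMA_CLOUD_API_KEY", result_type="markdown")
documents = parser.load_data("invoice.pdf")  # returns parsed document objects
invoice_markdown = "\n\n".join(doc.text for doc in documents)

client = OpenAI()
extraction = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[{
        "role": "user",
        "content": "Extract supplier, invoice number, date, and total as JSON "
                   "from this invoice:\n\n" + invoice_markdown,
    }],
)
print(extraction.choices[0].message.content)
```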
by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works
This template shows how to calculate a workflow evaluation metric: retrieved document relevance (i.e. whether the information retrieved from a vector store is relevant to the question). The workflow takes a question and checks whether the information retrieved to answer it is relevant.

To run this workflow, you need to insert documents into a vector data store so that they can be retrieved by the agent to answer questions. You can do this by running the top part of the workflow once.

The main workflow works as follows:
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one.
- We make sure that the agent outputs the list data from the tools that it used.
- If we're evaluating (i.e. the execution started from the evaluation trigger), we calculate the relevance metric using AI to compare the retrieved documents with the question.
- We pass this information back to n8n as a metric.
- If we're not evaluating, we skip calculating the metric to reduce cost.
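The relevance metric itself is typically an LLM-as-judge comparison between the question and the retrieved documents. The Python sketch below shows one way to compute such a score under stated assumptions (a 0-1 scale and GPT-4o-mini as the judge); it illustrates the idea and is not the exact prompt or scale used in the template.

```python
# Hedged sketch: score retrieved-document relevance with an LLM judge (0 to 1).
# Prompt, scale, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def relevance_score(question: str, retrieved_docs: list[str]) -> float:
    docs_block = "\n---\n".join(retrieved_docs)
    prompt = (
        "On a scale from 0 (irrelevant) to 1 (fully relevant), how relevant are "
        "these retrieved documents to the question? Reply with a number only.\n\n"
        f"Question: {question}\n\nDocuments:\n{docs_block}"
    )
    out = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parse of the judge's reply; validate in a real evaluation run.
    return float(out.choices[0].message.content.strip())

print(relevance_score(
    "What is our refund policy for annual plans?",
    ["Refunds on annual plans are pro-rated within 30 days of purchase."],
))
```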
by Don Jayamaha Jr
Stay on top of the latest crypto news and market sentiment instantly, all inside Telegram! This workflow aggregates articles from the top crypto news sources, filters for your topic of interest, and summarizes key news and market sentiment using GPT-4o AI. Ideal for crypto traders, investors, analysts, and market watchers needing fast, intelligent news briefings.

> 💬 Just type a coin name (e.g., "Bitcoin", "Solana", "DeFi") into your Telegram AI Agent—and get a smart news digest.

How It Works
1. Telegram Bot Trigger: The user sends a keyword or question (e.g., "Ethereum") to the Telegram AI Agent.
2. Keyword Extraction (AI-Powered): An AI agent identifies the main topic for better targeting.
3. News Aggregation: Pulls articles from 9 major crypto news RSS feeds: Cointelegraph, Bitcoin Magazine, CoinDesk, Bitcoinist, NewsBTC, CryptoPotato, 99Bitcoins, CryptoBriefing, Crypto.news
4. Filtering: Finds and merges articles relevant to the user's keyword.
5. AI Summarization: GPT-4o generates a 3-part summary: news summary, market sentiment analysis, and a list of article links.
6. Telegram Response: Sends a structured, easy-to-read digest back to the user.

🔍 What You Can Do with This Workflow
🔹 Summarize breaking news for any crypto project or keyword
🔹 Monitor real-time market sentiment on Bitcoin, DeFi, NFTs, and more
🔹 Stay ahead of FUD, bullish trends, and major news events
🔹 Quickly brief yourself or your team via Telegram
🔹 Use it as a foundation for more advanced crypto alert bots

✅ Example User Inputs
✅ "Bitcoin" → Latest Bitcoin news and sentiment summary
✅ "Solana" → Updates on Solana projects, price movements, and community trends
✅ "NFT" → Aggregated news about NFT markets and launches
✅ "Layer 2" → Insights on Optimism, Arbitrum, and other L2s

🛠️ Setup Instructions
1. Create a Telegram Bot: Use @BotFather and obtain the Bot Token.
2. Configure Telegram Credentials in n8n: Add your bot token under Telegram API Credentials.
3. Configure OpenAI API: Add your OpenAI credentials for GPT-4o access.
4. Update Telegram Send Node: In the Telegram Send node, replace the placeholder chatId with your real Telegram user or group chat ID.
5. Deploy and Test: Start chatting with your bot, e.g., "Ethereum" or "DeFi".

📌 Workflow Highlights
- **9 major crypto news sources combined**
- **Smart keyword matching** with AI query parsing
- **Summarized insights** in a human-readable format
- **Reference links** included for deeper reading
- **Instant delivery** via Telegram

🚀 Get ahead of the crypto market—automate your news and sentiment monitoring with AI inside Telegram!
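The aggregation and filtering steps amount to reading the RSS feeds and keeping items whose title or summary mentions the keyword. The Python sketch below is a hedged illustration using the feedparser library; the feed URLs are examples of the named sources and may differ from the ones configured in the workflow.

```python
# Hedged sketch: pull a couple of crypto RSS feeds and keep keyword matches.
# Requires `pip install feedparser`; feed URLs are illustrative examples.
import feedparser

FEEDS = [
    "https://cointelegraph.com/rss",   # example URL, verify before use
    "https://www.newsbtc.com/feed/",   # example URL, verify before use
]

def fetch_matching_articles(keyword: str) -> list[dict]:
    keyword = keyword.lower()
    matches = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if keyword in text:
                matches.append({"title": entry.get("title"), "link": entry.get("link")})
    return matches

for article in fetch_matching_articles("bitcoin"):
    print(article["title"], "-", article["link"])
```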