by James Carter
This n8n workflow automatically fetches trending news articles based on your chosen country, category, and keyword — then enriches the data with AI-powered business insights before posting a concise summary to Slack. Ideal for sales teams, executives, marketers, or anyone who wants fast, actionable news briefings directly in their Slack workspace.

Who it's for

Executives, analysts, sales teams, and marketing professionals who want curated, AI-enhanced news summaries tailored to business opportunities, risks, and trends — delivered automatically to Slack.

How it works / What it does

1. A Schedule Trigger runs on a daily, weekly, or custom frequency.
2. It queries the NewsAPI to retrieve top headlines by country, category, or keyword.
3. Headlines are formatted and enriched with your configured query context.
4. The AI model (GPT-4) analyzes the articles and summarizes key insights, categorizing them as Opportunities, Risks, or Trends.
5. Finally, the summarized insights are posted directly into a Slack channel of your choice.

How to set up

1. Set your schedule frequency in the Schedule Trigger node.
2. Configure your preferred country, category, and keyword in the Inject Config node.
3. Add your NewsAPI key inside the Fetch News Articles node (see the request sketch at the end of this section).
4. Connect your Slack credentials in the Post to Slack node.
5. Optional: adjust the AI prompt for more tailored analysis.

Requirements

- A NewsAPI account to fetch headlines.
- An OpenAI API key for GPT-4 summarization.
- A Slack workspace and connected credentials via n8n.

How to customize the workflow

- Change the country, category, or keyword in the Inject Config node to focus on specific markets or sectors.
- Adjust the AI prompt in the GPT node to prioritize certain insights, such as ESG factors, M&A activity, or market sentiment.
- Extend the workflow to log results to Google Sheets, email summaries, or send SMS alerts.
- Replace the Schedule Trigger with a Webhook if you want to trigger summaries on demand.

This template is designed to be modular, making it easy to adapt for competitive intelligence, investment tracking, or industry news curation.
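For reference, here is a minimal sketch of the request behind the Fetch News Articles node, assuming NewsAPI's standard top-headlines endpoint; the parameter values stand in for whatever the Inject Config node supplies.

```javascript
// Minimal sketch of the NewsAPI call, assuming the standard top-headlines endpoint.
const params = new URLSearchParams({
  country: 'us',        // from Inject Config
  category: 'business', // from Inject Config
  q: 'acquisition',     // optional keyword from Inject Config
  apiKey: 'YOUR_NEWSAPI_KEY',
});

const res = await fetch(`https://newsapi.org/v2/top-headlines?${params}`);
const { articles } = await res.json();
// Each article carries title, description, url, and source.name, which is the
// context handed on to the GPT-4 analysis step.
```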
by n8n Team
This workflow digests mentions of n8n on Reddit and sends them as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n or not, and then summarise it into a bullet-point sentence.

How it works

1. Get posts from Reddit that might be about n8n.
2. Filter for the most relevant posts: posted in the last 7 days, more than 5 upvotes, and original content (see the sketch at the end of this section).
3. Check if the post is actually about n8n.
4. If it is, categorise it with OpenAI.

Bear in mind: the workflow only considers the first 500 characters of each Reddit post. So if n8n is mentioned after that point, the post won't register as being about n8n.io.

Next steps

- Improve the OpenAI Summary node prompt to return cleaner summaries.
- Extend to more platforms/sources - e.g. it would be really cool to monitor larger Slack communities in this way.
- Do some classification on the type of user to highlight users likely to be in our ICP.
- Separate out a list of data sources (Reddit, Twitter, Slack, Discord, etc.), extract messages from there and have them go to a sub-workflow for classification and summarisation.
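Here is a rough Code-node sketch of the relevance pre-filter, assuming standard Reddit API fields (created_utc, ups, is_self, selftext) on each incoming post item:

```javascript
// Rough sketch of the pre-filter, assuming standard Reddit API fields.
const oneWeekAgo = Date.now() / 1000 - 7 * 24 * 60 * 60;

return $input.all()
  .filter(({ json: post }) =>
    post.created_utc > oneWeekAgo && // posted in the last 7 days
    post.ups > 5 &&                  // more than 5 upvotes
    post.is_self                     // original (text) content
  )
  .map((item) => ({
    // only the first 500 characters are sent on to OpenAI
    json: { ...item.json, snippet: (item.json.selftext ?? '').slice(0, 500) },
  }));
```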
by Lucas Peyrin
How it works

Ever wonder how to make your workflows smarter? How to handle different types of data in different ways? This template is a hands-on tutorial that teaches you the three most fundamental nodes for controlling the flow of your automations: Merge, IF, and Switch.

To make it easy to understand, we use a simple package sorting center analogy:

- **Data Items** are packages on a conveyor belt.
- **The Merge Node** is where multiple conveyor belts combine into one.
- **The IF Node** is a simple sorting gate with two paths (e.g., "Fragile" or "Not Fragile").
- **The Switch Node** is an advanced sorting machine that routes packages to many different destinations.

This workflow takes you on a step-by-step journey through the sorting center:

1. Creating Packages: Three different "packages" (two letters and one parcel) are created using Set nodes.
2. Merging: The first Merge node combines all three packages onto a single conveyor belt so they can be processed together.
3. Simple Sorting: An IF node checks if a package is fragile. If true, it's sent down one path; if false, it's sent down another.
4. Re-Grouping: After being processed separately, another Merge node brings the packages back together. This "Split > Process > Merge" pattern is a critical concept in n8n!
5. Advanced Sorting: A Switch node inspects each package's destination and routes it to the correct output (London, New York, Tokyo, or a Default bin).

By the end, you'll see how all packages have been correctly sorted, and you'll have a solid understanding of how to build intelligent, branching logic in your own workflows. (A Code-node sketch of the same routing logic follows this section.)

Set up steps

Setup time: 0 minutes! This template is a self-contained tutorial and requires zero setup. There are no credentials or external services to configure.

Simply click the "Execute Workflow" button. Follow the flow from left to right, clicking on each node to see its output and reading the detailed sticky notes to understand what's happening at each stage.
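If you are curious how the same routing would look in code, here is a rough Code-node equivalent of the IF and Switch logic, assuming each "package" item carries fragile and destination fields (names follow the sorting-center analogy, not the template's actual Set nodes):

```javascript
// Rough Code-node equivalent of the tutorial's IF + Switch routing.
const bins = { fragile: [], notFragile: [], london: [], newYork: [], tokyo: [], defaultBin: [] };

for (const { json: pkg } of $input.all()) {
  // IF node: a two-way sorting gate
  (pkg.fragile ? bins.fragile : bins.notFragile).push(pkg);

  // Switch node: multi-way routing with a default output
  switch (pkg.destination) {
    case 'London':   bins.london.push(pkg); break;
    case 'New York': bins.newYork.push(pkg); break;
    case 'Tokyo':    bins.tokyo.push(pkg); break;
    default:         bins.defaultBin.push(pkg);
  }
}

return [{ json: bins }];
```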
by Adrian Bent
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings on Indeed via Apify, automatically retrieves the resulting dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that info for outreach. All you need to do is run the Apify actor; the database will then update with the processed data.

Benefits:

- Complete job search automation - a webhook monitors the Apify actor, which sends an event and starts the process
- AI-powered filter - uses ChatGPT to analyze content/context, identify company goals, and filter based on the job description
- Smart duplicate prevention - automatically tracks processed job listings in a database to avoid redundancy
- Multi-platform intelligence - combines Indeed scraping with web research via Tavily to enrich each listing
- Niche focus - processes content from multiple niches (currently 6, hardcoded), but can be changed to fit other niches (just adjust the prompt of the "job filter" node)

How It Works:

Indeed Job Discovery:
- Search Indeed, apply filters for relevant job listings, then copy the results URL and use it in Apify.
- Uses Apify's Indeed job scraper to scrape job listings from the URL of interest.
- Apify automatically scrapes the information, stores it in a dataset, and triggers the n8n webhook.

Oncoming Data Processing:
- Loops over 500 items (can be changed) with a batch size of 55 items (can be changed) to avoid running into API timeouts.
- Multiple filters ensure all fields are scraped and meet our required metrics (a company website must exist, and the number of employees must be under 250).
- Duplicate job listings are removed from the incoming batch before processing.

Job Analysis & Filter:
- An additional filter removes any job listing from the incoming batch if it already exists in the Google Sheets database.
- All new job listings are then passed to ChatGPT, which uses the job post/description to determine whether it is relevant to us.
- Each relevant job gets a new field, "verdict", which is either true or false, and we keep the ones where verdict is true.

Enrich & Update Database:
- Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in the Google Sheet with information about the job listing, the company, and a decision maker at that company.
- Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch to start filtering again until all job listings are processed.

A sketch of the filtering and duplicate-prevention logic appears at the end of this section.

Required Google Sheets Database Setup:

Before running this workflow, create a Google Sheets database with these exact column headers:

- jobUrl - unique identifier for job listings
- title - position title
- descriptionText - description of the job listing
- hiringDemand/isHighVolumeHiring - are they hiring at high volume?
- hiringDemand/isUrgentHire - are they hiring with high urgency?
- isRemote - is this job remote?
- jobType/0 - job type: in person, remote, part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - column for holding custom icebreakers for each job listing (not completed in this workflow; I will upload another that does this, called "Personalized IJSFE")
- scrapedCeo - CEO name collected from the Apify scraper
- email - email listed on the job listing
- companyName - name of the company that posted the job
- companyDescription - description of the company that posted the job
- companyLinks/corporateWebsite - website of the company that posted the job
- companyNumEmployees - number of employees the company listed that they have
- location/country - location of where the job is to take place
- salary/salaryText - salary on the job listing

Setup Instructions:

1. Create a new Google Sheet with these column headers in the first row.
2. Name the sheet whatever you please.
3. Connect your Google Sheets OAuth credentials in n8n.
4. Update the document ID in the workflow nodes.

The merge logic relies on the jobUrl column to prevent duplicate processing, so this structure is essential for the workflow to function correctly.

Feel free to reach out for additional help or clarification at my gmail: terflix45@gmail.com and I'll get back to you as soon as I can.

Set Up Steps:

1. Configure Apify Integration:
   - Sign up for an Apify account and obtain an API key.
   - Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL).
   - Use the Apify node with Resource: Dataset, Operation: Get items, and use your API key as your credentials.
2. Set Up AI Services:
   - Add OpenAI API credentials for job filtering.
   - Add Tavily API credentials for company research.
   - Set up appropriate rate limiting for cost control.
3. Database Configuration:
   - Create the Google Sheets database with the column structure above.
   - Connect Google Sheets OAuth credentials.
   - Configure the merge logic for duplicate detection.
4. Content Filtering Setup:
   - Customize the AI prompts for your specific niche, requirements, or interests.
   - Adjust the filtering criteria to fit your needs.
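For illustration, here is a sketch of the filtering and duplicate-prevention step as a Code node, using the column names from the schema above. The 'Get Sheet Rows' node name is hypothetical; it stands in for whatever node reads your existing Google Sheet rows.

```javascript
// Sketch of the dedupe + metric filters, assuming the Google Sheets column names above.
const existingUrls = new Set(
  $('Get Sheet Rows').all().map((row) => row.json.jobUrl)
);

const seen = new Set();
return $input.all().filter(({ json: job }) => {
  const isDuplicate = existingUrls.has(job.jobUrl) || seen.has(job.jobUrl);
  seen.add(job.jobUrl);
  return (
    !isDuplicate &&
    Boolean(job['companyLinks/corporateWebsite']) && // a company website must exist
    Number(job.companyNumEmployees) < 250            // small-company threshold
  );
});
```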
by Adam Janes
This workflow gives you the ability to reply to a long email with a voice note, rather than having to type everything out. ChatGPT will format your audio response and create an email draft for you.

How it works

When a new email arrives in your inbox, the workflow checks if it needs a response, and if it does, it sends a message to you on Telegram via a VoiceEmailer bot. When you reply to that message with an audio message, the second part of this workflow is triggered. It checks if the message is in the right format, transcribes the audio, and creates a draft response that shows up in the same email thread.

Set up steps

1. Add your credentials for Gmail and OpenAI.
2. Create a Telegram bot following the instructions here. Connect your Telegram credentials so the workflow will use your bot.
3. Turn on the workflow, and message the bot from your Telegram.
4. Find the Chat ID in the Executions tab of your workflow, and enter it as a variable.
by Rizqi Pratama Ramadhani
Automated Financial Tracker: Telegram Invoices to Notion with AI Summaries & Reports

Tired of manually logging every expense? Streamline your financial tracking with this powerful n8n workflow! Snap a photo of your invoice in Telegram, and let AI (powered by Google Gemini) automatically extract the details, record them in your Notion database, and even send you a quick summary. Plus, get scheduled weekly reports with charts to visualize your spending. Automate your finances, save time, and gain better insights with this easy-to-use template! Transform your expense tracking from a chore into an automated breeze. Try it out!

Overview:

This workflow revolutionizes how you track your finances by automating the entire process from invoice capture to reporting. Simply send a photo of an invoice or receipt to a designated Telegram chat, and this workflow will:

1. Extract Data with AI: Utilize Google Gemini's capabilities to perform OCR on the image, understand the content, and extract key details like item name, quantity, price, total, date, and even attempt to categorize the expense.
2. Store in Notion: Automatically log each extracted transaction into a structured Notion database.
3. Instant Feedback: Send a summary of the processed transaction back to your Telegram chat.
4. Scheduled Reporting: Generate and send a visual summary of your expenses (e.g., weekly spending by category) as a chart to your preferred Telegram chat or group.

This workflow is perfect for individuals, freelancers, or small teams looking to effortlessly manage their expenses without manual data entry.

Key Features & Benefits:

- **Effortless Expense Logging:** Just send a picture - no more typing!
- **AI-Powered Data Extraction:** Leverages Google Gemini for intelligent invoice processing.
- **Centralized Data in Notion:** Keep all your financial records neatly organized in a Notion database.
- **Automated Categorization:** AI helps in categorizing your expenses (e.g., Food & Beverage, Transportation).
- **Instant Summaries:** Get immediate confirmation and a summary of what was recorded.
- **Visual Reporting:** Receive scheduled charts (e.g., bar charts of spending by category) directly in Telegram.
- **Customizable:** Easily adapt the workflow to your specific needs, categories, and reporting preferences.
- **Time-Saving:** Drastically reduces the time spent on manual financial administration.

How It Works (Workflow Breakdown):

The workflow is divided into two main parts:

Part 1: Real-time Invoice Processing & Logging (## Auto Notes Transaction with Telegram and Notion database)

1. Telegram Trigger (Telegram Trigger | When recive photo): Activates when a new photo is sent to the configured Telegram chat.
2. Get Photo Info (Get Info Photo from telegram chat): Retrieves the details of the received photo.
3. Get Image Info (Get Image Info): Prepares the image data.
4. AI Data Extraction (Google Gemini Chat Model & Basic LLM Chain): The image data is sent to the Google Gemini Chat Model. A specific prompt instructs the AI to extract details (date, ID, name, quantity, price, total, category, tax) in a JSON array format and provide a summary message. The categories include Food & Beverage, Transportation, Utilities, Shopping, Healthcare, Entertainment, Housing, and Education. (An illustrative example of the expected output appears after the node list below.)
5. Parse AI Output (Parse To your object | Table): Structures the AI's JSON output for easier handling.
6. Split Transactions (Split Out | data transaction): If an invoice contains multiple items, this node splits them into individual records.
7. Record to Notion (Record To Notion Database): Each transaction item is added as a new page/entry in your specified Notion database, mapping fields like Name, Quantity, Price, Total, Category, Date, and Tax.
8. Send Telegram Summary (Sendback to chat and give summarize text): The summary message generated by the AI is sent back to the original Telegram chat.

Part 2: Scheduled Financial Reporting (## Schedule report to send on chanel or private message)

1. Schedule Trigger (Schedule Trigger | for send chart report): Runs at a predefined interval (e.g., every week) to generate reports.
2. Get Recent Data from Notion (Get Recent Data from Notions): Fetches transaction data from the Notion database for a specific period (e.g., the past week).
3. Summarize Data (Summarize Transaction Data): Aggregates the data, for example by summing up the 'total' amount for each 'category'.
4. Prepare Chart Data (Convert Data to JSON chart payload): Transforms the summarized data into a JSON format suitable for generating a chart (e.g., labels for categories, data for spending amounts). (See the payload sketch after the node list below.)
5. Generate Chart (Generate Chart): Uses the QuickChart node to create a visual chart (e.g., a bar chart) from the prepared data.
6. Send Chart to Telegram (Send Chart Image to Group or Private Chat): Sends the generated chart image to a specified Telegram chat ID or group.

Nodes Used (Key Nodes):

- **Telegram Trigger & Telegram Node:** For receiving images and sending messages/images.
- **Google Gemini Chat Model (Langchain):** For AI-powered OCR and data extraction from invoices.
- **Basic LLM Chain (Langchain):** To interact with the language model using specific prompts.
- **Output Parser Structured (Langchain):** To structure the output from the language model.
- **Notion Node:** For reading from and writing to your Notion databases.
- **Schedule Trigger:** To automate the reporting process.
- **Summarize Node:** To aggregate data for reports.
- **Code Node:** Used here to format data for the chart.
- **QuickChart Node:** For generating charts.
- **SplitOut Node:** To process multiple items from a single invoice.
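To make the AI step concrete, here is an illustrative shape for the JSON the Gemini prompt is asked to return. The field names follow the description above; the exact keys and values are assumptions, not copied from the actual prompt.

```javascript
// Illustrative AI output: a summary message plus one record per invoice line item.
const aiOutput = {
  message: 'Recorded 2 items from Highlands Coffee, total 95,000.',
  transactions: [
    { date: '2024-05-01', id: 'INV-0042', name: 'Latte',     quantity: 1, price: 50000, total: 50000, category: 'Food & Beverage', tax: 0 },
    { date: '2024-05-01', id: 'INV-0042', name: 'Croissant', quantity: 1, price: 45000, total: 45000, category: 'Food & Beverage', tax: 0 },
  ],
};
```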
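And here is a sketch of what the "Convert Data to JSON chart payload" Code node might return, assuming the Summarize node emits rows shaped like { category, sum_total }; QuickChart accepts standard Chart.js configuration objects.

```javascript
// Sketch of the chart payload builder, assuming rows of { category, sum_total }.
const rows = $input.all().map((item) => item.json);

return [{
  json: {
    type: 'bar',
    data: {
      labels: rows.map((r) => r.category), // e.g. 'Food & Beverage'
      datasets: [{ label: 'Spending (past week)', data: rows.map((r) => r.sum_total) }],
    },
  },
}];
```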
Setup Instructions:

1. Credentials:
   - Telegram: Create a Telegram bot and get its API token. You'll also need the Chat ID where you'll send invoices and where reports should be sent.
   - Google Gemini (PaLM) API: You'll need an API key for Google Gemini.
   - Notion: Create a Notion integration and get the API key. Create a Notion database with properties corresponding to the data you want to save (e.g., Name (Title), Quantity (Number), Price (Number), Total (Number), Category (Select), Date (Text or Date), Tax (Number)). Share this database with your Notion integration.
2. Configure Telegram Trigger:
   - Add your Telegram Bot API token.
   - When you first activate the workflow or test the trigger, send /start to your bot in the chat you want to use for sending invoices. n8n will then capture the Chat ID.
3. Configure Google Gemini Node (Google Gemini Chat Model):
   - Select or add your Google Gemini API credentials.
   - Review the prompt in the Basic LLM Chain node and adjust if necessary (e.g., date format, categories).
4. Configure Notion Nodes:
   - Record To Notion Database: Select or add your Notion API credentials. Select your target Notion Database ID. Map the properties from the workflow (e.g., ={{ $json.name }}) to your Notion database columns.
   - Get Recent Data from Notions: Select or add your Notion API credentials. Select your target Notion Database ID. Adjust the filter if needed (default is "past_week").
5. Configure Telegram Node for Reports (Send Chart Image to Group or Private Chat):
   - Select or add your Telegram Bot API token.
   - Enter the Chat ID for the group or private chat where you want to receive the reports.
6. Configure Schedule Trigger (Schedule Trigger | for send chart report):
   - Set your desired schedule (e.g., every Monday at 9 AM).
7. Test:
   - Send an image of an invoice to your Telegram bot and check if the data appears in Notion and if you receive a summary message.
   - Wait for the scheduled report or manually trigger it to test the reporting functionality.

Sticky Note Text for Your n8n Template:

(These are suggestions. You would place these directly into the sticky notes within your n8n workflow editor.)

Existing High-Level Sticky Notes:
- ## Auto Notes Transaction with Telegram and Notion database
- ## Schedule report to send on chanel or private message

Specific Sticky Notes to Add:

- **On Telegram Trigger | When recive photo:** 📸 INVOICE INPUT 📸 Bot listens here for photos of your receipts/invoices. Ensure your Telegram Bot API token is set in credentials.
- **Near Google Gemini Chat Model & Basic LLM Chain:** 🤖 AI MAGIC HAPPENS HERE 🧠 Image is sent to Google Gemini for data extraction. Check 'Basic LLM Chain' to customize the AI prompt (e.g., categories, output format). Requires Google Gemini API credentials.
- **On Parse To your object | Table:** ✨ STRUCTURING AI DATA ✨ Converts the AI's text output into a usable JSON object. Check the schema if you modify the AI prompt significantly.
- **On Record To Notion Database:** 📝 SAVING TO NOTION 📝 Extracted transaction data is saved here. Configure with your Notion API key & Database ID. Map fields correctly to your database columns!
- **On Sendback to chat and give summarize text:** 💬 TRANSACTION SUMMARY 💬 Sends a confirmation message back to the user in Telegram with a summary of the recorded expense.
- **On Schedule Trigger | for send chart report:** 🗓️ REPORTING SCHEDULE 🗓️ Set how often you want to receive your spending report (e.g., weekly, monthly).
- **On Get Recent Data from Notions:** 📊 FETCHING DATA FOR REPORT 📊 Retrieves transactions from Notion for the report period. Default: "Past Week". Adjust filter as needed. Requires Notion API credentials & Database ID.
- **On Summarize Transaction Data:** ➕ SUMMARIZING SPENDING ➕ Aggregates your expenses, usually by category, to prepare for the chart.
- **On Convert Data to JSON chart payload (Code Node):** 🎨 PREPARING CHART DATA 🎨 This Code node formats the summarized data into the JSON structure needed by QuickChart.
- **On Generate Chart (QuickChart Node):** 📈 GENERATING VISUAL REPORT 📈 Creates the actual chart image based on your spending data. You can customize the chart type (bar, pie, etc.) here.
- **On Send Chart Image to Group or Private Chat:** 📤 SENDING REPORT TO TELEGRAM 📤 Delivers the generated chart to your chosen Telegram chat/group. Set the correct Chat ID and Bot API token.
- **General Sticky Note (place where relevant):** 🔑 CREDENTIALS NEEDED 🔑 Remember to set up API keys/tokens for: Telegram, Google Gemini, Notion.
- **General Sticky Note (place where relevant):** 💡 CUSTOMIZE ME! 💡 Adjust AI prompts for better accuracy. Change the Notion database structure. Modify report frequency and content.
by Davide
This workflow allows users to generate AI videos using the cheaper Google Veo3 Fast model, save them to Google Drive, generate optimized titles with GPT-4o, and automatically upload them to YouTube and TikTok with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3 Fast, TikTok, and YouTube.

Benefits of this Workflow

- 💡 **No-Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
- ⚙️ **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
- 🧠 **AI-Powered Creativity**: Generates engaging YouTube and TikTok titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
- 📁 **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
- 📈 **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
- 📈 **TikTok Ready**: Automatically uploads to TikTok with correct metadata, saving time and boosting visibility.
- 🧪 **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
- 🔒 **API-First**: Utilizes secure API-based communication for all services.

How It Works

1. Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
2. Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty).
3. Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet. The "Create Video" node sends a request to the Fal.run API (Google Veo3 Fast) to generate a video based on the prompt (see the request sketch at the end of this section).
4. Status Check: The "Wait 60 sec." node pauses execution for 60 seconds. The "Get status" node checks the video generation status. If the status is "COMPLETED," the workflow proceeds; otherwise, it waits again.
5. Video Processing: The "Get Url Video" node fetches the video URL. The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube and TikTok title. The "Get File Video" node downloads the video file.
6. Upload & Update: The "Upload Video" node saves the video to Google Drive. One "HTTP Request" node uploads the video to YouTube via the Upload-Post API, and another uploads it to TikTok via the Upload-Post API. The "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link.

Set Up Steps

1. Google Sheet Setup: Create a Google Sheet with columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node.
2. API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (Header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key and configure the YouTube and TikTok "HTTP Request" nodes (Header: Authorization: Apikey YOUR_API_KEY).
3. YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
4. Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on Linkedin.
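As a rough illustration of the "Create Video" request: the Authorization header format (Key YOURAPIKEY) comes from the setup notes above, but the endpoint path and body field names are assumptions; consult fal.ai's documentation for the current Veo3 Fast model id and schema.

```javascript
// Hedged sketch of the "Create Video" request; the endpoint path is illustrative.
const res = await fetch('https://queue.fal.run/fal-ai/veo3/fast', {
  method: 'POST',
  headers: {
    Authorization: 'Key YOUR_FAL_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    prompt: $json.PROMPT,     // PROMPT column of the current sheet row
    duration: $json.DURATION, // DURATION column of the current sheet row
  }),
});
const { request_id } = await res.json(); // status is polled later by the "Get status" node
```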
by Julian Kaiser
How it works

Many users have asked in the support forum about different methods to analyze images and PDF documents with Google Gemini AI in n8n. This workflow answers that question by demonstrating five different approaches:

1. Single image with auto binary passthrough - the simplest approach, using the AI Agent's automatic binary handling
2. Multiple images with predefined prompts - for customized analysis with different instructions per image
3. Native n8n item-by-item processing - for handling multiple items using n8n's standard workflow paradigm
4. PDF analysis via direct API - for document analysis and text extraction
5. Image analysis via direct API - for direct control over API parameters

Each method has advantages depending on your specific use case, data volume, and customization needs.

Set up steps

Setup time: ~5-10 minutes

You'll need:

- A Google Gemini API key
- n8n with HTTP Request and AI Agent nodes

Important: For the HTTP Request nodes making direct API calls to Gemini (Methods 3, 4, and 5), you'll need to set up Query Authentication with your Gemini API key. Add a parameter named "key" with your API key value in the Query Auth section of these nodes. (A sketch of the underlying request is shown below.)

I'll update this if I find better ways. Also let me know if you know other ways. Eager to learn :)
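For reference, here is a minimal sketch of the kind of direct Gemini request these HTTP Request nodes make, with the API key passed as the "key" query parameter; the model name and prompt are illustrative.

```javascript
// Minimal sketch of a direct Gemini generateContent call (Methods 4 and 5).
const base64Data = '<base64-encoded image or PDF bytes>'; // placeholder

const res = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=YOUR_GEMINI_API_KEY',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: 'Describe this document in detail.' },
          // use mime_type 'application/pdf' for PDF analysis, an image type for images
          { inline_data: { mime_type: 'image/jpeg', data: base64Data } },
        ],
      }],
    }),
  }
);
const data = await res.json();
// The generated text lands at data.candidates[0].content.parts[0].text
```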
by Muhammad Ashar
How It Works – Your AI Marketing Team in Action

This automation acts as your AI-powered content and image marketing assistant inside Telegram. With just a voice note or text message, it can:

- 🧠 Understand your request – Whether you send a message or speak into Telegram, it transcribes and processes your input using GPT-4.
- 🎨 Create and edit content – Based on what you say, it can generate:
  - ✍️ Blog posts
  - 💼 LinkedIn posts
  - 🎬 Faceless videos
  - 🖼️ AI-generated images
  - 🪄 Edits to existing images
  - 🔎 Searches through your image database
- 💬 Reply directly in Telegram – It sends you back the result, whether that's a post, image, or video link, without leaving the app.
- 🧩 Built using LangChain agent logic – It intelligently chooses the right tool from a suite of sub-workflows like "Create Image", "Blog Post", or "Video" using agent reasoning.

🛠️ Setup Steps – Get Started in Minutes!

⌛ Time Estimate: ~15-30 minutes (faster if you're familiar with n8n)

1. 🔗 Import the Template Pack: 📥 Download and install these workflows into your n8n: Create Image, Edit Image, Search Images, Blog Post, LinkedIn Post, Video.
2. 🔐 Add Required Credentials:
   - Telegram Bot 🤖
   - OpenRouter AI 🧠
   - Tavily API (for smart research) 📚
   - ElevenLabs 🎙️ (for voice in videos)
   - PiAPI & Runway 🎞️ (for faceless videos)
3. 🧩 Link the Tools to the Agent Node: Make sure the "Marketing Team Agent" is connected to each of the content creation tools as shown in the workflow.
4. 📎 Download Templates & Logs:
   - 🧾 Google Sheets Log Template (to track output)
   - 🖼️ Creatomate Template (optional, for enhanced image control; shared in the Skool group)

📌 Pro Tip: All detailed step-by-step setup instructions are included as sticky notes inside the n8n canvas. Just follow along!
by Einar César Santos
🧠 Long-Term Memory System for AI Agents with Vector Database

Transform your AI assistants into intelligent agents with persistent memory capabilities. This production-ready workflow implements a sophisticated long-term memory system using vector databases, enabling AI agents to remember conversations, user preferences, and contextual information across unlimited sessions.

🎯 What This Template Does

This workflow creates an AI assistant that never forgets. Unlike traditional chatbots that lose context after each session, this implementation uses vector database technology to store and retrieve conversation history semantically, providing truly persistent memory for your AI agents.

🔑 Key Features

- **Persistent Context Storage**: Automatically stores all conversations in a vector database for permanent retrieval
- **Semantic Memory Search**: Uses advanced embedding models to find relevant past interactions based on meaning, not just keywords
- **Intelligent Reranking**: Employs Cohere's reranking model to ensure the most relevant memories are used for context
- **Structured Data Management**: Formats and stores conversations with metadata for optimal retrieval
- **Scalable Architecture**: Handles unlimited conversations and users with consistent performance
- **No Context Window Limitations**: Effectively bypasses LLM token limits through intelligent retrieval

💡 Use Cases

- **Customer Support Bots**: Remember customer history, preferences, and previous issues
- **Personal AI Assistants**: Maintain user preferences and conversation continuity over months or years
- **Knowledge Management Systems**: Build accumulated knowledge bases from user interactions
- **Educational Tutors**: Track student progress and adapt teaching based on history
- **Enterprise Chatbots**: Maintain context across departments and long-term projects

🛠️ How It Works

1. User Input: Receives messages through n8n's chat interface
2. Memory Retrieval: Searches the vector database for relevant past conversations
3. Context Integration: The AI agent uses retrieved memories to generate contextual responses
4. Response Generation: Creates informed responses based on historical context
5. Memory Storage: Stores new conversation data for future retrieval

📋 Requirements

- **OpenAI API Key**: For embeddings and chat completions
- **Qdrant Instance**: Cloud or self-hosted vector database
- **Cohere API Key**: Optional, for enhanced retrieval accuracy
- **n8n Instance**: Version 1.0+ with LangChain nodes

🚀 Quick Setup

1. Import this workflow into your n8n instance
2. Configure credentials for OpenAI, Qdrant, and Cohere
3. Create a Qdrant collection named 'ltm' with 1024 dimensions (see the sketch below)
4. Activate the workflow and start chatting!
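Step 3 can be done with a single call to Qdrant's REST API. A minimal sketch, assuming Cosine distance (the workflow's actual distance metric isn't specified here; match it to your embedding model):

```javascript
// Create the 'ltm' collection with 1024-dimensional vectors via Qdrant's REST API.
await fetch('https://YOUR-QDRANT-HOST:6333/collections/ltm', {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'api-key': 'YOUR_QDRANT_API_KEY', // omit for an unauthenticated local instance
  },
  body: JSON.stringify({
    vectors: { size: 1024, distance: 'Cosine' },
  }),
});
```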
📊 Performance Metrics

- **Response Time**: 2-3 seconds average
- **Memory Recall Accuracy**: 95%+
- **Token Usage**: 50-70% reduction compared to full context inclusion
- **Scalability**: Tested with 100k+ stored conversations

💰 Cost Optimization

- Uses GPT-4o-mini for optimal cost/performance balance
- Implements efficient chunking strategies to minimize embedding costs
- Reranking can be disabled to save on Cohere API costs
- Average cost: ~$0.01 per conversation

📖 Learn More

For a detailed explanation of the architecture and implementation details, check out the comprehensive guide: Long-Term Memory for LLMs using Vector Store - A Practical Approach with n8n and Qdrant

🤝 Support

- **Documentation**: Full setup guide in the article above
- **Community**: Share your experiences and get help in the n8n community forums
- **Issues**: Report bugs or request features on the workflow page

Tags: #AI #LangChain #VectorDatabase #LongTermMemory #RAG #OpenAI #Qdrant #ChatBot #MemorySystem #ArtificialIntelligence
by Trung Tran
📒 Telegram Expense Tracker to Google Sheets with GPT-4.1

👤 Who's it for

This workflow is for anyone who wants to log their daily expenses by simply chatting with a Telegram bot. Ideal for:

- Individuals who want a quick way to track spending
- Freelancers who log receipts and purchases on the go
- Teams or small business owners who want lightweight expense capture

⚙️ How it works / What it does

1. The user sends a text message on Telegram describing an expense (e.g., "Bought coffee for 50k at Highlands").
2. The message format is validated: if the message is text, it proceeds to GPT-4.1 Mini for processing; if it's not text (e.g., an image or file), the bot sends a fallback message.
3. OpenAI GPT-4.1 Mini parses the message and returns: relevant (true/false), expense_record (structured fields: date, amount, currency, category, description, source), and message (a friendly confirmation or fallback). An illustrative example of this response is shown at the end of this section.
4. If valid: the bot replies with a fun acknowledgment, and the data is saved to a connected Google Sheet.
5. If invalid: a fallback message is sent to encourage proper input.

🛠️ How to set up

1. Telegram Bot Setup
   - Create a bot using BotFather on Telegram
   - Copy the bot token and paste it into the Telegram Trigger node
2. Google Sheet Setup
   - Create a Google Sheet with these columns: Date | Amount | Currency | Category | Description | SourceMessage
   - Share the sheet with your n8n service account email
3. OpenAI Configuration
   - Connect the OpenAI Chat Model node using your OpenAI API key
   - Use GPT-4.1 Mini as the model
   - Apply a system prompt that extracts structured JSON with: relevant, expense_record, and message
4. Add Parser
   - Use the Structured Output Parser node to safely parse the JSON response
5. Conditional Logic Nodes
   - Is text message? Checks if the message is in text format
   - Supported scenario? Checks if relevant = true in the LLM response
6. Final Actions
   - If relevant: send confirmation via Telegram and append a row to the Google Sheet
   - If not relevant: send a fallback message via Telegram

✅ Requirements

- Telegram bot token
- OpenAI GPT-4.1 Mini API access
- n8n instance (self-hosted or cloud)
- Google Sheet with access granted to n8n
- Basic understanding of n8n node configuration

🧩 How to customize the workflow

| Feature | How to Customize |
|---------|------------------|
| Add multi-currency support | Update the system prompt to detect and extract different currencies |
| Add more categories | Modify the list of categories in the system prompt |
| Track multiple users | Add a username or chat ID column to the Google Sheet |
| Trigger alerts | Add Slack, Email, or Telegram alerts for specific expense types |
| Weekly summaries | Use a cron node + Google Sheet query + Telegram message |
| Visual dashboards | Connect the sheet to Looker Studio or Google Data Studio |

Built with 💬 Telegram + 🧠 GPT-4.1 Mini + 📊 Google Sheets + ⚡ n8n
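For clarity, here is an illustrative example of the structured response for "Bought coffee for 50k at Highlands". The three top-level fields match the workflow description; the concrete values are made up.

```javascript
// Illustrative shape of the parsed GPT-4.1 Mini response.
const parsed = {
  relevant: true,
  expense_record: {
    date: '2024-05-01',
    amount: 50000,
    currency: 'VND',
    category: 'Food & Beverage',
    description: 'Coffee at Highlands',
    source: 'Bought coffee for 50k at Highlands',
  },
  message: 'Got it! Logged 50,000 VND for coffee at Highlands ☕',
};
```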
by Jason Krol
Notion Weekly Journal AI Summary

This workflow runs on a weekly schedule, retrieves your Notion Daily Journal pages for the past week, and aggregates them into a concise ChatGPT-generated summary. It saves that weekly summary back to your Notion as a new Note, in addition to posting it to a personal Discord channel. It also retrieves all of the Tasks you've completed in the past week and posts a quick total with a congratulatory message to a Discord channel.

Requirements/Setup:

- You need Notion set up with a Notes database. If you want the Discord messages, set up a Discord webhook for your channel as well, or simply delete the Discord nodes.
- One of the properties for the Notes db should be Type with a value of Journal.
- The contents of your daily Journal pages can be whatever you want. I've found what works best for me is the format of "What was a highlight of the day?", "What was a low point of the day?", and "What decisions did I delegate, delay, or dodge?"
- You should also create an additional Type for the Weekly summary page that gets created - in this case I used simply Weekly.
- Automate this to run weekly on your day of choice. I tend to only journal on weekdays, so I've set mine up to run every Friday, retrieving the past week's Journal entries (see the query sketch at the end of this section).

Options:

- You don't have to use Discord; feel free to swap it out for Slack or remove it altogether.
- You also don't need the Tasks summary bottom half; simply remove it if you don't want or need it.
- You can easily reuse this workflow to aggregate your Weekly Summary notes (that this workflow auto-generates/saves) into a Quarterly or even Yearly summary!
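For reference, a hedged sketch of the kind of Notion query behind "retrieve the past week's Journal entries", using the Type property from the setup above. The database ID, token, and the use of the built-in created_time timestamp are assumptions; adapt them to your own schema.

```javascript
// Query the Notes database for Journal pages created in the past week.
const res = await fetch('https://api.notion.com/v1/databases/YOUR_DATABASE_ID/query', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_NOTION_TOKEN',
    'Notion-Version': '2022-06-28',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    filter: {
      and: [
        { property: 'Type', select: { equals: 'Journal' } },
        { timestamp: 'created_time', created_time: { past_week: {} } },
      ],
    },
  }),
});
const { results } = await res.json(); // one entry per Daily Journal page
```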