by David Roberts
This AI agent can access data provided by another n8n workflow. Since that workflow can be used to retrieve any data from any service, this template can be used to give an agent access to any data. Note that to use this template, you need to be on n8n version 1.19.4 or later.
by Alberto
PersonalNotesAssistant – Organize and Understand Your Thoughts with Local AI

PersonalNotesAssistant is an offline-capable, AI-powered agent that helps you store, summarize, retrieve, and reflect on your personal notes and voice memos — all processed locally and sent via Telegram. Built to run efficiently on a Raspberry Pi 5, this agent supports a variety of note-taking styles and acts as your private memory extension.

🧠 What It Can Do
- Accept voice or text notes via Telegram
- Transcribe audio messages into clean, structured text (using Whisper)
- Automatically summarize or categorize notes with a local LLM
- Answer questions based on your past notes
- Retrieve relevant entries by topic, date, or keyword
- Help you journal or reflect by asking follow-up questions
- Work completely offline — no cloud or external APIs

🔧 How It Works
1. Capture Notes via Telegram: You send a voice message or text to your Telegram bot. The assistant supports both quick thoughts and long-form content.
2. Transcription with Whisper (Local): If the input is a voice message, it is transcribed into text using Whisper running locally on your Raspberry Pi.
3. AI Summarization & Tagging: The transcribed or typed note is sent to LLaMA 3.2 via Ollama, which summarizes it, suggests tags, and stores it with metadata (e.g., timestamp, mood, theme).
4. Storage & Retrieval: Notes are stored in a local database (e.g., SQLite or JSON). You can later query the assistant with prompts like: "What did I say about stress last week?", "Summarize my ideas from this month.", "Show notes tagged with 'travel'."
5. Follow-Up & Reflection: The agent can optionally engage with reflective prompts to help you deepen your thoughts or gain insight from what you've recorded.
💡 Use Cases
- Track personal growth, habits, or therapy progress
- Create voice memos while walking or commuting
- Maintain a structured journal without typing
- Use as a second brain to help you remember and revisit important thoughts

🔐 Privacy by Default
Everything runs locally:
- No notes are uploaded to cloud platforms
- No audio is sent to third-party transcription services
- No LLM processing happens outside your device

Ideal for privacy-minded users, psychologists, researchers, or digital minimalists who want AI assistance without surveillance.

⚙️ Technical Stack
- Raspberry Pi 5: low-power edge device
- Whisper (local): voice-to-text conversion
- Ollama + LLaMA 3.2: summarization, classification, and retrieval
- Telegram Bot API: input/output
- Custom database (e.g., JSON/SQLite): storing and querying notes

🧪 Real-Life Use
This agent is actively used daily by the developer to log ideas, emotions, and plans. It has proven effective for lightweight journaling and context-aware memory assistance, even when offline.
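The storage-and-retrieval step above can be sketched with a small SQLite layer. This is a minimal illustration, not the template's actual code: the schema, function names, and tag format are all assumptions.

```python
import json
import sqlite3
from datetime import datetime, timezone

def init_db(conn):
    # Hypothetical schema: one row per note, tags stored as a JSON array.
    conn.execute("""CREATE TABLE IF NOT EXISTS notes (
        id INTEGER PRIMARY KEY,
        created_at TEXT,
        text TEXT,
        summary TEXT,
        tags TEXT)""")

def store_note(conn, text, summary, tags):
    conn.execute(
        "INSERT INTO notes (created_at, text, summary, tags) VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), text, summary, json.dumps(tags)))
    conn.commit()

def notes_tagged(conn, tag):
    # Retrieve every note whose JSON tag list contains the requested tag.
    rows = conn.execute("SELECT text, tags FROM notes").fetchall()
    return [text for text, tags in rows if tag in json.loads(tags)]
```

A query like "Show notes tagged with 'travel'" would then reduce to `notes_tagged(conn, "travel")`, with the LLM only responsible for extracting the tag from the user's message.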
by Alex Huang
Use case
This workflow is designed for e-commerce brands and content teams who:
- Need to scale SEO content production without sacrificing quality
- Want to eliminate manual keyword filtering (saves 10+ hours/week)
- Aim to dominate niche search terms (e.g., "vegan leather crossbody bags")

What this workflow does
Automates the end-to-end process from keyword discovery to publish-ready articles:
1. Keyword Harvesting: pulls 1,000+ keywords/day from SEMrush/Ahrefs
2. Smart Filtering: blocks competitor brands (e.g., "Zara alternatives"), detects irrelevant demographics ("kids", "petite"), and flags non-compliant colors (non-black/white terms)
3. AI Content Generation
4. Multi-Channel Output: formats content for blogs, product descriptions, and email campaigns

Setup
1. Add Google, SEMrush, and OpenAI credentials
2. Set the rules spreadsheet in Google Drive
3. Run a test execution of the workflow
4. Review the generated opportunity report in Google Sheets

How to adjust this template
Change scenario: replace the rules and define a different target.
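The smart-filtering step could look like the sketch below. The blocklists here are hypothetical placeholders; in the template itself the rules come from a spreadsheet on Google Drive rather than hard-coded sets.

```python
# Hypothetical rule sets -- in the workflow these would be loaded
# from the rules spreadsheet on Google Drive.
COMPETITOR_BRANDS = {"zara", "h&m", "shein"}
EXCLUDED_DEMOGRAPHICS = {"kids", "petite", "toddler"}
NON_COMPLIANT_COLORS = {"red", "blue", "green", "pink", "beige"}  # anything non-black/white

def keep_keyword(keyword: str) -> bool:
    """Return True if the keyword survives all filtering rules."""
    words = set(keyword.lower().split())
    if words & COMPETITOR_BRANDS:        # blocks e.g. "Zara alternatives"
        return False
    if words & EXCLUDED_DEMOGRAPHICS:    # drops irrelevant demographics
        return False
    if words & NON_COMPLIANT_COLORS:     # flags non-black/white color terms
        return False
    return True
```

Each harvested keyword is passed through `keep_keyword`, and only the survivors move on to content generation.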
by kenandrewmiranda
An automated n8n workflow that analyzes stocks using RSI and MACD, summarizes insights with OpenAI, and sends a Slack-ready market update every hour.

This workflow:
- Runs hourly from 6:30 AM to 2:30 PM PT, Mon–Fri
- Checks if the U.S. stock market is open using Alpaca's /clock API
- Pulls daily stock bars for a list of tickers via Alpaca's /v2/stocks/bars
- Calculates RSI and MACD using a Python code node
- Categorizes each stock as Buy / Hold / Sell
- Uses an OpenAI Assistant to summarize the results in Slack markdown
- Sends the message to a specific Slack user or channel
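The RSI/MACD calculation in the Python code node could be sketched as below. This is an illustrative implementation of the standard formulas (Wilder-smoothed RSI, 12/26/9 MACD), not the template's exact code, and the Buy/Hold/Sell thresholds shown are assumptions.

```python
def ema(values, period):
    # Exponential moving average, seeded with the first value.
    k = 2 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out

def rsi(closes, period=14):
    # Wilder-smoothed Relative Strength Index over daily closes.
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0))
        losses.append(max(-change, 0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

def macd(closes, fast=12, slow=26, signal=9):
    # MACD line (fast EMA - slow EMA) and its signal-line EMA.
    macd_line = [f - s for f, s in zip(ema(closes, fast), ema(closes, slow))]
    return macd_line[-1], ema(macd_line, signal)[-1]

def categorize(closes):
    # Hypothetical rule: oversold + bullish crossover -> Buy, and vice versa.
    r = rsi(closes)
    m, s = macd(closes)
    if r < 30 and m > s:
        return "Buy"
    if r > 70 and m < s:
        return "Sell"
    return "Hold"
```

In the workflow, `closes` would be populated from the Alpaca /v2/stocks/bars response before the categorized results are handed to the OpenAI Assistant for summarization.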
by Liam
Ever wanted to just tell your 3D printer what to do remotely? This game changer lets you converse with OpenAI agents to manage OctoPrint-connected 3D printers. Great for remote management and monitoring with easy-to-read responses. Query what jobs are available, load a print job (upload workflow in development, will be released soon for community use), start the print job, and request detailed reports on your printer's performance as it runs.

Chat > AI > OctoPrint API > Your 3D Printer

Example 1:
Human input: "What print job is running?"
AI response: "Current Print Job Details — Job Name: Stand v1.gcode; File Size: 2,533,233 bytes; Estimated Print Time: approximately 6 hours 15 minutes (22,536 seconds); Progress: Completion 1.97%, Current Position in File 49,940 bytes, Time Spent Printing 2,611 seconds (~43.5 minutes), Time Remaining approximately 5 hours 34 minutes (20,123 seconds); Status: Current State: Printing."

Example 2:
Human input: "Is my 3D printer ok?"
AI response: "Status: Printing; Operational: Yes; Temperature: Bed actual 50°C, target 50°C; Nozzle (Tool 0) actual 200°C, target 200°C; Connection Status: currently connected to the printer, Port /dev/ttyUSB0, Baudrate 115200. Summary: Your 3D printer is currently operational and printing without errors. The bed and nozzle temperatures are both at their target values."
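The job query in Example 1 maps onto OctoPrint's REST API (`GET /api/job`, authenticated with an `X-Api-Key` header). Below is a minimal sketch of that call plus a summary formatter for the AI's response; the host, key, and `summarize_job` helper are placeholders, not part of the template.

```python
import json
from urllib.request import Request, urlopen

OCTOPRINT_URL = "http://octopi.local"   # placeholder host
API_KEY = "YOUR_OCTOPRINT_API_KEY"      # placeholder key

def fetch_job():
    # OctoPrint's job endpoint returns file info, progress, and state.
    req = Request(f"{OCTOPRINT_URL}/api/job", headers={"X-Api-Key": API_KEY})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

def summarize_job(job):
    # Hypothetical helper: condense the payload into one readable line.
    file_info = job.get("job", {}).get("file", {})
    progress = job.get("progress", {})
    return (f"Job: {file_info.get('name', 'unknown')} | "
            f"State: {job.get('state', 'unknown')} | "
            f"Completion: {progress.get('completion') or 0:.2f}%")
```

In the workflow, the AI agent calls this kind of endpoint as a tool and then rephrases the JSON payload into the readable report shown in the examples.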
by bswlife
Disclaimer
The Execute Command node is only supported on self-hosted (local) instances of n8n.

Introduction
Kokoro TTS is a compact yet powerful text-to-speech model, currently available on Hugging Face and GitHub. Despite its modest size—trained on less than 100 hours of audio—it delivers impressive results, consistently topping the TTS leaderboard on Hugging Face. Unlike larger systems, Kokoro TTS offers the advantage of running locally, even on devices without GPUs, making it accessible for a wide range of users.

Who will benefit from this integration?
This will be useful for video bloggers, TikTokers, and it will also enable the creation of a free voice chat bot. Currently, TTS models are mostly paid, but this integration allows for fully free voice generation. The possibilities are limited only by your imagination.

Note
Unfortunately, we can't interact with the Kokoro API via browser URL (GET/POST), but we can run a Python script through n8n and pass any variables to it. In the tutorial, the D drive is used, but you can rewrite this for any paths, including the C drive.

Step 1
You need to have Python installed (link). Also, download and extract the portable version of Kokoro from GitHub. Create a file named voicegen.py with the following code in the Kokoro folder (C:\KOKORO). As you can see, the output path is D:\output.mp3.
```python
import sys
import shutil
from gradio_client import Client

# Set UTF-8 encoding for stdout
sys.stdout.reconfigure(encoding='utf-8')

# Get arguments from command line
text = sys.argv[1]          # First argument: input text
voice = sys.argv[2]         # Second argument: voice
speed = float(sys.argv[3])  # Third argument: speed (converted to float)

print(f"Received text: {text}")
print(f"Voice: {voice}")
print(f"Speed: {speed}")

# Connect to local Gradio server
client = Client("http://localhost:7860/")

# Generate speech using the API
result = client.predict(
    text=text,
    voice=voice,
    speed=speed,
    api_name="/generate_speech"
)

# Define output path
output_path = r"D:\output.mp3"

# Move the generated file
shutil.move(result[1], output_path)

# Print output path
print(output_path)
```

Step 2
Go to n8n and create the following workflow.

Step 3
Edit Fields module:
{ "voice": "af_sarah", "text": "Hello world!" }

Step 4
We'll need an Execute Command module with the command:
python C:\KOKORO\voicegen.py "{{ $json.text }}" "{{ $json.voice }}" 1

Step 5
The script is already working, but to listen to the result, you can connect a Binary module with the path to the generated MP3 file: D:/output.mp3

Step 6
Click "Test workflow" and enjoy the result. There are more voices and accents than in ChatGPT, plus it's free.

P.S. If you want, there is a detailed tutorial on my blog.
by jason
This workflow looks for a Close Date value using REGEX in the IF node. If it finds the correct value, it will pass that value on. If it does not find the correct value, it will generate a value based on the present time plus three weeks. The final result will show up in the NoOp node. You can test this execution by enabling and disabling the Set node when you run the execution.
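The validate-or-fallback logic can be sketched as below. The regex and ISO date format are assumptions for illustration; the template's IF node may match a different pattern.

```python
import re
from datetime import datetime, timedelta, timezone

# Assumed pattern: an ISO-style YYYY-MM-DD close date.
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def resolve_close_date(value):
    """Return value if it matches the expected pattern, else now + 3 weeks."""
    if value and DATE_PATTERN.match(value):
        return value
    fallback = datetime.now(timezone.utc) + timedelta(weeks=3)
    return fallback.strftime("%Y-%m-%d")
```

Valid inputs pass through unchanged; missing or malformed values are replaced with a date three weeks from the current time, mirroring the IF node's two branches.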
by Udit Rawat
Workflow based on the following article: https://www.anthropic.com/news/contextual-retrieval

This n8n automation is designed to extract, process, and store content from documents into a Pinecone vector store using context-based chunking. The workflow enhances retrieval accuracy in RAG (Retrieval-Augmented Generation) setups by ensuring each chunk retains meaningful context.

Workflow Breakdown:
🔹 Google Drive - Retrieve Document: The automation starts by fetching a source document from Google Drive. This document contains structured content, with predefined boundary markers for easy segmentation.
🔹 Extract Text Content: Once retrieved, the document's text is extracted for processing. Special section boundary markers are used to divide the text into logical sections.
🔹 Code Node - Create Context-Based Chunks: A custom code node processes the extracted text, identifying section boundaries and splitting the document into meaningful chunks. Each chunk is structured to retain its context within the entire document.
🔹 Loop Node - Process Each Chunk: The workflow loops through each chunk, ensuring they are processed individually while maintaining a connection to the overall document context.
🔹 Agent Node - Generate Context for Each Chunk: An Agent node powered by OpenAI's GPT-4o-mini via OpenRouter generates contextual metadata for each chunk, ensuring better retrieval accuracy.
🔹 Prepend Context to Chunks & Create Embeddings: The generated context is prepended to the original chunk, creating context-rich embeddings that improve searchability.
🔹 Google Gemini - Text Embeddings: The processed text is passed through Google Gemini text-embedding-004, which converts the text into semantic vector representations.
🔹 Pinecone Vector Store - Store Embeddings: The final embeddings, along with the enriched chunk content and metadata, are stored in Pinecone, making them easily retrievable for RAG-based AI applications.
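The chunking and context-prepending steps can be sketched as below. The boundary marker string is a hypothetical placeholder; the actual marker depends on how the source document is prepared.

```python
BOUNDARY = "[SECTION]"  # hypothetical boundary marker in the source document

def split_into_chunks(document: str):
    # Split on the predefined boundary marker, dropping empty fragments.
    return [part.strip() for part in document.split(BOUNDARY) if part.strip()]

def contextualize(chunk: str, context: str) -> str:
    # Prepend the LLM-generated context so the resulting embedding
    # carries document-level meaning, per Anthropic's contextual retrieval.
    return f"{context}\n\n{chunk}"
```

In the workflow, the Agent node supplies `context` (a short description of where the chunk sits in the whole document), and the contextualized string — not the raw chunk — is what gets embedded and stored in Pinecone.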
Use Case: This automation enhances RAG retrieval by ensuring each chunk is contextually aware of the entire document, leading to more accurate AI responses. It’s perfect for applications that require semantic search, AI-powered knowledge management, or intelligent document retrieval. By implementing context-based chunking, this workflow ensures that LLMs retrieve the most relevant data, improving response quality and accuracy in AI-driven applications.
by Don Jayamaha Jr
This workflow powers the Binance Spot Market Quant AI Agent, acting as the Financial Market Analyst. It fuses real-time market structure data (price, volume, kline) with multiple timeframe technical indicators (15m, 1h, 4h, 1d) and returns a structured trading outlook—perfect for intraday and swing traders who want actionable analysis in Telegram.

🔗 Requires the following sub-workflows to function:
• Binance SM 15min Indicators Tool
• Binance SM 1hour Indicators Tool
• Binance SM 4hour Indicators Tool
• Binance SM 1day Indicators Tool
• Binance SM Price/24hStats/Kline Tool

⚙️ How It Works
1. Triggered via webhook (typically by the Quant AI Agent).
2. Extracts the user's symbol + timeframe from input (e.g., "DOGE outlook today").
3. Calls all linked sub-workflows to retrieve indicators + live price data.
4. Merges the data and formats a clean trading report using GPT-4o-mini.
5. Returns an HTML-formatted message suitable for Telegram delivery.

📥 Sample Input
{ "message": "SOLUSDT", "sessionId": "654321123" }

✅ Telegram Output Format
📊 SOLUSDT Market Snapshot
💰 Price: $156.75
📉 24h Stats: High $160.10 | Low $149.00 | Volume: 1.1M SOL
🧪 4h Indicators:
• RSI: 58.2 (Neutral-Bullish)
• MACD: Crossover Up
• BB: Squeezing Near Upper Band
• ADX: 25.7 (Rising Trend)
📈 Resistance: $163
📉 Support: $148

🔍 Use Cases
| Scenario | Outcome |
| --- | --- |
| User asks for "BTC outlook" | Returns 1h + 4h + 1d indicators + live price + key levels |
| Telegram bot prompt: "DOGE now" | Returns short-term 15m + 1h analysis snapshot |
| Strategy trigger inside n8n | Enables other workflows to consume structured signal data |

🎥 Watch Tutorial:

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or redistribution permitted.
🔗 For support: LinkedIn – Don Jayamaha
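The merge-and-format step that produces the Telegram snapshot can be sketched as a plain formatting function. This is an illustrative stand-in for what GPT-4o-mini does in the template; the function name and payload shape are assumptions.

```python
def format_snapshot(symbol, price, stats, indicators):
    # Hypothetical merged payload: `stats` from the Price/24hStats tool,
    # `indicators` from one of the timeframe indicator sub-workflows.
    lines = [
        f"📊 {symbol} Market Snapshot",
        f"💰 Price: ${price:,.2f}",
        f"📉 24h Stats: High ${stats['high']:,.2f} | Low ${stats['low']:,.2f}",
        "🧪 4h Indicators:",
    ]
    for name, value in indicators.items():
        lines.append(f"• {name}: {value}")
    return "\n".join(lines)
```

The real workflow lets the LLM write this report (adding interpretation like support/resistance levels), but the structured merge of sub-workflow outputs into one message follows the same shape.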
by The { AI } rtist
This workflow is for working with text processing in n8n and getting started with how it works. How-to, step by step: https://comunidad-n8n.com/tratamiento-de-textos/ Telegram community: https://t.me/comunidadn8n
by NanaB
Description
This n8n workflow acts as your personal AI speechwriting coach, directly accessible through Telegram. It listens to your spoken or typed drafts, provides insightful feedback on clarity, engagement, structure, and content, and iteratively refines your message based on your updates. Once you're ready, it synthesizes a brand-new speech or talk incorporating all the improvements and your accumulated ideas. This tool streamlines the speechwriting process, offering on-demand AI assistance to help you craft impactful and well-structured presentations.

How it Works
1. Input via Telegram: You interact with the workflow by sending your speech drafts or talking points directly to a designated Telegram bot.
2. AI Feedback: The workflow processes your input using AI models (OpenAI and/or Google Gemini) to analyze various aspects of your speech and provides constructive feedback via Telegram.
3. Iterative Refinement: You can then send updated versions of your speech to the bot, receiving further feedback to guide your revisions.
4. Speech Synthesis: When you send the command to "generate speech," the workflow compiles all your previous input and the AI's feedback to synthesize a new, improved speech or talk, which is then sent back to you via Telegram.
5. New Speech Cycle: By sending the command "new speech," the workflow clears its memory, allowing you to start the process anew for a different topic.

Set Up Steps (Takes Approximately 5 Minutes)

Step 1: Create a Telegram Bot and Obtain its API Token
- Open the Telegram application and search for "BotFather".
- Start a chat with BotFather by clicking "Start" or sending the /start command.
- Create a new bot by sending the command /newbot.
- Follow BotFather's instructions to choose a name and username for your bot.
- Once your bot is created, BotFather will provide you with an API token. Keep this token secure as it's required to connect your n8n workflow to your bot.
Step 2: Obtain an OpenAI API Key
- Go to the OpenAI website (https://platform.openai.com/) and sign up for an account if you don't already have one.
- Navigate to the API keys section (usually under your profile settings or a "Developers" tab).
- Click on "Create new secret key".
- Copy the generated API key and store it securely. You will need to provide this key to your n8n workflow to access OpenAI's language models.

Step 3: Obtain a Google Gemini LLM API Key
- Go to the Google Cloud AI Platform or Google AI Studio website (the specific platform may vary depending on the current Google AI offerings; search for "Google AI API").
- Sign up or log in with your Google account.
- Follow the instructions to enable the Gemini API and create an API key. This might involve creating a project if you haven't already.
- Copy the generated API key and store it securely. You can then configure your n8n workflow to utilize Google Gemini's language models as well.

Customization Options
This n8n workflow offers significant flexibility; below are a few options:
- Modify AI prompts to tailor feedback and generation for presentations, storytelling, interviews, sales pitches, academic talks, and creative writing.
- Switch the interface from Telegram to Slack, WhatsApp, or even a web interface by replacing the relevant n8n nodes.
- Integrate analysis for sentiment, keyword density, pacing (with voice input), and filler word detection by adjusting the workflow.
- Connect to external data sources to provide context to the AI for more targeted feedback and generation.

This adaptability allows you to reuse this workflow for a wide range of specific use cases and communication environments.
by David Roberts
This workflow shows how you can get your OpenAI assistant to call an n8n workflow as a tool. Since you can put almost any functionality in an n8n workflow, this means you can give your assistant access to almost any data source. Note that to use this template, you need to be on n8n version 1.19.4 or later.