by Solomon
This n8n template demonstrates how to obtain token usage from AI Agents and place the data into a spreadsheet that calculates the estimated cost of the execution. Obtaining token usage from AI Agents is tricky because the agent node doesn't expose all the data from tool calls, so this workflow taps into the workflow execution metadata to extract the token usage information. It works well with OpenAI, Google, and Anthropic; other LLM providers might need small tweaks.

**How it works**
- The AI Agent executes and then calls a subworkflow to calculate the token usage.
- The data is stored in Google Sheets.
- The spreadsheet has formulas that calculate the estimated cost of the execution.

**How to use**
- The AI Agent is only an example; feel free to replace it with other agents you have.
- Call the subworkflow AFTER all the other branches have finished executing.

**Requirements**
- LLM account (OpenAI, Gemini...) for API usage
- Google Drive and Sheets credentials
- n8n API key for your instance
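A minimal sketch of the token-extraction step, written for an n8n Code node. It assumes the previous node fetched the execution JSON via the n8n API (hence the API key requirement) and that LLM nodes report a `tokenUsage` object in their output items; the exact paths vary by provider and n8n version, so adjust them to what you see in your own executions.

```js
// n8n Code node: sum token usage across all nodes of a fetched execution.
// Assumes the previous node returned the execution JSON (e.g. from
// GET /api/v1/executions/{id}?includeData=true) and that LLM nodes report a
// tokenUsage object in their output items -- verify both against your own data.
const execution = $input.first().json;
const runData = execution.data?.resultData?.runData ?? {};

let promptTokens = 0;
let completionTokens = 0;

for (const runs of Object.values(runData)) {
  for (const run of runs) {
    // run.data is keyed by connection type (main, ai_languageModel, ...),
    // each holding arrays of output items.
    for (const connection of Object.values(run.data ?? {})) {
      for (const branch of connection ?? []) {
        for (const item of branch ?? []) {
          const usage = item?.json?.tokenUsage;
          if (usage) {
            promptTokens += usage.promptTokens ?? 0;
            completionTokens += usage.completionTokens ?? 0;
          }
        }
      }
    }
  }
}

return [
  {
    json: {
      promptTokens,
      completionTokens,
      totalTokens: promptTokens + completionTokens,
    },
  },
];
```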
by Jimleuk
This n8n template demonstrates how to use OpenAI's Responses API with existing LLM and AI Agent nodes. Though I would recommend just waiting for official support, if you're impatient and would like a roundabout way to integrate OpenAI's Responses API into your existing AI workflows, then this template is sure to satisfy! This approach implements a simple API wrapper for the Responses API using n8n's built-in webhooks. When the base URL is pointed at these webhooks using a custom OpenAI credential, it's possible to intercept the request and remap it for compatibility.

**How it works**
- An OpenAI subnode is attached to our agent but has a special custom credential whose base_url is changed to point at this template's webhooks.
- When executing a query, the agent's request is forwarded to our mini chat completion workflow.
- Here, we take the default request and remap the values for use with an HTTP Request node that queries the Responses API.
- Once a response is received, we remap the output for Langchain compatibility, which just means the LLM or Agent node can parse it and respond to the user.
- There are two response formats: one for streaming and one for non-streaming responses.

**How to use**
- You must activate this workflow to be able to use the webhooks.
- Create the custom OpenAI credential as instructed.
- Go to your existing AI workflows and replace the LLM node with one using the custom OpenAI credential. You do not need to copy anything else over to the existing template.

**Requirements**
- OpenAI account for the Responses API

**Customising this workflow**
- Feel free to experiment with other LLMs using this same technique!
- Keep up to date with Responses API announcements and make modifications as required.
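Below is a minimal, non-streaming sketch of the remapping the webhook sub-workflow performs. The template itself uses Set and HTTP Request nodes; this condenses the same idea into a single n8n Code node for illustration. The Responses API fields shown reflect the API at the time of writing, and the `OPENAI_API_KEY` environment variable is an assumption — wire in your own credential handling.

```js
// Sketch of the remapping performed by the webhook sub-workflow (non-streaming only).
// Assumes the intercepted webhook body arrived in Chat Completions format and that
// an OPENAI_API_KEY environment variable is available via $env.
const body = $input.first().json.body; // { model, messages, ... } from the agent

const response = await fetch('https://api.openai.com/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${$env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: body.model,
    // The Responses API accepts role/content message objects as input.
    input: body.messages,
  }),
});
const result = await response.json();

// Pull the assistant text out of the Responses API output array.
const text = (result.output ?? [])
  .filter((o) => o.type === 'message')
  .flatMap((o) => o.content ?? [])
  .filter((c) => c.type === 'output_text')
  .map((c) => c.text)
  .join('');

// Remap to a Chat Completions-shaped payload so the Langchain agent can parse it.
return [
  {
    json: {
      id: result.id,
      object: 'chat.completion',
      model: result.model,
      choices: [
        { index: 0, message: { role: 'assistant', content: text }, finish_reason: 'stop' },
      ],
      usage: result.usage ?? {},
    },
  },
];
```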
by Mary Newhauser
**RAG over a PDF with Weaviate**

This workflow allows you to upload a PDF file and ask questions about it using the Question and Answer Chain and the Weaviate Vector Store nodes.

**Who it's for**
This workflow is the simplest possible implementation of RAG with Weaviate in n8n. It's intended to act as an extendable template for RAG over your own documents.

**Prerequisites**
- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use OpenAI, but feel free to switch out the models as you like.
- A self-hosted n8n instance. See this video for how to get set up in just three minutes.

**How it works**
- Part 1: Manually upload data. In this example, we manually upload a 100+ page article from arXiv called "A Survey of Large Language Models". But you can replace this with your own more advanced data pipeline, if you wish.
- Part 2: Embed and load data into a Weaviate collection. Here, we generate embeddings for the full text of the article and store them in Weaviate.
- Part 3: Perform RAG over the PDF file with Weaviate. In this part of the workflow, you can enter your query by running the Chat node and get a RAG response grounded in context via the Question and Answer Chain node.

**How to run the workflow**
- Go through the prerequisites: create a Weaviate cluster (local or cloud), download self-hosted n8n, and add your API keys and other credentials.
- Select the embedding and chat models you'd like to use.
- Upload a PDF file you want to ask questions about.
- Execute the rest of the workflow.
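If you want to sanity-check what the Weaviate Vector Store node retrieves, you can query the cluster's GraphQL endpoint directly from a Code node. This is only a hedged sketch: the `Docs` collection, its `text` property, the environment variables, and the presence of a text2vec vectorizer (required for `nearText`) are all assumptions — match them to your own schema and credentials.

```js
// Minimal retrieval check against Weaviate's GraphQL endpoint.
// WEAVIATE_URL, WEAVIATE_API_KEY, the "Docs" class and its "text" property are
// placeholders -- use your own cluster details and schema.
const query = `
{
  Get {
    Docs(
      nearText: { concepts: ["What are emergent abilities of LLMs?"] }
      limit: 4
    ) {
      text
      _additional { distance }
    }
  }
}`;

const res = await fetch(`${$env.WEAVIATE_URL}/v1/graphql`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${$env.WEAVIATE_API_KEY}`,
  },
  body: JSON.stringify({ query }),
});

const data = await res.json();
return [{ json: { chunks: data.data?.Get?.Docs ?? [] } }];
```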
by Obsidi8n
This workflow converts any n8n workflow outputs into Markdown notes that are accessible in your Obsidian Vault through Google Drive synchronization.

**Setup Requirements**
- Create a designated folder in Google Drive (Desktop).
- Create a symbolic link between this folder and a new target folder in your Obsidian Vault.
- Configure the Google Drive n8n node settings.
- Send the output of any workflow to the trigger, and the notes will appear in your Vault folder.

**Optional Features**
You can use AI agents to:
- Write notes in your preferred format (e.g., Zettelkasten).
- Compose YAML front matter.
- Suggest tags.

**Use Cases**
- Convert RSS feed items to notes.
- Create notes from YouTube video transcripts.
- Transform tasks in Slack messages into Obsidian tasks.

(Requires setting up a corresponding workflow, e.g., an RSS trigger, YouTube transcriber, or Slack bot.)
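A small illustrative Code node that turns a workflow item into an Obsidian-ready note with YAML front matter before handing it to the Google Drive node. The `title`, `tags`, and `content` fields are placeholders for whatever your upstream workflow actually produces.

```js
// Turn an incoming item into an Obsidian-ready Markdown note with YAML front matter.
// Field names (title, tags, content) are illustrative -- map them to whatever your
// upstream workflow actually produces.
const item = $input.first().json;

const title = item.title ?? 'Untitled note';
const tags = Array.isArray(item.tags) ? item.tags : [];
const created = new Date().toISOString();

const note = [
  '---',
  `title: "${title.replace(/"/g, '\\"')}"`,
  `created: ${created}`,
  `tags: [${tags.join(', ')}]`,
  '---',
  '',
  item.content ?? '',
].join('\n');

// Filename the Google Drive node can use; Obsidian picks it up via the synced folder.
const fileName = `${title.replace(/[\\/:*?"<>|]/g, '-')}.md`;

return [{ json: { fileName, note } }];
```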
by n8n Team
This template quickly shows how to use RAG in n8n.

**Who is this for?**
This template is for everyone who wants to start giving knowledge to their Agents through RAG.

**Requirements**
- Have a PDF with custom knowledge that you want to provide to your agent.

**Setup**
No setup required. Just hit Execute Workflow, upload your knowledge document, and then start chatting.

**How to customize this to your needs**
- Add custom instructions to your Agent by changing the prompts in it.
- Add a different way to load knowledge into your vector store, e.g., by looking at some Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own vector store tools ready for production.
- Add a more sophisticated way to rank files found in the vector store.

For more information, read our docs on RAG in n8n.
by Jason Krol
This is a simple webpage scraper that specifically grabs today's newest 4K Blu-ray preorders as listed on the Blu-ray.com website. It is a scheduled workflow that can run every day and will post a formatted summary message of links to a Discord channel of your choice.

**Minimal setup required**
- Just create a webhook for the channel you want posted to in Discord and provide that in the final step.
- The timezone format step is set to East Coast (NYC) by default; feel free to change it.
- No API keys or any special configuration needed (beyond your Discord webhook).
- Feel free to customize the formatting of the message that gets posted 👍

**How it works**
- First, format today's date to match the formatting used on the website.
- Grab the HTML for the preorders page at www.blu-ray.com.
- Filter only the hyperlinks for each Blu-ray on the page.
- Then further filter only those with an HTML header matching today's date.
- Format how you want the message to be sent to your Discord channel (in this case, a simple list of hyperlinks for each title).
- Send to Discord!

**Disclaimer**
- This should be for personal use only. The links go back to the blu-ray.com website, which is a good thing! Don't abuse this by slamming their site with some crazy level of automation frequency.
- Support the blu-ray.com website by using their affiliate links whenever you do want to preorder a title ;)
- This is one of my first shared templates, so it may not be super optimal or perfect, but it works for my needs and hopefully you'll find some use out of it!
- Discord currently has a 2000-character limit on webhook messages, so some messages may get truncated (see the sketch below for one way to split long messages).
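Here is a hedged sketch of the final formatting-and-posting step as a single Code node: it splits the list of links into chunks that stay under Discord's 2000-character webhook limit and posts each chunk. The `links` field and the `DISCORD_WEBHOOK_URL` environment variable are assumptions — adapt them to your own filter step and credential setup.

```js
// Post the day's preorder links to a Discord webhook, splitting the message so each
// chunk stays under Discord's 2000-character limit. The `links` field and the
// webhook URL are placeholders -- wire them to your own filter step and credential.
const links = $input.first().json.links ?? []; // e.g. ["[Title](https://www.blu-ray.com/...)", ...]
const webhookUrl = $env.DISCORD_WEBHOOK_URL;

const today = new Date().toLocaleDateString('en-US', { timeZone: 'America/New_York' });
const header = `**New 4K Blu-ray preorders for ${today}**`;

const chunks = [];
let current = header;

for (const line of links) {
  if (current.length + line.length + 1 > 1900) { // keep headroom under the 2000 limit
    chunks.push(current);
    current = line;
  } else {
    current += `\n${line}`;
  }
}
chunks.push(current);

for (const content of chunks) {
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
}

return [{ json: { messagesSent: chunks.length } }];
```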
by andsync
**Who is this template for?**
This template is for learners, researchers, students, and professionals who want to quickly capture the essence of a YouTube video.

**Steps in the workflow**
- Gets the transcript of any YouTube video through Supadata.
- Processes the Supadata result into one text (see the Code node sketch below).
- Processes the text with AI (any LLM of your choice).
- Final result: produces a summary accompanied by the most important lessons and interesting facts mentioned in the video.

The workflow automatically creates a new Google Doc with this output, in a folder of your choice on your Google Drive. (If you want to convert the Markdown text to real markup after the Google Doc is created: just select all text (Ctrl-A or Cmd-A), cut the text (Ctrl-X or Cmd-X), and then go to Edit > Paste from Markdown.)

**Setup**
- Edit your Supadata credentials in the second node (you can start for free).
- Choose your favourite LLM for AI processing.
- Edit your Google Drive credentials.

**How to adjust it to your needs**
- If you want the outcome to be different, edit the prompt in "Proces transcript to summary template".
- The file name is a combination of 'transcript' and the date and time. You can change this to whatever you need in the Google Drive node.
- Supadata offers more details and options (or even translation) when working with transcripts. Check the options here: https://supadata.ai/documentation/youtube/get-transcript
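A minimal sketch of the "process the result into one text" step as an n8n Code node. The `content`/`text` field names are assumptions about the Supadata response shape — check one of your own executions against the documentation linked above.

```js
// Collapse the Supadata transcript response into a single block of text for the LLM.
// Supadata returns the transcript as a list of segments; the exact field names
// (content / text) are assumptions -- check the response in your own execution.
const response = $input.first().json;
const segments = response.content ?? [];

const transcript = segments
  .map((s) => (s.text ?? '').trim())
  .filter((t) => t.length > 0)
  .join(' ');

return [{ json: { transcript, characterCount: transcript.length } }];
```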
by Rodrigue Gbadou
**How it works**
- **Behavioral analytics**: Real-time analysis of product usage and engagement signals
- **Churn prediction**: Predictive model identifying at-risk customers 15 days in advance
- **Smart upselling**: Personalized recommendations based on usage and profile
- **Retention campaigns**: Automated retention campaigns with dynamic offers

**Set up steps**
- **Product analytics**: Connect Mixpanel, Amplitude, or proprietary analytics
- **Billing system**: Integrate Stripe, Chargebee, or Recurly for billing data
- **Customer data**: Synchronize your CRM with complete customer history
- **Email/SMS platforms**: Configure SendGrid and Twilio for communications
- **Pricing rules**: Define your pricing matrix and promotional offers
- **ML pipeline**: Configure predictive model training

**Key Features**
- 🔮 **Churn prediction**: At-risk customer identification with 85% accuracy
- 💰 **Smart upselling**: Personalized recommendations increasing ARPU by 35%
- ⚡ **Proactive interventions**: Automated actions before the customer churns
- 📊 **Revenue optimization**: Price optimization based on willingness to pay
- 🎯 **Dynamic segmentation**: Real-time updates to customer groups
- 🔄 **A/B testing**: Automated testing of retention strategies
- 📈 **LTV maximization**: Customer lifetime value optimization
- 🛡️ **Dunning management**: Automated payment failure handling
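As a purely illustrative example of how an at-risk flag might be derived before the retention branch, here is a toy scoring Code node. The field names, weights, and thresholds are all assumptions; a real deployment would use the trained model from the ML pipeline step rather than hand-tuned rules.

```js
// Toy churn-risk score used to route customers into retention campaigns.
// The fields, weights, and thresholds are illustrative only -- a real setup would
// score customers with the trained model referenced in the ML pipeline step.
const c = $input.first().json; // one customer record from your analytics/CRM merge

let score = 0;
if (c.daysSinceLastLogin > 14) score += 30;
if (c.weeklyActiveSessionsDelta < -0.3) score += 25; // usage dropped >30% week over week
if (c.failedPaymentsLast90Days > 0) score += 25;
if (c.supportTicketsLast30Days >= 3) score += 20;

return [
  {
    json: {
      customerId: c.customerId,
      churnRiskScore: Math.min(score, 100),
      atRisk: score >= 50, // feeds the retention-campaign branch
    },
  },
];
```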
by Arlin Perez
AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)

**👥 Who's it for**
This workflow is perfect for anyone who wants to receive AI-powered research summaries directly on Telegram. Ideal for people who ask frequent product, tech, or decision-making questions and want up-to-date answers sourced from the web.

**🤖 What it does**
Users send a question via Telegram. An AI agent (DeepSeek R1) reformulates and understands the intent, while a second agent (GPT-4o mini) performs live research using SerpAPI. The most relevant answers, including links and images, are delivered back via Telegram.

**⚙️ How it works**
- 📲 Telegram Trigger – Starts when a user sends a message to your Telegram bot.
- 🧠 DeepSeek R1 Agent – Understands, clarifies, or reformulates the user query.
- 🧠 Research AI Agent (GPT-4o mini + SerpAPI) – Searches the web and summarizes the best results.
- 📤 Send Telegram Message – Sends the response back to the same user.

**📋 Requirements**
- Telegram bot (via BotFather) with the API token set in n8n credentials
- OpenAI account with an API key and balance for GPT-4o mini
- SerpAPI account (100 free searches/month) with an API key
- DeepSeek account with an API key and balance

**🛠️ How to set up**
1. Create your Telegram bot using BotFather and connect it using the Telegram Trigger node.
2. Set up DeepSeek credentials and add a Chat Model AI Agent node using DeepSeek R1 to reformulate the user's question.
3. Set up OpenAI credentials and add a second ChatGPT AI Agent node using GPT-4o mini.
4. In the GPT-4o node, enable the SerpAPI tool and add your SerpAPI API key.
5. Pass the reformulated question from DeepSeek to the GPT-4o agent for live search and summarization.
6. Format the response (text, links, optional images).
7. Send the final reply to the user using the Telegram Send Message node.
8. Ensure your n8n instance is publicly accessible.
9. Test the workflow by sending a message to your Telegram bot ✅
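For reference, here is roughly what the final delivery step does, written as an n8n Code node against the raw Telegram Bot API (the template itself uses the built-in Telegram node, which is the recommended route). The node name "Telegram Trigger", the `output` field, and the `BOT_TOKEN` environment variable are assumptions — adjust them to your workflow.

```js
// Raw equivalent of the "Send Telegram Message" step -- the template uses the
// built-in Telegram node; this only shows what happens under the hood.
// BOT_TOKEN is a placeholder; chatId comes from the original Telegram Trigger item.
const chatId = $('Telegram Trigger').first().json.message.chat.id;
const answer = $input.first().json.output; // summarized research from the GPT-4o mini agent

await fetch(`https://api.telegram.org/bot${$env.BOT_TOKEN}/sendMessage`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    chat_id: chatId,
    text: answer,
    parse_mode: 'Markdown', // keeps links readable and clickable in the reply
  }),
});

return [{ json: { delivered: true, chatId } }];
```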
by Jimleuk
This n8n template watches a Gmail inbox for support messages and creates an equivalent issue item in Linear.

**How it works**
- A scheduled trigger fetches recent Gmail messages from the inbox that collects support requests.
- These support requests are filtered to ensure they are only processed once, and their HTML body is converted to Markdown for easier parsing.
- Each support request is then triaged by an AI Agent, which adds appropriate labels, assesses priority, and summarises a title and description of the original request.
- Finally, the AI-generated values are used to create an issue in Linear to be actioned.

**How to use**
- Ensure the fetched messages are solely support requests; otherwise you'll need to classify messages before processing them.
- Specify the labels and priorities to use in the system prompt of the AI agent.

**Requirements**
- Gmail for incoming support messages
- OpenAI for the LLM
- Linear for issue management

**Customising this workflow**
- Consider automating more steps after the issue is created, such as attempting issue resolution or capacity planning.
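For reference, a hedged sketch of the issue-creation step as a direct call to Linear's GraphQL API (the template uses the Linear node, which handles this for you). The `teamId`, label IDs, and the shape of the AI agent's structured output are assumptions.

```js
// Rough GraphQL equivalent of the final Linear step. teamId and labelIds are
// placeholders; title, description and priority come from the AI agent's output.
const triage = $input.first().json; // e.g. { title, description, priority, labelIds }

const res = await fetch('https://api.linear.app/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: $env.LINEAR_API_KEY, // personal API keys go directly in the Authorization header
  },
  body: JSON.stringify({
    query: `
      mutation IssueCreate($input: IssueCreateInput!) {
        issueCreate(input: $input) { success issue { identifier url } }
      }`,
    variables: {
      input: {
        teamId: 'YOUR_TEAM_ID',
        title: triage.title,
        description: triage.description,
        priority: triage.priority, // 1 = urgent ... 4 = low
        labelIds: triage.labelIds,
      },
    },
  }),
});

return [{ json: (await res.json()).data?.issueCreate ?? {} }];
```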
by GiovanniSegar
**Video walkthrough**
https://www.youtube.com/watch?v=OwIFK-r-NtQ

**Summary of agent**
This agent can write and rewrite its own rules, allowing you to mold its behavior. It receives rules from a database as system instructions and has tools to create, edit, or delete them. This is a great baseline for new agent builds. You can tell it things like "Next time, use present tense when talking about this subject" and it will use a tool to save this as a rule, then receive that instruction in all future iterations.

**How to start using it**

Option 1: With a Postgres database (e.g., Supabase)
- Supabase schema: Create a table (e.g., agent_rules) with the following columns:
  - id: bigint (Primary Key, auto-incrementing)
  - created_at: timestamp with time zone (Default: now())
  - rule_text: text
  - agent: text
- Workflow updates:
  - Update the Postgres credentials in the "Get rules from database," "Insert rule into database," and "Execute query on rule database" nodes.
  - Update the agent value (currently 'TestAgent') in the "Get rules from database" and "Insert rule into database" nodes if you want a different agent name.
  - Update the Anthropic API credentials.

Option 2: With Google Sheets
- Google Sheet setup: Create a Google Sheet with columns for rule_text and agent.
- Workflow updates: Example Google Sheets nodes are included. You will need to:
  - Connect your Google Sheets credentials.
  - Select your Google Sheet (with rule_text and agent columns) in all relevant Google Sheets nodes.
  - Replace the existing Postgres nodes ("Get rules from database", "Insert rule into database", "Execute query on rule database") with the configured Google Sheets nodes.
  - Update the agent value (currently 'TestAgent') in the Google Sheets nodes if you want a different agent name.
  - Update the Anthropic API credentials.
- Agent instructions: Update the agent's system message and remove the database schema section, as it is no longer relevant.
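A small sketch of how the fetched rules can be folded into the agent's system message inside a Code node, assuming the rows come from the "Get rules from database" node with the `rule_text` column described above; the base instruction line is a placeholder.

```js
// Build the agent's system message from the rules returned by the
// "Get rules from database" node. Column names (rule_text) match the schema
// described above; the opening instruction is a placeholder.
const rules = $input.all().map((item) => item.json.rule_text).filter(Boolean);

const systemMessage = [
  'You are TestAgent. Follow every rule below in all replies.',
  '',
  ...rules.map((rule, i) => `${i + 1}. ${rule}`),
].join('\n');

return [{ json: { systemMessage, ruleCount: rules.length } }];
```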
by sayamol thiramonpaphakul
This workflow automatically checks the status of your websites using the UptimeRobot API. If any site is down or unstable, it will:
- Generate a natural-language alert message using GPT-4o
- Push the message to a LINE group (with funny IT-style encouragement)
- Log all DOWN status entries into your Supabase database
- Wait 30 minutes before repeating

**🔧 How It Works**
- Schedule Trigger – Runs on a fixed interval (every few minutes).
- UptimeRobot Node – Fetches website monitor data.
- Code Node (Filter) – Filters only websites with status 8 (may be down) or 9 (down); a sketch follows below.
- IF Node – If any site is down, proceed.
- LangChain LLM Node – Formats the alert with a humorous message using GPT-4o.
- Line Notify (HTTP Request) – Sends the alert to your LINE group.
- Loop Over Items – Loops through all monitors.
- Filter Down (Status = 9) – Selects only "fully down" sites.
- Supabase Node – Logs these into the synlora_uptime_down table.
- Wait Node – Delays the next alert by 30 minutes to avoid spamming.

**⚙️ Setup Steps**

Required:
- 🔗 UptimeRobot API key
- 📲 LINE Channel Access Token and Group ID
- 🧠 OpenAI key (GPT-4o Mini)
- 🗃️ Supabase project & table

Step-by-step:
1. Go to UptimeRobot, get an API key, and ensure monitors are set up.
2. Create a Supabase table with fields: website, status, uptime_id.
3. Create a LINE Messaging API bot, join it to your group, and get the Access Token and Group ID (userId or groupId).
4. Add your OpenAI API key for GPT-4o Mini (or switch to your preferred LLM).
5. Import the workflow JSON into n8n.
6. Set credentials in all necessary nodes.
7. Activate the workflow.
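A minimal sketch of the "Code Node (Filter)" step, assuming the UptimeRobot node returned a standard getMonitors-style response. The field names (`monitors`, `friendly_name`, `status`) follow UptimeRobot's API, but verify them against your own execution data.

```js
// Keep only monitors UptimeRobot flags as unstable (8) or down (9).
// Field names assume a getMonitors-style response -- check your own execution.
const monitors = $input.first().json.monitors ?? [];

const problems = monitors
  .filter((m) => m.status === 8 || m.status === 9)
  .map((m) => ({
    uptime_id: m.id,
    website: m.friendly_name ?? m.url,
    status: m.status === 9 ? 'down' : 'may be down',
  }));

// One n8n item per affected site, ready for the IF node and the Supabase log.
return problems.map((p) => ({ json: p }));
```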