by Jimleuk
This n8n template demonstrates how easy it is to build an Outlook Calendar Assistant powered by an AI agent equipped with Tools. For teams using Outlook Calendar and Slack who need easier calendar management, this workflow can be a great first step towards introducing powerful AI tools into your daily activities.

How it works
- A Slack Trigger node is configured to catch "bot mention" events in a designated channel.
- The message is parsed using the Edit Fields node to extract only the required attributes of the event (see the sketch at the end of this section).
- An AI Agent equipped with Outlook Calendar Tools enables question-and-answer capability over the organisation's shared calendars and events.
- The AI agent's response is sent back to Slack as a reply to the user's query.

How to use
- The workflow is triggered by @mention-ing the bot followed by the query, e.g. "@bot how many meetings does Paul have to attend this week?"
- To start listening to real mentions, you must activate the workflow and set it to production mode. You must use the production webhook URL for the event subscription.

Some sample queries to try
- "What's included in the product team's sprint demo this week?"
- "Who's booked room 7 for this Thursday?"
- "When is Jim & Nik's sales meeting with Microsoft?"

Requirements
- Slack for Chat and Trigger. To get connected to Slack, see the official n8n docs for Slack Credentials.
- Outlook for Agent Tools. To get connected to Outlook, see the official n8n docs for Outlook Credentials.

Customising this workflow
- Not using Slack? This template can be modified to work with Teams, but it requires a little more configuration.
- Agents can have any number of tools, but an overloaded agent is prone to confusion! If this happens, try splitting into multiple agents serving separate needs.
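The Edit Fields parsing step referenced above boils down to pulling the query text and reply coordinates out of the Slack event. Below is a minimal Code-node sketch of that step, assuming the trigger item exposes the raw Slack app_mention payload under `json.event`; the exact item shape produced by your Slack Trigger may differ, so adjust the paths accordingly.

```javascript
// Extract the user's query and the reply coordinates from a Slack app_mention event.
// Assumption: the trigger item exposes the raw Slack event under `json.event`.
const event = $input.first().json.event ?? $input.first().json;

// Strip the leading bot mention, e.g. "<@U0123ABC> how many meetings..." -> "how many meetings..."
const query = (event.text || '').replace(/<@[^>]+>\s*/g, '').trim();

return [{
  json: {
    query,                                   // the question passed on to the AI agent
    user: event.user,                        // who asked
    channel: event.channel,                  // where to reply
    thread_ts: event.thread_ts || event.ts,  // reply in-thread to the original message
  },
}];
```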
by Yaron Been
Scrape Indeed Job Listings for Hiring Signals Using Bright Data and LLMs

How the flow runs
1. Fill the form with the job position you're hunting for.
2. Bright Data's scraper scrapes Indeed based on your requirements (see the filter-building sketch at the end of this section).
3. The workflow waits for the snapshot.
4. Data returns as JSON.
5. Jobs are appended to Google Sheets.
6. Each row goes to an LLM that analyses whether you're a good fit for the job (based on your prompts).
7. The LLM writes YES or NO next to each job opportunity, helping you find job posts that are relevant to you.

What you need
- Google Sheets with our template.
- Bright Data dataset and API key.
- OpenAI key for GPT‑4o mini (or any other LLM).
- n8n with the required nodes.

Form fields to fill
- **Job Location** – city or region.
- **Keyword** – role or skills.
- **Country** – two‑letter code.

Setup steps
1. Copy the sheet template link.
2. Import the JSON workflow.
3. Add your credentials in the nodes.
4. Test the form manually.
5. Add a schedule if desired.

Bright Data filter example

```json
[
  {
    "country": "US",
    "domain": "indeed.com",
    "keyword_search": "Growth Marketer",
    "location": "Miami",
    "date_posted": "Last 24 hours"
  }
]
```

Tips
- Choose "Last 24 hours" often.
- Increase the wait time for big snapshots.
- Narrow keywords to save credits.

**Need help?** Email me anytime: Yaron@nofluff.online
YouTube: @YaronBeen
LinkedIn: https://www.linkedin.com/in/yaronbeen/
Bright Data Docs: https://docs.brightdata.com/introduction
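As a rough illustration of how the form answers turn into the Bright Data filter shown above, here is a minimal Code-node sketch. The form field names (`Job Location`, `Keyword`, `Country`) mirror the form described above but are assumptions; match them to whatever your Form Trigger actually outputs.

```javascript
// Build the Bright Data dataset filter from the form submission.
// Assumption: the Form Trigger item carries the answers under these exact field names.
const form = $input.first().json;

const filter = [
  {
    country: (form['Country'] || 'US').toUpperCase(), // two-letter country code
    domain: 'indeed.com',
    keyword_search: form['Keyword'],                  // role or skills
    location: form['Job Location'],                   // city or region
    date_posted: 'Last 24 hours',                     // keeps snapshots small and fresh
  },
];

return [{ json: { filter } }];
```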
by SamirLiu
📝 What this workflow does
Every morning at 8 a.m., this workflow fetches the latest AI-related articles from both GNews and NewsAPI. It merges up to 40 new articles daily, selects the 15 most relevant ones on AI technology and applications, and uses GPT-4.1 to generate concise summaries in accurate Traditional Chinese (while preserving essential English technical terms). Each summary also includes the article link for easy reference. The compiled digest is then posted to your designated Telegram account or group.

👥 Who is this for?
- AI enthusiasts, professionals, and anyone interested in artificial intelligence news
- Individuals and teams wanting a concise daily digest of AI developments in Traditional Chinese
- Telegram users who prefer automated information delivery

🎯 What problem does this workflow solve?
With the rapid evolution of AI technology, it can be overwhelming to keep up with new developments. This workflow addresses information overload by automatically collecting, summarizing, and translating the most important AI news each morning — all delivered conveniently to your chosen Telegram channel or group.

⚙️ Setup
🔑 Add NewsAPI and GNews API Keys
- Register for accounts on NewsAPI.org and GNews to obtain your API keys.
- Input your NewsAPI key directly into the Fetch NewsAPI articles node.
- Input your GNews API key into the Fetch GNews articles node.

🤖 Set up your Telegram Bot
- Create a Telegram Bot via BotFather and copy the generated Bot Token.
- In n8n, create Telegram Bot credentials using this token.
- In the Send summary to Telegram node, enter the chat ID of your target user, group, or channel to receive the messages.

🧠 Configure OpenAI Credentials
- In n8n, create a new credential using your OpenAI API key.
- Assign this credential to the GPT-4.1 Model node (or equivalent OpenAI/AI nodes).

After completing these steps, your workflow is fully configured to fetch, summarize, and deliver daily AI news to your selected Telegram chat automatically. A sketch of the merge step is included below.

🛠️ How to customize this workflow
- **🔍 Change the topic:** Update the keywords in the NewsAPI and GNews nodes for other subjects (e.g., "blockchain", "quantum computing").
- **⏰ Adjust delivery time:** Modify the scheduled trigger to your preferred hour.
- **✍️ Tweak summary style or language:** Refine the prompt in the AI summarizer node for different tones or translate into other languages as needed.

📦 Dependencies
- NewsAPI account
- GNews account
- Telegram Bot
- OpenAI API access (for GPT-4.1) or a compatible AI model for the LangChain agent
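For anyone adapting the merge step, here is a minimal Code-node sketch of how articles from the two sources could be combined and de-duplicated before summarization. It assumes both upstream nodes emit items with `title`, `url`, and `description` fields (NewsAPI and GNews both use these names, but verify against your actual node output) and caps the list at 40 articles, as described above.

```javascript
// Merge NewsAPI and GNews results, drop duplicates by URL, and cap the list.
// Assumption: upstream items carry at least { title, url, description }.
const newsApiItems = $('Fetch NewsAPI articles').all();
const gnewsItems = $('Fetch GNews articles').all();

const seen = new Set();
const merged = [];

for (const item of [...newsApiItems, ...gnewsItems]) {
  const article = item.json;
  if (!article.url || seen.has(article.url)) continue; // skip duplicates and malformed entries
  seen.add(article.url);
  merged.push({
    json: {
      title: article.title,
      url: article.url,
      description: article.description || '',
    },
  });
}

// Keep at most 40 articles per day.
return merged.slice(0, 40);
```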
by Samir Saci
Tags: Accessibility, SEO, Blogging, Marketing, Automation, AI, Web Auditing

Context
Hey! I’m Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. In my personal blog, I share insights on how to use AI, automation, and data analytics to improve logistics, operations, and digital sustainability practices.

> Have you heard about accessibility? In this workflow, I use n8n to improve the quality of alternative texts for images on my personal website.

📬 For business inquiries, you can connect with me on LinkedIn

Who is this template for?
This workflow is for:
- **Bloggers** and **website owners** who want to **improve accessibility**
- **SEO professionals** looking to boost page performance
- **Web developers** and **product teams** automating web audits

What does it do?
This n8n workflow:
- 🔍 Downloads the HTML of a blog or web page
- 🖼️ Extracts all `<img>` tags and their `alt` attributes
- 📉 Detects missing or too-short alt texts
- 🤖 Sends those images to GPT-4o (with vision) to generate new alt descriptions
- 📄 Saves the results into a Google Sheet, updating the alt text when needed

How it works
1. Set a page URL using the Set node
2. Download the HTML content
3. Extract image src and alt using a Code node (a sketch of this step follows below)
4. Store the results in a Google Sheet
5. Filter images with altLength < 50
6. Send the image URL to GPT-4o
7. Update the Google Sheet with the newly generated newAlt text

The AI alt texts are concise, descriptive, and accessibility-compliant.

What do I need to get started?
You’ll need:
- A Google Sheet to store the audit results
- An OpenAI account with GPT-4o access

Follow the Guide!
Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to improve the accessibility of your website.
🎥 Watch My Tutorial

Notes
- GPT-generated alt texts are limited to ~125–150 characters for best results
- Use this to comply with WCAG and improve Google indexing
- Easily adapt it to audit multiple domains or e-commerce catalogues

This workflow was built using n8n version 1.85.4
Submitted: April 21, 2025
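As a rough sketch of the extraction step referenced above, the Code node could pull `src`/`alt` pairs out of the downloaded HTML with a regular expression. This is a minimal illustration, assuming the HTML arrives in a field named `data`; for complex markup, a proper HTML parser is more robust.

```javascript
// Extract <img> src/alt pairs from the downloaded HTML (assumed to be in json.data).
const html = $input.first().json.data || '';

const images = [];

for (const tag of html.match(/<img\b[^>]*>/gi) || []) {
  const src = (tag.match(/src\s*=\s*["']([^"']*)["']/i) || [])[1] || '';
  const alt = (tag.match(/alt\s*=\s*["']([^"']*)["']/i) || [])[1] || '';
  images.push({
    json: {
      src,
      alt,
      altLength: alt.length, // used downstream to filter alt texts shorter than 50 characters
    },
  });
}

return images;
```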
by Mario
Dynamically switch between LLMs for AI Agents using LangChain Code

Purpose
This example workflow demonstrates a way to connect multiple LLMs to a single AI Agent/LangChain node and programmatically use one of them – or, in this case, loop through them.

What it does
This AI workflow takes in customer complaints and generates a response that is validated before being returned. If the answer was not satisfactory, the response is generated again with a more capable model.

How it works
- A LangChain Code node allows multiple LLMs to be connected to a single Basic LLM Chain (see the sketch below for what the selection could look like).
- On every call only one LLM is actually connected to the Basic LLM Chain, determined by the index defined in a previous node.
- The AI output is later validated by a Sentiment Analysis node.
- If the result was not satisfactory, the workflow loops back to the beginning and executes the same query with the next available LLM.
- The loop ends either when the result passes the requirements or when all LLMs have been used.

Setup
Clone the workflow and select the corresponding credentials. You'll need an OpenAI account; alternatively, you can swap the LLM nodes with ones from a different provider, like Anthropic, after the import.

How to use
Beware that the order of the LLMs is determined by the order in which they were added to the workflow, not by their position on the canvas. After cloning this workflow into your environment, open the chat and send this example message:

> I really love waiting two weeks just to get a keyboard that doesn’t even work. Great job. Any chance I could actually use the thing I paid for sometime this month?

Most likely you will see that the first validation fails, causing the workflow to loop back to the generation node and try again with the next available LLM. Since AI responses are unpredictable, the results and number of tries will differ for each run.

Disclaimer
Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code node.
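The selection trick at the heart of this template can be hard to picture from prose alone. Below is a minimal sketch of what the LangChain Code node's supply-data code could look like. It assumes that `this.getInputConnectionData('ai_languageModel', 0)` returns the connected chat models as an array and that an upstream node named "Set Index" tracks the current attempt; both the method behaviour and the node reference are assumptions to verify against your n8n version and the actual template.

```javascript
// LangChain Code node (Supply Data): hand exactly one of the connected LLMs
// to the Basic LLM Chain, picked by the current retry index.
// Assumption: multiple connected chat models arrive as an array.
const llms = await this.getInputConnectionData('ai_languageModel', 0);

// Assumption: an upstream node called "Set Index" tracks which attempt this is.
const index = $('Set Index').first().json.index ?? 0;

// Clamp so the loop never runs past the last (most capable) model.
return llms[Math.min(index, llms.length - 1)];
```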
by Don Jayamaha Jr
📉 Detect key candlestick reversal patterns and volume divergence on Tesla (TSLA) using GPT-4.1 and real-time OHLCV data. This AI agent evaluates 1-hour and 1-day candles and is an essential part of the Tesla Financial Market Data Analyst Tool. It identifies signals like Doji, Engulfing, Hammer, and volume anomalies to support trade entry and exit logic.

⚠️ Not a standalone template — must be triggered by the Tesla Financial Market Data Analyst Tool
🔐 Requires: Alpha Vantage Premium API Key, OpenAI GPT-4.1 access

🔍 What This Agent Does
Calls Alpha Vantage to fetch:
- 🕐 1-hour OHLCV data
- 📅 1-day OHLCV data

GPT-4.1 evaluates:
- 📊 Candlestick patterns like Doji, Engulfing, Shooting Star
- 🔄 Volume divergence (price/volume inconsistency)

Returns a structured JSON output like:

```json
{
  "summary": "Bearish signs detected on 1-day chart. A shooting star formed on high volume while RSI is elevated. Volume divergence seen on 1h chart as price rises but volume weakens.",
  "candlestickPatterns": { "1h": "None", "1d": "Shooting Star" },
  "volumeDivergence": { "1h": "Bearish", "1d": "None" },
  "ohlcv": {
    "1h": { "close": 174.1, "volume": 1430000, "high": 175.0, "low": 173.8 },
    "1d": { "close": 188.3, "volume": 21234000, "high": 189.9, "low": 183.7 }
  }
}
```

🛠️ Setup Instructions
1. Import the Workflow — name it: Tesla_1hour_and_1day_Klines_Tool
2. Install Dependencies — ✅ Tesla Financial Market Data Analyst Tool (this is the trigger parent)
3. Add Required Credentials — Alpha Vantage Premium → via HTTP Query Auth; OpenAI GPT-4.1 → via OpenAI credentials
4. Verify Web Access — this tool fetches data live from Alpha Vantage (see the request sketch at the end of this section):
   - /query?function=TIME_SERIES_INTRADAY&interval=60min
   - /query?function=TIME_SERIES_DAILY
5. Run via Execute Workflow Trigger — this tool will activate only when called by the Financial Analyst Agent. Inputs: message (optional), sessionId (used for memory continuity)

🧠 Agent Architecture

| Component | Description |
| --- | --- |
| Candlestick Data Hour | Fetches 60min TSLA candles via Alpha Vantage |
| Candlestick Data Day | Fetches daily TSLA candles via Alpha Vantage |
| OpenAI Chat Model | GPT-4.1 reasoning engine for pattern detection |
| Simple Memory | Maintains short-term logic context |
| Tesla Klines Agent | LangChain AI agent analyzing both candle and volume |

📌 Sticky Notes Overview
- 📘 Workflow Purpose
- 🧠 Short-Term Memory Notes
- 🔍 1h/1d Data Fetch Logic
- 📉 Candlestick Pattern Types Detected
- 📊 Volume Divergence Definitions
- 🤖 GPT-4.1 Prompt Configuration

🔐 Licensing & Support
© 2025 Treasurium Capital Limited Company. Logic, pattern reasoning, and prompt structure are proprietary IP.
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Automate technical edge: detect TSLA candle reversals and volume anomalies with precision using GPT-4.1 and Alpha Vantage. Required by the Tesla Financial Market Data Analyst Tool.
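For reference, the sketch below shows roughly what the 1-hour fetch looks like outside of n8n's HTTP Request node. The endpoint, the `"Time Series (60min)"` key, and the numbered field names follow Alpha Vantage's public TIME_SERIES_INTRADAY documentation; treat the reshaping as an illustration rather than the exact node configuration used in the template.

```javascript
// Fetch 60-minute TSLA candles from Alpha Vantage and reshape them into simple OHLCV rows.
const API_KEY = 'YOUR_ALPHA_VANTAGE_KEY'; // placeholder

const url =
  'https://www.alphavantage.co/query' +
  '?function=TIME_SERIES_INTRADAY&symbol=TSLA&interval=60min&outputsize=compact' +
  `&apikey=${API_KEY}`;

const response = await fetch(url);
const data = await response.json();

// Alpha Vantage keys each candle by timestamp, with numbered field names.
const series = data['Time Series (60min)'] || {};

const candles = Object.entries(series).map(([time, c]) => ({
  time,
  open: Number(c['1. open']),
  high: Number(c['2. high']),
  low: Number(c['3. low']),
  close: Number(c['4. close']),
  volume: Number(c['5. volume']),
}));

console.log(candles.slice(0, 3)); // most recent candles
```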
by Don Jayamaha Jr
📰 This AI-powered agent performs real-time sentiment analysis on Tesla (TSLA) news to support trading decisions. It aggregates headlines from 5 trusted sources and uses DeepSeek Chat to classify sentiment and generate structured summaries. This tool is a critical sub-agent in the broader Tesla Quant Trading AI Agent system.

⚠️ Not standalone — this agent is designed to be executed by the Tesla Quant Trading AI Agent.
⚙️ Requires: DeepSeek Chat API Key

🔌 Workflow Role
This tool processes Tesla-related news and produces output like:

```json
{
  "sentiment": "bullish",
  "summary": "Tesla stock rallied today after strong delivery numbers and Cybertruck updates. Analysts remain optimistic.",
  "topHeadlines": [
    "Tesla beats Q2 delivery forecast – Yahoo Finance",
    "Cybertruck ramps up in Texas – Electrek",
    "Berlin Gigafactory expands battery production – CleanTechnica"
  ]
}
```

Its output feeds directly into the master trading agent’s final trade report.

📰 News Sources Used
This agent collects real-time headlines from:
- Google News (filtered by “Tesla” or “TSLA”)
- Yahoo Finance (TSLA-specific feed)
- Electrek (Tesla archive)
- CleanTechnica (Tesla sustainability news)
- TeslaNorth (app/product release updates)

These five tools are always queried together to ensure market-wide signal coverage.

🤖 What the Agent Does
- Pulls headlines from all 5 Tesla-specific RSS feeds (see the merge sketch at the end of this section)
- Uses DeepSeek Chat to:
  - Analyze narrative tone (bullish / bearish / neutral)
  - Identify macro/financial drivers
  - Generate a 2–3 sentence summary
  - Return the top 3–5 headlines
- Outputs structured JSON for downstream use

🛠️ Setup Instructions
1. Install & Name — Import this file and name it: Tesla_News_and_Sentiment_Analyst_Tool
2. Add DeepSeek API Credentials — Go to: Credentials → Add New → DeepSeek API. Save as: DeepSeek account
3. Internet Access Required — Ensure RSS feeds can fetch live headlines. Works best with a cloud-hosted n8n instance or a tunnel-enabled local install
4. Must Be Triggered by Parent — Triggered via Execute Workflow by the Tesla Quant Trading AI Agent. Requires these inputs: message (optional query context), sessionId (passed to maintain short-term memory across executions)

🧠 Agent Architecture

| Node Name | Function |
| --- | --- |
| DeepSeek Chat Model | Performs AI-based sentiment analysis |
| Tesla News and Sentiment Analyst | Combines results, formats output in strict JSON |
| Simple Memory | Stores session-level context (short-term memory) |
| 5x RSS nodes | Aggregate Tesla news from trusted media outlets |

📌 Sticky Notes Included
- 🟢 Trigger from Parent Workflow – Executed only by the main TSLA agent
- 🟠 News Feeds Overview – Lists and explains each of the 5 feeds
- 🧠 DeepSeek Chat Notes – Describes LLM behavior and parsing role
- 🔵 Short-Term Memory – Buffers sentiment context during the user session
- 📘 Sentiment Analyst Agent – Summarizes key responsibilities

📎 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. This architecture, workflow structure, and prompt design are licensed for educational and operational use only. Commercial resale or rebranding prohibited without authorization.
🔗 Creator: Don Jayamaha
🔗 Templates: https://n8n.io/creators/don-the-gem-dealer/

🚀 Power your TSLA trading with AI-driven sentiment—built with DeepSeek Chat and 5 trusted news sources. This tool is required by the Tesla Quant Trading AI Agent.
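To make the aggregation step concrete, here is a minimal Code-node sketch of how the five RSS feeds could be merged, sorted, and trimmed before being handed to DeepSeek Chat. The node names and the item fields (`title`, `link`, `pubDate`, which n8n's RSS Read node typically emits) are assumptions to check against the actual workflow.

```javascript
// Merge headlines from the five Tesla RSS feed nodes, newest first, capped for the LLM.
// Assumption: each RSS node emits items with { title, link, pubDate }.
const feedNodes = ['Google News', 'Yahoo Finance', 'Electrek', 'CleanTechnica', 'TeslaNorth'];

const headlines = [];
for (const name of feedNodes) {
  for (const item of $(name).all()) {
    headlines.push({
      source: name,
      title: item.json.title,
      link: item.json.link,
      published: new Date(item.json.pubDate || 0),
    });
  }
}

// Newest first; keep a manageable number of headlines for the sentiment prompt.
headlines.sort((a, b) => b.published - a.published);

return [{ json: { headlines: headlines.slice(0, 25) } }];
```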
by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works
This template shows how to calculate a workflow evaluation metric: whether a specific tool was called by an agent.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one. More info
- We make sure that the agent outputs the list of tools that it used
- We then check whether the expected tool (from the dataset) is in that list (a sketch of this check follows below)
- Finally we pass this information back to n8n as a metric
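A minimal Code-node sketch of the check itself is below. It assumes the agent's intermediate steps have already been reduced to an array of tool names in `toolsUsed` and that the dataset row provides the expected tool under `expectedTool`; both field names are illustrative rather than taken from the template.

```javascript
// Score 1 if the expected tool appears in the list of tools the agent actually called.
// Assumption: upstream nodes provide `toolsUsed` (array of names) and `expectedTool` (string).
const { toolsUsed = [], expectedTool = '' } = $input.first().json;

const normalize = (name) => String(name).trim().toLowerCase();
const used = toolsUsed.map(normalize);

const correctToolCalled = used.includes(normalize(expectedTool)) ? 1 : 0;

// This value is what gets passed back to n8n as the evaluation metric.
return [{ json: { correctToolCalled } }];
```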
by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works
This template shows how to calculate a workflow evaluation metric: whether an output matches an expected output (i.e. has the same meaning). The workflow takes questions about the causes of historical events and compares the generated answers with the reference answers in the dataset.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular chat trigger so that the workflow can be started from either one. More info
- If we're evaluating (i.e. the execution started from the evaluation trigger), we calculate the correctness metric using AI (a sketch of such a grading prompt follows below)
- We pass this information back to n8n as a metric
- If we're not evaluating, we skip calculating the metric to reduce cost
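As an illustration of the AI-based correctness check, a Code node could assemble a judge prompt like the one below and send it to a chat model, asking for a single numeric score. The field names (`question`, `referenceAnswer`, `actualAnswer`) are assumptions; the template itself may phrase its prompt differently.

```javascript
// Build a simple LLM-as-judge prompt that grades an answer against a reference.
// Assumption: the incoming item carries question, referenceAnswer, and actualAnswer fields.
const { question, referenceAnswer, actualAnswer } = $input.first().json;

const judgePrompt = [
  'You are grading an answer for factual correctness.',
  `Question: ${question}`,
  `Reference answer: ${referenceAnswer}`,
  `Submitted answer: ${actualAnswer}`,
  'Reply with a single number between 0 and 1,',
  'where 1 means the submitted answer has the same meaning as the reference and 0 means it does not.',
].join('\n');

// Pass the prompt to a downstream chat-model node; its numeric reply becomes the metric.
return [{ json: { judgePrompt } }];
```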
by Amit Mehta
How it works
This workflow automates the entire LinkedIn content distribution process — from AI-powered post creation to auto-posting on both personal LinkedIn profiles and LinkedIn groups, using GPT-4o and Google Sheets as the content source and control panel.
- Auto-generates professional LinkedIn posts from spreadsheet topics using GPT-4o.
- Posts to your LinkedIn profile and multiple groups.
- Updates status to avoid duplicate posting.
- Fully customizable and reusable with your spreadsheet.

Set up Steps
1. Create and upload the spreadsheet
   - Name it: Linkedin Post
   - Sheet1 (for post topics) — Columns: ID | Linkedin Post Title | Status. Add post titles under Linkedin Post Title and set Status to Pending.
   - Create a new sheet named "Groups" (for group distribution) — Column: GroupIds. Add LinkedIn Group IDs, one per row.
2. Connect the Google Sheets nodes. Connect your Google account to these nodes:
   - Linkedin Post topic (reads post topics)
   - Get group id (reads LinkedIn groups)
   - Update Status (writes back the status after posting)
3. Configure GPT-4o (OpenAI)
   - Add your OpenAI API key in the Linkedin Post creator node.
   - This node generates high-quality content from your topic titles.
4. Connect your LinkedIn account
   - Add your LinkedIn credentials in the Linkedin user detail node.
   - Ensure appropriate permissions to post to your profile and groups.
5. Activate the workflow. Once live, the workflow will:
   - Monitor the Google Sheet for Pending posts (see the filter sketch at the end of this section).
   - Generate content via GPT-4o.
   - Post to your LinkedIn profile and each LinkedIn group listed in the Groups sheet.
   - Update the post Status to Posted.

Customization Tips
Want to personalize this template?
- Change the AI tone or style in the OpenAI node prompt
- Add a scheduler node if you'd like to post at fixed intervals
- Use a Slack or Telegram approval step before posting
- Integrate analytics tools to track post performance

Suggested Sticky Notes for Workflow

| Node or Section | Sticky Note Content |
| --- | --- |
| Linkedin Post topic | Reads the topic titles and statuses from Sheet1 |
| OpenAI (GPT-4o) | Generates content using the topic title — you can modify the tone/prompt here |
| Linkedin user detail | Your personal LinkedIn credentials — required to post |
| Group loop | Iterates through LinkedIn Group IDs and posts the content |
| Update Status | Updates the spreadsheet so the topic isn't re-posted |
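If you prefer to do the Pending filter in code rather than in the Google Sheets node's own filter options, a Code-node sketch could look like this. The column names (`ID`, `Linkedin Post Title`, `Status`) match the spreadsheet layout described above; this is an optional illustration, not a required part of the template.

```javascript
// Keep only spreadsheet rows that still need to be posted.
// Column names follow the Sheet1 layout: ID | Linkedin Post Title | Status.
const pending = $input.all().filter(
  (item) => String(item.json['Status']).trim().toLowerCase() === 'pending'
);

// Each remaining row's title is what the GPT-4o node turns into a full LinkedIn post.
return pending.map((item) => ({
  json: {
    id: item.json['ID'],
    title: item.json['Linkedin Post Title'],
  },
}));
```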
by Davi Saranszky Mesquita
Make OpenAI Citation for File Retrieval RAG

Use case
In this example, we will ensure that all texts from the OpenAI assistant's file search include citations and sources from the vector store files. We can also format the output with Markdown or HTML tags. This is necessary because the assistant sometimes generates strange characters, and it also lets us use dynamic references such as citations 1, 2, 3, for example.

What this workflow does
In this workflow, we will use an OpenAI assistant created within their interface, equipped with a vector store containing some files for file retrieval. The assistant will perform the file search within the OpenAI infrastructure and will return the content with citations. We will make an HTTP request to retrieve all the details we need to format the text output.

Setup
Insert an OpenAI key

How to adjust it to your needs
At the end of the workflow, we have a block of code that formats the output, and there we can add Markdown tags to create links (a sketch of this formatting step follows below). Optionally, we can transform the Markdown formatting into HTML.
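Here is a minimal sketch of what that formatting code could look like, assuming the HTTP request returns an Assistants API message whose `content[0].text` carries a `value` string plus an `annotations` array with `file_citation` entries; verify the exact payload shape your request returns before relying on these paths.

```javascript
// Replace OpenAI Assistants citation placeholders (e.g. "【4:0†source】") with numbered
// Markdown references, and list the cited file IDs at the bottom of the message.
const message = $input.first().json;
const textBlock = message.content?.[0]?.text ?? { value: '', annotations: [] };

let value = textBlock.value;
const sources = [];

(textBlock.annotations || []).forEach((annotation, i) => {
  if (annotation.type !== 'file_citation') return;
  const refNumber = i + 1;
  // Swap the raw placeholder text for a readable [n] marker.
  value = value.replace(annotation.text, ` [${refNumber}]`);
  sources.push(`[${refNumber}] file: ${annotation.file_citation?.file_id}`);
});

const formatted = sources.length ? `${value}\n\nSources:\n${sources.join('\n')}` : value;

return [{ json: { formatted } }];
```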
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Relevance", which in this scenario measures the relevance of the agent's response to the user's question. The scoring approach is adapted from the open-source evaluations project RAGAS; you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_relevance.py

How it works
This evaluation works best for Q&A agents. For our scoring, we analyse the agent's response and ask another AI to generate a question from it. This generated question is then compared to the original question using cosine similarity (see the sketch below). A high score indicates relevance and the agent's successful ability to answer the question, whereas a low score means the agent may have added too much irrelevant info, gone off script or hallucinated.

Requirements
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
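For reference, the cosine-similarity comparison can be done in a small Code node once both questions have been embedded. The sketch below assumes the two embedding vectors arrive as numeric arrays named `originalEmbedding` and `generatedEmbedding`; how you obtain the embeddings (e.g. via an embeddings request to your model provider) is up to your workflow.

```javascript
// Cosine similarity between the embedding of the original question and the embedding
// of the question regenerated from the agent's answer (RAGAS-style answer relevance).
const { originalEmbedding, generatedEmbedding } = $input.first().json;

function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Higher is better: close to 1 means the answer stays on-topic for the original question.
return [{ json: { relevance: cosineSimilarity(originalEmbedding, generatedEmbedding) } }];
```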