by Mihai Farcas
This workflow demonstrates a Retrieval Augmented Generation (RAG) chatbot that lets you chat with the GitHub API Specification (documentation) using natural language. Built with n8n, OpenAI's LLMs, and the Pinecone vector database, it provides accurate and context-aware responses to your questions about how to use the GitHub API. You could adapt this to any OpenAPI specification for any public or private API, thus creating a documentation chatbot that anyone in your company can use.

**How it works:**
- **Data Ingestion:** The workflow fetches the complete GitHub API OpenAPI 3 specification directly from the GitHub repository.
- **Chunking and Embeddings:** It splits the large API spec into smaller, manageable chunks. OpenAI's embedding models then generate vector embeddings for each chunk, capturing their semantic meaning (a minimal sketch of this ingestion path follows this description).
- **Vector Database Storage:** These embeddings, along with the corresponding text chunks, are stored in a Pinecone vector database.
- **Chat Interface and Query Processing:** The workflow provides a simple chat interface. When you ask a question, it generates an embedding for your query using the same OpenAI model.
- **Semantic Search and Retrieval:** Pinecone is queried to find the most relevant text chunks from the API spec based on the query embedding.
- **Response Generation:** The retrieved chunks and your original question are fed to OpenAI's gpt-4o-mini LLM, which generates a concise, informative, and contextually relevant answer, including code snippets when applicable.

**Set up steps:**
- **Create accounts:** You'll need accounts with OpenAI and Pinecone.
- **API keys:** Obtain API keys for both services.
- **Configure credentials:** In your n8n environment, configure credentials for OpenAI and Pinecone using your API keys.
- **Import the workflow:** Import this workflow into your n8n instance.
- **Pinecone Index:** Ensure you have a Pinecone index named "n8n-demo" or adjust the workflow accordingly. The workflow is set up to work with this index out of the box.
- **Setup Time:** Approximately 15-20 minutes.

**Why use this workflow?**
- **Learn RAG in Action:** This is a practical, hands-on example of how to build a RAG-powered chatbot.
- **Adaptable Template:** Easily modify this workflow to create chatbots for other APIs or knowledge bases.
- **n8n Made Easy:** See how n8n simplifies complex integrations between data sources, vector databases, and LLMs.
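For orientation, here is a rough sketch (outside n8n) of what the ingestion step does under the hood: chunk the spec, embed the chunks with OpenAI, and upsert them into Pinecone. The raw spec URL, the `text-embedding-3-small` model, the chunk size, and the index host placeholder are illustrative assumptions, not settings taken from the workflow itself.

```typescript
// Illustrative sketch of the ingestion path: fetch -> chunk -> embed -> upsert.
// Assumes Node 18+ (global fetch). PINECONE_HOST is the data-plane host of your
// "n8n-demo" index (shown in the Pinecone console); chunk size is arbitrary.

const OPENAI_KEY = process.env.OPENAI_API_KEY!;
const PINECONE_KEY = process.env.PINECONE_API_KEY!;
const PINECONE_HOST = process.env.PINECONE_HOST!; // e.g. https://n8n-demo-xxxx.svc.<region>.pinecone.io

function chunk(text: string, size = 2000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

async function embed(input: string[]): Promise<number[][]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-3-small", input }),
  });
  const json = await res.json();
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}

async function main() {
  // 1. Fetch the OpenAPI spec (raw JSON from GitHub's rest-api-description repo; path assumed).
  const spec = await (await fetch(
    "https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json"
  )).text();

  // 2. Chunk and embed (only a few chunks here to keep the sketch small).
  const chunks = chunk(spec).slice(0, 5);
  const vectors = await embed(chunks);

  // 3. Upsert into Pinecone, storing the source text as metadata for retrieval.
  await fetch(`${PINECONE_HOST}/vectors/upsert`, {
    method: "POST",
    headers: { "Api-Key": PINECONE_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      vectors: vectors.map((values, i) => ({ id: `chunk-${i}`, values, metadata: { text: chunks[i] } })),
    }),
  });
}

main();
```

At query time the same embedding model is applied to the user's question, Pinecone returns the nearest chunks, and those chunks are passed to gpt-4o-mini as context.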
by Teddy
**Webhook | Paper Summarization**

**Who is this for?**
This workflow is designed for researchers, students, and professionals who frequently read academic papers and need concise summaries. It is useful for anyone who wants to quickly extract key information from research papers hosted on arXiv.

**What problem is this workflow solving?**
Academic papers are often lengthy and complex, making it time-consuming to extract essential insights. This workflow automates the process of retrieving, processing, and summarizing research papers, allowing users to focus on key findings without manually reading the entire paper.

**What this workflow does**
This workflow extracts the content of an arXiv research paper, processes its abstract and main sections, and generates a structured summary. It provides a well-organized output containing the Abstract Overview, Introduction, Results, and Conclusion, ensuring that users receive critical information in a concise format.

**Setup**
1. Ensure you have n8n installed and configured.
2. Import this workflow into your n8n instance.
3. Configure an external trigger using the Webhook node to accept paper IDs.
4. Test the workflow by providing an arXiv paper ID.
5. (Optional) Modify the summarization model or output format according to your preferences.

**How to customize this workflow to your needs**
- Adjust the HTTPRequest node to fetch papers from other sources beyond arXiv.
- Modify the Summarization Chain node to refine the summary output.
- Enhance the Reorganize Paper Summary step by integrating additional language models.
- Add an email or Slack notification step to receive summaries directly.

**Workflow Steps**
1. Webhook receives a request with an arXiv paper ID.
2. Send an HTTP request using "Request to Paper Page" to fetch the HTML content of the paper (see the sketch after this description).
3. Extract the abstract and sections using "Extract Contents".
4. Split out all sections using "Split out All Sections" to process individual paragraphs.
5. Clean up text using "Remove useless links" to remove unnecessary elements.
6. Summarize extracted content using "Summarization Chain".
7. Aggregate summarized content using "Aggregate summarized content".
8. Reorganize the paper summary into structured sections using "Reorganize Paper Summary".
9. Extract key information using "Content Extractor" to classify data into Abstract Overview, Introduction, Results, and Conclusion.
10. Respond to the webhook with the structured summary.

Note: This workflow is designed for use with arXiv research papers but can be adapted to process papers from other sources.
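As a rough illustration of the "Request to Paper Page" and "Extract Contents" steps, the sketch below fetches an arXiv abstract page and pulls out the abstract text. The blockquote selector reflects arXiv's current abstract-page markup and should be treated as an assumption that may need adjusting if arXiv changes its HTML.

```typescript
// Sketch: fetch an arXiv abstract page and extract the abstract text.
// Assumes Node 18+ (global fetch); the "blockquote.abstract" selector is an assumption.

async function fetchAbstract(paperId: string): Promise<string> {
  const res = await fetch(`https://arxiv.org/abs/${paperId}`);
  if (!res.ok) throw new Error(`arXiv returned ${res.status}`);
  const html = await res.text();

  // Grab the abstract block, strip inner tags and the leading "Abstract:" label.
  const match = html.match(/<blockquote class="abstract[^"]*">([\s\S]*?)<\/blockquote>/);
  if (!match) throw new Error("Abstract block not found");
  return match[1]
    .replace(/<[^>]+>/g, " ")
    .replace(/^\s*Abstract:\s*/i, "")
    .replace(/\s+/g, " ")
    .trim();
}

fetchAbstract("1706.03762").then(console.log); // e.g. the "Attention Is All You Need" paper
```

In the workflow, the remaining sections are extracted and cleaned the same way before being handed to the Summarization Chain.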
by Joseph LePage
**Who is this for?**
This workflow template is designed for AI enthusiasts, developers, and privacy-conscious users who want to leverage the power of local large language models (LLMs) without sending data to external services. It's particularly valuable for those running Ollama locally who want intelligent routing between different specialized models.

**What problem is this workflow solving?**
When working with multiple local LLMs, each with different strengths and capabilities, it can be challenging to manually select the right model for each specific task. This workflow automatically analyzes user prompts and routes them to the most appropriate specialized Ollama model, ensuring optimal performance without requiring technical knowledge from the end user.

**What this workflow does**
This intelligent router:
- Analyzes incoming user prompts to determine the nature of the request
- Automatically selects the optimal Ollama model from your local collection based on task requirements
- Routes requests between specialized models for different tasks:
  - Text-only models (qwq, llama3.2, phi4) for various reasoning and conversation tasks
  - Code-specific models (qwen2.5-coder) for programming assistance
  - Vision-capable models (granite3.2-vision, llama3.2-vision) for image analysis
- Maintains conversation memory for consistent interactions
- Processes everything locally for complete privacy and data security

A conceptual sketch of the routing step follows this description.

**Setup**
1. Ensure you have Ollama installed and running locally.
2. Pull the required models mentioned in the workflow using Ollama CLI (e.g., ollama pull phi4).
3. Configure the Ollama API credentials in n8n (default: http://127.0.0.1:11434).
4. Activate the workflow and start interacting through the chat interface.

**How to customize this workflow to your needs**
- Add or remove models from the router's decision framework based on your specific Ollama collection
- Adjust the system prompts in the LLM Router to prioritize different model selection criteria
- Modify the decision tree logic to better suit your specific use cases
- Add additional preprocessing steps for specialized inputs

This workflow demonstrates how n8n can be used to create sophisticated AI orchestration systems that respect user privacy by keeping everything local while still providing intelligent model selection capabilities.
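The sketch below shows the routing idea in plain code, not the n8n node configuration itself: a lightweight local model classifies the prompt, then the prompt is sent to the chosen specialist model via Ollama's local API. The classification prompt and the category-to-model mapping are illustrative assumptions.

```typescript
// Conceptual sketch of prompt routing against a local Ollama server (Node 18+).

const OLLAMA = "http://127.0.0.1:11434";

async function chat(model: string, content: string): Promise<string> {
  const res = await fetch(`${OLLAMA}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages: [{ role: "user", content }], stream: false }),
  });
  const json = await res.json();
  return json.message.content;
}

async function route(prompt: string): Promise<string> {
  // 1. Classify the request with a lightweight model.
  const category = (await chat(
    "llama3.2",
    `Classify this request as exactly one word - "code", "vision", or "text": ${prompt}`
  )).trim().toLowerCase();

  // 2. Map the category to a specialist model from the local collection.
  const model =
    category.includes("code") ? "qwen2.5-coder" :
    category.includes("vision") ? "llama3.2-vision" :
    "phi4";

  // 3. Answer with the selected model.
  return chat(model, prompt);
}

route("Write a function that reverses a linked list").then(console.log);
```

In the workflow this decision is made by the LLM Router's system prompt rather than hard-coded rules, which is why adjusting that prompt changes the selection behavior.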
by Onur
**Description**
This workflow empowers you to effortlessly get answers to your n8n platform questions through an AI-powered assistant. Simply send your query, and the assistant will search documentation, forum posts, and example workflows to provide comprehensive, accurate responses tailored to your specific needs.

> Note: This workflow uses community nodes (n8n-nodes-mcp.mcpClientTool) and will only work on self-hosted n8n instances. You'll need to install the required community nodes before importing this workflow.

**What does this workflow do?**
This workflow streamlines the information retrieval process by automatically researching n8n platform documentation, community forums, and example workflows, providing you with relevant answers to your questions.

**Who is this for?**
- **New n8n Users**: Quickly get answers to basic platform questions and learn how to use n8n effectively
- **Experienced Developers**: Find solutions to specific technical issues or discover advanced workflows
- **Teams**: Boost productivity by automating the research process for n8n platform questions
- **Anyone** looking to leverage AI for efficient and accurate n8n platform knowledge retrieval

**Benefits**
- **Effortless Research**: Automate the research process across n8n documentation, forum posts, and example workflows
- **AI-Powered Intelligence**: Leverage the power of LLMs to understand context and generate helpful responses
- **Increased Efficiency**: Save time and resources by automating the research process
- **Quick Solutions**: Get immediate answers to your n8n platform questions
- **Enhanced Learning**: Discover new workflows, features, and best practices to improve your n8n experience

**How It Works**
1. Receive Request: The workflow starts when a chat message is received containing your n8n-related question.
2. AI Processing: The AI agent powered by OpenAI GPT-4o analyzes your question.
3. Research and Information Gathering: The system searches across multiple sources:
   - Official n8n documentation for general knowledge and how-to guides
   - Community forums for bug reports and specific issues
   - Example workflow repository for relevant implementations
4. Response Generation: The AI agent compiles the research and generates a clear, comprehensive answer.
5. Output: The workflow provides you with the relevant information and step-by-step guidance when applicable.

**n8n Nodes Used**
- When chat message received (Chat Trigger)
- OpenAI Chat Model (GPT-4o mini)
- N8N AI Agent
- n8n-assistant tools (MCP Client Tool - Community Node)
- n8n-assistant execute (MCP Client Tool - Community Node)

**Prerequisites**
- Self-hosted n8n instance
- OpenAI API credentials
- MCP client community node installed
- MCP server configured to search n8n resources

**Setup**
1. Import the workflow JSON into your n8n instance.
2. Configure the OpenAI credentials.
3. Configure your MCP client API credentials.
4. In the n8n-assistant execute node, ensure the parameter is set to "specific" (corrected from "spesific").
5. Test the workflow by sending a message with an n8n-related question.

**MCP Server Connection**
To connect to the MCP server that powers this assistant's research capabilities, you need to use the following URL: https://smithery.ai/server/@onurpolat05/n8n-assistant

This MCP server is specifically designed to search across three types of n8n resources:
- Official documentation for general platform information and workflow creation guidance
- Community forums for bug-related issues and troubleshooting
- Example workflow repositories for reference implementations

Configure this URL in your MCP client credentials to enable the assistant to retrieve relevant information based on user queries.

This workflow combines the convenience of chat with the power of AI to provide a seamless n8n platform research experience. Start getting instant answers to your n8n questions today!
by Don Jayamaha Jr
Analyze exchange data, market indexes, and community sentiment from CoinMarketCap—powered by AI.

This sub-agent provides access to exchange listings, token holdings, metadata, and high-level metrics like the CMC 100 Index and the Fear & Greed Index. It's designed for use within your larger CoinMarketCap AI Analyst system or as a standalone workflow. This agent can be triggered by a supervisor or manually used with message and sessionId inputs.

**Supported Tools (5 Total)**
- 🔍 Exchange Map – Get CoinMarketCap IDs, names, and slugs for exchanges (used as lookup before deeper queries).
- 🧾 Exchange Info – Metadata including launch date, social links, country, and operational status.
- 💰 Exchange Assets – Token balances, wallet addresses, and total USD value held by a specific exchange.
- 📈 CoinMarketCap 100 Index – Constituents and weights of the CMC 100 Index, updated live.
- 😱 Fear & Greed Index – Market sentiment score updated daily, ranging from Extreme Fear to Extreme Greed.

**What You Can Do with This Agent**
🔹 Map exchanges to retrieve their ID and slug
🔹 Analyze exchange holdings by token and blockchain
🔹 Pull metadata for major CEXs like Binance or Coinbase
🔹 Compare global sentiment using the Fear & Greed Index
🔹 Access index data to understand CMC's top 100 crypto asset breakdown

**Example Queries You Can Use**
✅ "What is the latest Fear and Greed Index reading?"
✅ "Get a list of all exchanges on CoinMarketCap."
✅ "What tokens are held by Binance?"
✅ "Retrieve metadata for Coinbase."
✅ "Show me the top assets in the CMC 100 Index."

**Agent Architecture**
- **AI Brain**: GPT-4o-mini
- **Memory**: Window buffer memory using sessionId
- **Tools**: 5 API-connected nodes
- **Trigger**: External input via message and sessionId

**Setup Instructions**
1. Get a CoinMarketCap API Key – Apply here: https://coinmarketcap.com/api/
2. Configure n8n Credentials – Use HTTP Header Auth to store your CoinMarketCap API key (a request sketch follows this description).
3. Optional: Trigger from a Supervisor – Connect to a parent agent using Execute Workflow with message and sessionId inputs.
4. Test Sample Prompts – "Get all exchanges", "Fetch CMC index", "Show Binance token holdings"

**Sticky Notes Included**
- Exchange & Community Guide – Explains agent purpose and component connections
- Usage & Examples – Walkthrough for sample use cases
- Error Handling & Licensing – Includes API error code reference and licensing details

✅ Final Notes
This agent is part of a broader CoinMarketCap AI Analyst System. Visit my Creator profile to download all available sub-agents and supervisor flows.

Understand exchange behavior and community sentiment—automated with AI and CoinMarketCap.
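For reference, this is roughly what the HTTP Header Auth pattern looks like behind the Exchange Map tool: CoinMarketCap expects the API key in an `X-CMC_PRO_API_KEY` header. The query parameters and the result mapping are illustrative.

```typescript
// Sketch of the Exchange Map lookup: /v1/exchange/map returns the id/slug
// you then pass to deeper queries (exchange info, exchange assets).

const CMC_KEY = process.env.CMC_API_KEY!;

async function exchangeMap(limit = 10) {
  const url = new URL("https://pro-api.coinmarketcap.com/v1/exchange/map");
  url.searchParams.set("limit", String(limit));

  const res = await fetch(url, {
    headers: { "X-CMC_PRO_API_KEY": CMC_KEY, Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`CoinMarketCap returned ${res.status}`);

  const { data } = await res.json();
  return data.map((e: { id: number; name: string; slug: string }) => ({
    id: e.id,
    name: e.name,
    slug: e.slug,
  }));
}

exchangeMap().then(console.log); // e.g. [{ id: ..., name: "Binance", slug: "binance" }, ...]
```

Inside the workflow, the agent performs this lookup first and then feeds the returned id or slug into the Exchange Info and Exchange Assets tools.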
by Don Jayamaha Jr
Meet your AI-powered crypto data analyst—fully integrated with CoinMarketCap APIs.

This workflow acts as the supervisor agent for a multi-agent architecture built in n8n, connecting three powerful sub-agents to extract real-time insights from centralized and decentralized markets. It's the ultimate tool for crypto traders, analysts, developers, and researchers who need strategic multi-source intelligence—all through Telegram. This workflow requires 3 sub-agent templates to function correctly. See below.

🔌 Required Sub-Workflows (Install First)
- CoinMarketCap Crypto Agent Tool → Token prices, metadata, conversions, listings
- CoinMarketCap Exchange & Community Agent Tool → Exchange info, token holdings, Fear & Greed index
- CoinMarketCap DEXScan Agent Tool → DEX trading pairs, liquidity, OHLCV data

Download all from my Creator Profile: https://n8n.io/creators/don-the-gem-dealer/

**What Makes This Workflow Special?**
This is not just another API wrapper—it's an intelligent routing agent powered by GPT-4o-mini, capable of:
- Understanding complex user queries
- Choosing the appropriate tool workflow
- Structuring the API request
- Executing sub-workflows
- Formatting the output
- Returning insights via Telegram

It connects three domains of market data:
- **Cryptocurrencies (CEX)**
- **Exchanges & Sentiment**
- **DEX trading data**

🔍 What You Can Do
- 💰 Token Intelligence – Get token metadata, price, volume, supply; compare rankings and conversions
- 🏦 Exchange Insights – View assets held by exchanges; track the CMC 100 Index and Fear & Greed Score
- 🌐 DEX Market Analysis – Analyze pair quotes, historical OHLCV, live trades; discover the top DEXs by volume across blockchains

✅ Example Questions to Ask
- "What's the market cap of Ethereum today?"
- "Show liquidity and volume for SOL/USDT on Solana"
- "Get token holdings for Binance"
- "Compare BTC price on Uniswap vs Binance"
- "What's the Fear & Greed index right now?"

🛠️ Setup Instructions
1. Create Telegram Bot – Use @BotFather to get your bot token.
2. Get CoinMarketCap API Key – Apply here: https://coinmarketcap.com/api/
3. Install Sub-Agent Templates – Required: Crypto Agent Tool, Exchange & Community Tool, DEXScan Tool
4. Configure Credentials in n8n – Add both Telegram and CoinMarketCap keys as HTTP Header Auth.
5. Deploy & Test – Ask your Telegram bot: "Top 10 tokens by 24h volume" or "Convert 5 ETH to USD"

**Workflow Architecture**
- **AI Brain**: GPT-4o-mini
- **Memory**: Windowed buffer memory via sessionId
- **Tool Agents**: toolWorkflow() → routes requests to the appropriate sub-agent
- Executes real-time API queries and returns structured output

**Included Sticky Notes**
- **System Overview**
- **Error Handling Guide (200, 400, 401, 429, 500)**
- **Step-by-Step Usage Instructions**
- **Prompt Examples + API Docs**
- **Legal & Licensing Notes**

Your crypto insights—smarter, faster, and all in one Telegram message.
by Don Jayamaha Jr
Access real-time cryptocurrency prices, market rankings, metadata, and global stats—powered by GPT-4o and CoinMarketCap!

This modular AI-powered agent is part of a broader CoinMarketCap multi-agent system designed for crypto analysts, traders, and developers. It uses the CoinMarketCap API and intelligently routes queries to the correct tool using AI. This agent can be used standalone or triggered by a supervisor AI agent for multi-agent orchestration.

**Supported API Tools (6 Total)**
This agent intelligently selects from the following tools to answer your crypto-related questions:

🔍 Tool Summary
- Crypto Map – Lookup CoinMarketCap IDs and active coins
- Crypto Info – Get metadata, whitepapers, and social links
- Crypto Listings – Ranked coins by market cap
- CoinMarketCap Price – Live prices, volume, and supply
- Global Metrics – Total market cap, BTC dominance
- Price Conversion – Convert between crypto and fiat

**What You Can Do with This Agent**
🔹 Get live prices and volume for tokens (e.g., BTC, ETH, SOL)
🔹 Convert crypto → fiat or fiat → crypto instantly (see the sketch after this description)
🔹 Retrieve whitepapers, logos, and website links for any token
🔹 Analyze total market cap, BTC dominance, and circulating supply
🔹 Discover new tokens and track their CoinMarketCap IDs
🔹 View the top 100 coins ranked by market cap or volume

**Example Queries**
✅ "What is the CoinMarketCap ID for PEPE?"
✅ "Show me the top 10 cryptocurrencies by market cap."
✅ "Convert 5 ETH to USD."
✅ "What's the 24h volume for ADA?"
✅ "Get the global market cap and BTC dominance."

**AI Architecture**
- **AI Brain**: GPT-4o-mini
- **Memory**: Session buffer with sessionId
- **Agent Type**: Subworkflow AI tool
- **Connected APIs**: 6 CoinMarketCap endpoints
- **Trigger Mode**: Executes when called by a supervisor (via message and sessionId inputs)

**Setup Instructions**
1. Get a CoinMarketCap API Key – Register here: https://coinmarketcap.com/api/
2. Configure Credentials in n8n – Use HTTP Header Auth with your API key for each connected endpoint.
3. Connect This Agent to a Supervisor Workflow (Optional) – Trigger this agent using Execute Workflow with inputs message and sessionId.
4. Test Prompts – Try asking: "Convert 1000 DOGE to BTC" or "Top 5 coins in EUR"

**Included Sticky Notes**
- Crypto Agent Guide – Agent overview, node map, and endpoint details
- Usage Instructions – Step-by-step usage and sample prompts
- Error Handling & Licensing – Troubleshooting and IP rights

✅ Final Notes
This agent is part of the CoinMarketCap AI Analyst System, which includes multiple specialized agents for cryptocurrencies, exchanges, community data, and DEX insights. Visit my Creator profile to find the full suite of tools.

Get smarter about crypto—analyze the market in real time with AI and CoinMarketCap.
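As an example of what the Price Conversion tool calls underneath, here is a rough sketch against CoinMarketCap's `/v2/tools/price-conversion` endpoint. The defensive handling of the response shape is an assumption, since symbol-based lookups can return a list of matching assets.

```typescript
// Sketch: convert an amount of one asset into another via CoinMarketCap.

const CMC_KEY = process.env.CMC_API_KEY!;

async function convertPrice(amount: number, symbol: string, convert: string): Promise<number> {
  const url = new URL("https://pro-api.coinmarketcap.com/v2/tools/price-conversion");
  url.searchParams.set("amount", String(amount));
  url.searchParams.set("symbol", symbol);
  url.searchParams.set("convert", convert);

  const res = await fetch(url, { headers: { "X-CMC_PRO_API_KEY": CMC_KEY } });
  if (!res.ok) throw new Error(`CoinMarketCap returned ${res.status}`);

  const json = await res.json();
  // Symbol lookups may return an array of matches; take the first one.
  const entry = Array.isArray(json.data) ? json.data[0] : json.data;
  return entry.quote[convert].price;
}

convertPrice(5, "ETH", "USD").then((usd) => console.log(`5 ETH ≈ $${usd.toFixed(2)}`));
```

The other five tools follow the same header-auth pattern against their respective cryptocurrency and global-metrics endpoints.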
by Don Jayamaha Jr
Gain full visibility into decentralized exchanges using CoinMarketCap's DEXScan API—powered by AI.

This workflow is part of the CoinMarketCap AI Analyst system and delivers real-time and historical insights on spot trading pairs, DEX liquidity, trading activity, and OHLCV data across chains like Ethereum, Polygon, Solana, and more. Use this workflow as a sub-agent triggered by a parent supervisor workflow, or run it manually with inputs sessionId and message.

🔧 Supported Tools (8 Total)
- DEX Metadata → Static info (name, launch date, logo, URLs)
- DEX Networks List → All supported DEX chains + network metadata
- DEX Listings Quotes → Ranked list of DEXs with live trading volume, market share
- DEX Pair Quotes (Latest) → Real-time liquidity, price, and buy/sell stats
- DEX OHLCV Historical → Time-series data (daily/hourly/1m)
- DEX OHLCV Latest → Today's price, volume, open/close for pairs
- DEX Trades Latest → Up to 100 recent trades for any DEX pair
- DEX Spot Pairs Latest → Active token pairs across DEXs + filters (volume, liquidity, volatility)

**Agent Architecture**
- **AI Model**: gpt-4o-mini
- **Context Memory**: Window buffer using sessionId
- **Trigger Input**: message, sessionId
- **Execution**: Via Execute Workflow or parent AI supervisor
- **Design**: Tool-based LangChain agent with CMC DEXScan endpoints

💡 Use Cases
🔹 Find top DEXs by 24h volume
🔹 Get spot pairs with highest liquidity on a specific network
🔹 Track historical OHLCV for Uniswap pairs
🔹 View latest trades for SOL/USDC pool
🔹 Analyze tax, pooled % and holders for specific pairs
🔹 Filter pairs by 24h volume, percent change, liquidity, or number of transactions

✅ Example Queries
✅ "Top 5 DEXs by 24h volume on Ethereum"
✅ "Get historical OHLCV for SOL-USDC on Solana"
✅ "Latest trades for a PancakeSwap pair"
✅ "Show all spot pairs with over $500K in liquidity on Polygon"
✅ "Retrieve metadata for Uniswap and SushiSwap"

🛠️ Setup Instructions
1. Get a CoinMarketCap API Key – Sign up at: https://coinmarketcap.com/api/
2. Add API Key to Credentials in n8n – Use the HTTP Header Auth method (a request sketch follows this description).
3. Trigger from Parent Workflow (Optional) – Use Execute Workflow and pass message and sessionId.
4. Test Prompt Ideas – Try: "Compare liquidity of Uniswap and Curve pairs on Ethereum"

**Sticky Notes Included**
- DEXScan Agent Guide – Workflow architecture + supported tools
- Usage & API Call Examples – Prompts, test inputs, setup flow
- Error Codes + Licensing – 400/401/429/500 troubleshooting, IP rights

✅ Final Notes
This agent is part of the CoinMarketCap AI Analyst System, which includes multiple specialized agents for cryptocurrencies, exchanges, and community data. Visit my Creator profile to find the full suite of tools.

Master DEX analytics with AI—get powerful liquidity, trading, and pair insights in seconds.
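For orientation, a DEXScan call from this agent looks roughly like the sketch below. The exact endpoint path and sort parameter are assumptions inferred from the "DEX Listings Quotes" tool name above; verify them against the CoinMarketCap DEXScan API reference before relying on this. Authentication works like the other CoinMarketCap endpoints.

```typescript
// Sketch (assumed endpoint path) of a "top DEXs by 24h volume" query.

const CMC_KEY = process.env.CMC_API_KEY!;

async function topDexesByVolume(limit = 5) {
  const url = new URL("https://pro-api.coinmarketcap.com/v4/dex/listings/quotes"); // assumed path
  url.searchParams.set("limit", String(limit));
  url.searchParams.set("sort", "volume_24h"); // assumed sort key

  const res = await fetch(url, {
    headers: { "X-CMC_PRO_API_KEY": CMC_KEY, Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`CoinMarketCap returned ${res.status}`);
  return (await res.json()).data;
}

topDexesByVolume().then((dexes) => console.log(dexes));
```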
by SamirLiu
📝 What this workflow does
Every morning at 8 a.m., this workflow fetches the latest AI-related articles from both GNews and NewsAPI. It merges up to 40 new articles daily, selects the 15 most relevant ones on AI technology and applications, and uses GPT-4.1 to generate concise summaries in accurate Traditional Chinese (while preserving essential English technical terms). Each summary also includes the article link for easy referral. The compiled digest is then posted to your designated Telegram account or group.

👥 Who is this for?
- AI enthusiasts, professionals, and anyone interested in artificial intelligence news
- Individuals and teams wanting a concise daily digest of AI developments in Traditional Chinese
- Telegram users who prefer automated information delivery

🎯 What problem does this workflow solve?
With the rapid evolution of AI technology, it can be overwhelming to keep up with new developments. This workflow addresses information overload by automatically collecting, summarizing, and translating the most important AI news each morning — all delivered conveniently to your chosen Telegram channel or group.

⚙️ Setup
🔑 Add NewsAPI and GNews API Keys
- Register for accounts on NewsAPI.org and GNews to obtain your API keys.
- Input your NewsAPI key directly into the Fetch NewsAPI articles node (see the sketch after this description).
- Input your GNews API key into the Fetch GNews articles node.

🤖 Set up your Telegram Bot
- Create a Telegram Bot via BotFather and copy the generated Bot Token.
- In n8n, create Telegram Bot credentials using this token.
- In the Send summary to Telegram node, enter the chat ID of your target user, group, or channel to receive the messages.

🧠 Configure OpenAI Credentials
- In n8n, create a new credential using your OpenAI API key.
- Assign this credential to the GPT-4.1 Model node (or equivalent OpenAI/AI nodes).

After completing these steps, your workflow is fully configured to fetch, summarize, and deliver daily AI news to your selected Telegram chat automatically.

🛠️ How to customize this workflow
- 🔍 **Change the topic:** Update the keywords in the NewsAPI and GNews nodes for other subjects (e.g., "blockchain", "quantum computing").
- ⏰ **Adjust delivery time:** Modify the scheduled trigger to your preferred hour.
- ✍️ **Tweak summary style or language:** Refine the prompt in the AI summarizer node for different tones or translate into other languages as needed.

📦 Dependencies
- NewsAPI account
- GNews account
- Telegram Bot
- OpenAI API access (for GPT-4.1) or compatible AI model for Langchain agent
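To make the two ends of this pipeline concrete, here is a rough sketch of fetching AI articles from NewsAPI and posting a digest to a Telegram chat. The query parameters and digest format are illustrative; the GNews call and the GPT-4.1 summarization step sit between these two in the actual workflow.

```typescript
// Sketch: pull recent AI articles from NewsAPI and send a digest via the Telegram Bot API.

const NEWSAPI_KEY = process.env.NEWSAPI_KEY!;
const TG_TOKEN = process.env.TELEGRAM_BOT_TOKEN!;
const TG_CHAT_ID = process.env.TELEGRAM_CHAT_ID!;

async function fetchAiArticles(limit = 20) {
  const url = new URL("https://newsapi.org/v2/everything");
  url.searchParams.set("q", "artificial intelligence OR AI");
  url.searchParams.set("language", "en");
  url.searchParams.set("sortBy", "publishedAt");
  url.searchParams.set("pageSize", String(limit));
  url.searchParams.set("apiKey", NEWSAPI_KEY);

  const json = await (await fetch(url)).json();
  return json.articles as { title: string; url: string }[];
}

async function sendToTelegram(text: string) {
  await fetch(`https://api.telegram.org/bot${TG_TOKEN}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: TG_CHAT_ID, text, disable_web_page_preview: true }),
  });
}

async function main() {
  const articles = await fetchAiArticles();
  const digest = articles
    .slice(0, 15)                                   // keep the 15 most recent items
    .map((a, i) => `${i + 1}. ${a.title}\n${a.url}`) // title plus link, one entry per article
    .join("\n\n");
  await sendToTelegram(digest);
}

main();
```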
by Samir Saci
Tags: Accessibility, SEO, Blogging, Marketing, Automation, AI, Web Auditing

**Context**
Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. In my personal blog, I share insights on how to use AI, automation, and data analytics to improve logistics, operations, and digital sustainability practices.

> Have you heard about accessibility?

In this workflow, I use n8n to improve the quality of alternative texts for images on my personal website.

📬 For business inquiries, you can connect with me on LinkedIn

**Who is this template for?**
This workflow is for:
- **Bloggers** and **website owners** who want to **improve accessibility**
- **SEO professionals** looking to boost page performance
- **Web developers** and **product teams** automating web audits

**What does it do?**
This n8n workflow:
- 🔍 Downloads the HTML of a blog or web page
- 🖼️ Extracts all `<img>` tags and their `alt` attributes
- 📉 Detects missing or too-short alt texts
- 🤖 Sends those images to GPT-4o (with vision) to generate new alt descriptions
- 📄 Saves the results into a Google Sheet, updating the alt text when needed

**How it works**
1. Set a page URL using the Set node
2. Download the HTML content
3. Extract image src and alt using a Code node (see the sketch after this description)
4. Store results in a Google Sheet
5. Filter images with altLength < 50
6. Send the image URL to GPT-4o
7. Update the Google Sheet with the newly generated newAlt text

The AI alt texts are concise, descriptive, and accessibility-compliant.

**What do I need to get started?**
You'll need:
- A Google Sheet to store the audit results
- An OpenAI account with GPT-4o access

**Follow the Guide!**
Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to improve the accessibility of your website.
🎥 Watch My Tutorial

**Notes**
- GPT-generated alt texts are limited to ~125–150 characters for best results
- Use this to comply with WCAG and improve Google indexing
- Easily adapt it to audit multiple domains or e-commerce catalogues

This workflow was built using n8n version 1.85.4
Submitted: April 21, 2025
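The sketch below mirrors the extraction and rewrite logic described above (a Code node plus a GPT-4o call). The regex-based `<img>` scan, the 50-character threshold, and the prompt wording are illustrative; the real workflow writes its rows to Google Sheets rather than to the console.

```typescript
// Sketch: extract <img> src/alt pairs, flag short alt texts, and ask GPT-4o for replacements.

const OPENAI_KEY = process.env.OPENAI_API_KEY!;

interface ImageAudit { src: string; alt: string; altLength: number; }

function extractImages(html: string): ImageAudit[] {
  const images: ImageAudit[] = [];
  // Naive <img> scan - good enough for an audit sketch, not a full HTML parser.
  for (const tag of html.match(/<img\b[^>]*>/gi) ?? []) {
    const src = tag.match(/src=["']([^"']+)["']/i)?.[1] ?? "";
    const alt = tag.match(/alt=["']([^"']*)["']/i)?.[1] ?? "";
    if (src) images.push({ src, alt, altLength: alt.length });
  }
  return images;
}

async function generateAlt(imageUrl: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "Write a concise, accessibility-compliant alt text (max 125 characters) for this image." },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content.trim();
}

async function audit(pageUrl: string) {
  const html = await (await fetch(pageUrl)).text();
  const needsWork = extractImages(html).filter((img) => img.altLength < 50);
  for (const img of needsWork) {
    const newAlt = await generateAlt(new URL(img.src, pageUrl).toString());
    console.log({ ...img, newAlt }); // in the workflow this row goes to the Google Sheet
  }
}

audit("https://example.com/blog-post");
```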
by Mario
Dynamically switch between LLMs for AI Agents using LangChain Code

**Purpose**
This example workflow demonstrates a way to connect multiple LLMs to a single AI Agent/LangChain Node and programmatically use one – or, in this case, loop through them.

**What it does**
This AI workflow takes in customer complaints and generates a response that is validated before being returned. If the answer was not satisfactory, the response is generated again with a more capable model.

**How it works**
- A LangChain Code Node allows multiple LLMs to be connected to a single Basic LLM Chain.
- On every call only one LLM is actually connected to the Basic LLM Chain, determined by the index defined in a previous Node.
- The AI output is later validated by a Sentiment Analysis Node.
- If the result was not satisfactory, it loops back to the beginning and executes the same query with the next available LLM.
- The loop ends either when the result passes the requirements or when all LLMs have been used.

A conceptual sketch of this retry-with-escalation pattern follows this description.

**Setup**
Clone the workflow and select the corresponding credentials. You'll need an OpenAI account; alternatively, you can swap the LLM nodes with ones from a different provider, such as Anthropic, after the import.

**How to use**
Beware that the order of the used LLMs is determined by the order in which they were added to the workflow, not by their position on the canvas. After cloning this workflow into your environment, open the chat and send this example message:

> I really love waiting two weeks just to get a keyboard that doesn't even work. Great job. Any chance I could actually use the thing I paid for sometime this month?

Most likely you will see that the first validation fails, causing it to loop back to the generation node and try again with the next available LLM. Since AI responses are unpredictable, the results and number of tries will differ for each run.

**Disclaimer**
Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code Node.
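The following is a conceptual illustration of the escalation loop, not the contents of the LangChain Code node itself: models are tried in order of capability, and generation is repeated with the next model whenever a simple validation step rejects the draft. The model list and the validation prompt are illustrative assumptions.

```typescript
// Sketch of the retry-with-escalation pattern this workflow implements in n8n.

const OPENAI_KEY = process.env.OPENAI_API_KEY!;
const MODELS = ["gpt-4o-mini", "gpt-4o"]; // ordered least -> most capable

async function complete(model: string, prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  return (await res.json()).choices[0].message.content;
}

// Stand-in for the Sentiment Analysis node: ask a model to grade the draft reply.
async function isSatisfactory(complaint: string, draft: string): Promise<boolean> {
  const verdict = await complete(
    "gpt-4o-mini",
    `Customer complaint:\n${complaint}\n\nDraft reply:\n${draft}\n\nIs the reply empathetic, concrete, and non-defensive? Answer only "yes" or "no".`
  );
  return verdict.trim().toLowerCase().startsWith("yes");
}

async function respond(complaint: string): Promise<string> {
  let draft = "";
  for (const model of MODELS) {           // the index advances on each failed attempt
    draft = await complete(model, `Write a helpful support reply to this complaint:\n${complaint}`);
    if (await isSatisfactory(complaint, draft)) break;
  }
  return draft;                           // the last attempt is returned even if all models failed
}

respond("I really love waiting two weeks just to get a keyboard that doesn't even work.").then(console.log);
```

In the workflow the same idea is expressed by incrementing an index item and letting the LangChain Code node attach only the LLM at that index to the Basic LLM Chain.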
by Don Jayamaha Jr
📉 Detect key candlestick reversal patterns and volume divergence on Tesla (TSLA) using GPT-4.1 and real-time OHLCV data.

This AI agent evaluates 1-hour and 1-day candles and is an essential part of the Tesla Financial Market Data Analyst Tool. It identifies signals like Doji, Engulfing, Hammer, and volume anomalies to support trade entry and exit logic.

⚠️ Not a standalone template — must be triggered by the Tesla Financial Market Data Analyst Tool
🔐 Requires: Alpha Vantage Premium API Key, OpenAI GPT-4.1 access

🔍 What This Agent Does
Calls Alpha Vantage to fetch:
- 🕐 1-hour OHLCV data
- 📅 1-day OHLCV data

GPT-4.1 evaluates:
- 📊 Candlestick patterns like Doji, Engulfing, Shooting Star
- 🔄 Volume divergence (price/volume inconsistency)

Returns a structured JSON output like:

```json
{
  "summary": "Bearish signs detected on 1-day chart. A shooting star formed on high volume while RSI is elevated. Volume divergence seen on 1h chart as price rises but volume weakens.",
  "candlestickPatterns": { "1h": "None", "1d": "Shooting Star" },
  "volumeDivergence": { "1h": "Bearish", "1d": "None" },
  "ohlcv": {
    "1h": { "close": 174.1, "volume": 1430000, "high": 175.0, "low": 173.8 },
    "1d": { "close": 188.3, "volume": 21234000, "high": 189.9, "low": 183.7 }
  }
}
```

🛠️ Setup Instructions
1. Import the Workflow – Name it: Tesla_1hour_and_1day_Klines_Tool
2. Install Dependencies – ✅ Tesla Financial Market Data Analyst Tool (this is the trigger parent)
3. Add Required Credentials – Alpha Vantage Premium → via HTTP Query Auth; OpenAI GPT-4.1 → via OpenAI credentials
4. Verify Web Access – This tool fetches data live from Alpha Vantage (a request sketch follows this description):
   - /query?function=TIME_SERIES_INTRADAY&interval=60min
   - /query?function=TIME_SERIES_DAILY
5. Run via Execute Workflow Trigger – This tool will activate only when called by the Financial Analyst Agent. Inputs: message (optional), sessionId (used for memory continuity)

🧠 Agent Architecture

| Component | Description |
| ----------------------- | --------------------------------------------------- |
| Candlestick Data Hour | Fetches 60min TSLA candles via Alpha Vantage |
| Candlestick Data Day | Fetches daily TSLA candles via Alpha Vantage |
| OpenAI Chat Model | GPT-4.1 reasoning engine for pattern detection |
| Simple Memory | Maintains short-term logic context |
| Tesla Klines Agent | LangChain AI agent analyzing both candle and volume |

📌 Sticky Notes Overview
- 📘 Workflow Purpose
- 🧠 Short-Term Memory Notes
- 🔍 1h/1d Data Fetch Logic
- 📉 Candlestick Pattern Types Detected
- 📊 Volume Divergence Definitions
- 🤖 GPT-4.1 Prompt Configuration
- 🔐 Licensing & Support

© 2025 Treasurium Capital Limited Company. Logic, pattern reasoning, and prompt structure are proprietary IP.
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Automate technical edge: detect TSLA candle reversals and volume anomalies with precision using GPT-4.1 and Alpha Vantage. Required by the Tesla Financial Market Data Analyst Tool.
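For reference, the sketch below shows roughly what the two Alpha Vantage calls behind "Candlestick Data Hour" and "Candlestick Data Day" return and how the OHLCV fields are parsed. The response keys follow Alpha Vantage's documented time-series format; output size and error handling are kept minimal for illustration.

```typescript
// Sketch: fetch TSLA candles from Alpha Vantage and normalize them into OHLCV records.

const AV_KEY = process.env.ALPHA_VANTAGE_API_KEY!;

interface Candle { time: string; open: number; high: number; low: number; close: number; volume: number; }

async function fetchCandles(fn: "TIME_SERIES_INTRADAY" | "TIME_SERIES_DAILY"): Promise<Candle[]> {
  const url = new URL("https://www.alphavantage.co/query");
  url.searchParams.set("function", fn);
  url.searchParams.set("symbol", "TSLA");
  url.searchParams.set("apikey", AV_KEY);
  if (fn === "TIME_SERIES_INTRADAY") url.searchParams.set("interval", "60min");

  const json = await (await fetch(url)).json();
  const seriesKey = fn === "TIME_SERIES_INTRADAY" ? "Time Series (60min)" : "Time Series (Daily)";
  const series = json[seriesKey] ?? {};

  // Alpha Vantage labels fields "1. open" through "5. volume".
  return Object.entries(series).map(([time, v]: [string, any]) => ({
    time,
    open: Number(v["1. open"]),
    high: Number(v["2. high"]),
    low: Number(v["3. low"]),
    close: Number(v["4. close"]),
    volume: Number(v["5. volume"]),
  }));
}

// Both the hourly and daily series are passed to the GPT-4.1 agent for pattern and divergence analysis.
fetchCandles("TIME_SERIES_INTRADAY").then((c) => console.log(c.slice(0, 3)));
```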