by Mihai Farcas
This workflow demonstrates a Retrieval-Augmented Generation (RAG) chatbot that lets you chat with the GitHub API specification (documentation) using natural language. Built with n8n, OpenAI's LLMs, and the Pinecone vector database, it provides accurate, context-aware responses to your questions about how to use the GitHub API. You could adapt this to any OpenAPI specification for any public or private API, creating a documentation chatbot that anyone in your company can use.

How it works:
1. Data Ingestion: The workflow fetches the complete GitHub API OpenAPI 3 specification directly from the GitHub repository.
2. Chunking and Embeddings: It splits the large API spec into smaller, manageable chunks. OpenAI's embedding models then generate vector embeddings for each chunk, capturing their semantic meaning.
3. Vector Database Storage: These embeddings, along with the corresponding text chunks, are stored in a Pinecone vector database.
4. Chat Interface and Query Processing: The workflow provides a simple chat interface. When you ask a question, it generates an embedding for your query using the same OpenAI model.
5. Semantic Search and Retrieval: Pinecone is queried to find the most relevant text chunks from the API spec based on the query embedding (a retrieval sketch follows this entry).
6. Response Generation: The retrieved chunks and your original question are fed to OpenAI's gpt-4o-mini LLM, which generates a concise, informative, and contextually relevant answer, including code snippets when applicable.

Set up steps:
1. Create accounts: You'll need accounts with OpenAI and Pinecone.
2. API keys: Obtain API keys for both services.
3. Configure credentials: In your n8n environment, configure credentials for OpenAI and Pinecone using your API keys.
4. Import the workflow: Import this workflow into your n8n instance.
5. Pinecone index: Ensure you have a Pinecone index named "n8n-demo", or adjust the workflow accordingly. The workflow is set up to work with this index out of the box.

Setup time: Approximately 15-20 minutes.

Why use this workflow?
- Learn RAG in action: This is a practical, hands-on example of how to build a RAG-powered chatbot.
- Adaptable template: Easily modify this workflow to create chatbots for other APIs or knowledge bases.
- n8n made easy: See how n8n simplifies complex integrations between data sources, vector databases, and LLMs.
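A minimal sketch of the retrieval step (steps 4-5) outside n8n, assuming Node 18+ (global fetch), an OpenAI and a Pinecone API key in the environment, and a Pinecone index host URL in a hypothetical PINECONE_INDEX_HOST variable:

```typescript
const OPENAI_KEY = process.env.OPENAI_API_KEY!;
const PINECONE_KEY = process.env.PINECONE_API_KEY!;
const PINECONE_HOST = process.env.PINECONE_INDEX_HOST!; // hypothetical, e.g. "n8n-demo-xxxx.svc.region.pinecone.io"

async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  return (await res.json()).data[0].embedding;
}

async function retrieveChunks(question: string, topK = 5): Promise<string[]> {
  const vector = await embed(question);
  const res = await fetch(`https://${PINECONE_HOST}/query`, {
    method: "POST",
    headers: { "Api-Key": PINECONE_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ vector, topK, includeMetadata: true }),
  });
  const json = await res.json();
  // Each match carries the original spec chunk in its metadata.
  return json.matches.map((m: any) => m.metadata?.text ?? "");
}

retrieveChunks("How do I list issues for a repository?").then(console.log);
```

The same pattern applies to ingestion: embed each chunk and upsert it to the index before any queries run.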
by Marketing Canopy
Automate Pinterest Analysis & AI-Powered Content Suggestions With Pinterest API

This workflow automates the collection, analysis, and summarization of Pinterest Pin data to help marketers optimize content strategy. It gathers Pinterest Pin performance data, analyzes trends using an AI agent, and delivers actionable insights to the Marketing Manager via email. This setup is ideal for content creators and marketing teams who need weekly insights on Pinterest trends to refine their content calendar and audience engagement strategy.

Prerequisites
Before setting up this workflow, ensure you have the following:
1. Pinterest API Access & Developer Account: Sign up at Pinterest Developers and obtain API credentials. Ensure you have access to both Organic and Paid Pin data.
2. Airtable Account & API Key: Create an account at Airtable and set up a database. Obtain an API key from Account Settings.
3. AI Agent for Trend Analysis: An AI-powered agent (such as OpenAI's GPT or a custom ML model) is required to analyze Pinterest trends. Ensure integration with your workflow automation tool (e.g., Zapier, Make, or a custom Python script).
4. Email Automation Setup: Configure an SMTP email service (e.g., Gmail, Outlook, SendGrid) to send the summarized results to the Marketing Manager.

Step-by-Step Guide to Automating Pinterest Pin Analysis
1. Scheduled Trigger for Data Collection: At 8:00 AM (or your preferred time), an automated trigger starts the workflow. Adjust the timing based on your marketing schedule to optimize trend tracking.
2. Fetch Data from Pinterest API: Retrieve recent Pinterest Pin performance data, including impressions, clicks, saves, and engagement rate. Ensure both Organic and Paid Ads data are labeled correctly for clarity. (See the sketch after this section.)
3. Store Data in Airtable: Pins are logged and categorized in an Airtable database for further analysis.

Sample Airtable Template for Pinterest Pins

| Column Name | Description |
|-------------|-------------|
| pin_id | Unique identifier for each Pin |
| created_at | Timestamp of when the Pin was created |
| title | Title of the Pin |
| description | Short description of the Pin |
| link | URL linking to the Pin |
| type | Type of Pin (e.g., organic, ad) |

4. AI Agent Analyzes Pinterest Trends: The AI model reviews the latest Pinterest data and identifies:
- **Trending Topics & Keywords**
- **Engagement Patterns**
- **Audience Interests & Behavior Changes**
- **Optimal Posting Times & Formats**
5. Generate Content Suggestions with AI: The AI Agent recommends new Pin ideas and content calendar updates to maximize engagement. Suggestions include creative formats, hashtags, and timing adjustments for better performance.
6. Summary & Insights Generated by AI: A concise report is created, summarizing Pinterest trends and actionable insights for content strategy.
7. Email Report Sent to the Marketing Manager: The summary is emailed to the Marketing Manager to assist with content planning and execution. The report includes:
- Performance Overview of Recent Pins
- Trending Content Ideas
- Best Performing Pin Formats
- AI-Generated Recommendations

This workflow enables marketing teams to automate Pinterest analysis and optimize their content strategy through AI-driven insights. 🚀
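A hedged sketch of step 2 (fetching Pin performance), assuming a Pinterest v5 access token in PINTEREST_TOKEN and an app with analytics scopes. Endpoint paths and metric names follow the public v5 docs, but verify them against your API access tier before relying on this:

```typescript
const TOKEN = process.env.PINTEREST_TOKEN!;
const BASE = "https://api.pinterest.com/v5";

// List the account's recent Pins.
async function listPins() {
  const res = await fetch(`${BASE}/pins?page_size=25`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  const { items } = await res.json();
  return items as Array<{ id: string; title?: string; link?: string }>;
}

// Pull impressions, clicks, and saves for one Pin over a date range.
async function pinAnalytics(pinId: string, startDate: string, endDate: string) {
  const params = new URLSearchParams({
    start_date: startDate, // e.g. "2024-01-01"
    end_date: endDate,
    metric_types: "IMPRESSION,PIN_CLICK,SAVE",
  });
  const res = await fetch(`${BASE}/pins/${pinId}/analytics?${params}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  return res.json();
}

// Usage: list recent Pins, fetch metrics for each, then write rows to Airtable.
```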
by Juan Carlos Cavero Gracia
Description

This automation template is designed for content creators, digital marketers, and social media managers looking to simplify their video posting workflow. It automates the process of generating engaging video descriptions and uploading content to both Instagram and TikTok, making your social media management more efficient and error-free.

Who Is This For?
- **Content Creators & Influencers:** Streamline your video uploads and focus more on creating content.
- **Digital Marketers:** Ensure consistent posting across multiple platforms with minimal manual intervention.
- **Social Media Managers:** Automate repetitive tasks and maintain a steady online presence.

What Problem Does This Workflow Solve?
Manually creating descriptions and uploading videos to different platforms can be time-consuming and error-prone. This workflow addresses these challenges by:
- **Automating Video Uploads:** Monitors a designated Google Drive folder for new videos.
- **Generating Descriptions:** Uses OpenAI to transcribe video audio and generate engaging, customized social media descriptions.
- **Ensuring Multi-Platform Consistency:** Simultaneously posts your video with the generated description to Instagram and TikTok.
- **Error Notifications:** Optional Telegram integration sends alerts in case of issues, ensuring smooth operations.

How It Works
1. Video Upload: Place your video in the designated Google Drive folder.
2. Description Generation: The automation triggers OpenAI to transcribe your video's audio and generate a captivating description (see the sketch below).
3. Content Distribution: Automatically uploads the video and description to both Instagram and TikTok.
4. Error Handling: Sends Telegram notifications if any issues arise during the process.

Setup
1. Generate an API token at upload-post.com and configure it in both the Upload to TikTok and Upload to Instagram nodes.
2. Google Cloud Project: Create a project in Google Cloud Platform, enable the Google Drive API, and generate the necessary OAuth credentials to connect to your Google Drive account.
3. Set up your Google Drive folder in the Google Drive Trigger node.
4. Customize the OpenAI prompt in the Generate Social Description node to match your brand's tone.
5. (Optional) Configure Telegram credentials for error notifications.

Requirements
- **Accounts:** upload-post.com, Google Drive, and (optionally) Telegram.
- **API Keys & Credentials:** upload-post.com API token, OpenAI API key, and (optional) Telegram bot token.
- **Google Cloud:** A project with the Google Drive API enabled and valid OAuth credentials.

Use this template to enhance your productivity, maintain consistency across your social media channels, and engage your audience with high-quality video content.
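A minimal sketch of step 2 (transcribe, then draft a caption), assuming Node 18+, an OPENAI_API_KEY, and a local copy of the video's audio track. The prompt wording is illustrative; the template keeps its own prompt in the Generate Social Description node:

```typescript
import { readFile } from "node:fs/promises";

const KEY = process.env.OPENAI_API_KEY!;

// Send the audio file to OpenAI's Whisper transcription endpoint.
async function transcribe(path: string): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([await readFile(path)]), "audio.mp3");
  form.append("model", "whisper-1");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${KEY}` },
    body: form,
  });
  return (await res.json()).text;
}

// Turn the transcript into a short social caption.
async function draftCaption(transcript: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "Write a short, engaging social caption with hashtags." },
        { role: "user", content: transcript },
      ],
    }),
  });
  return (await res.json()).choices[0].message.content;
}

transcribe("video-audio.mp3").then(draftCaption).then(console.log);
```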
by Amjid Ali
This n8n workflow automates YouTube video metadata generation using AI. It extracts video transcripts, analyzes content, and produces optimized titles, descriptions, tags, hashtags, and call-to-action elements. Additionally, the workflow integrates affiliate and promotional links to enhance overall video performance.

Key Features
- Automated Metadata Generation: Utilizes an AI agent integrated with OpenAI GPT-4 to generate engaging metadata based on the provided video transcript.
- SEO and Engagement Optimization: Creates keyword-rich, well-structured content that boosts search engine visibility and audience engagement.
- Affiliate and Promotional Integration: Retrieves pre-set promotional and affiliate links using a Google Docs integration.
- Direct YouTube Update: Automatically updates video details on YouTube via the YouTube API.
- Customization: Allows you to modify the AI prompt to tailor metadata for your specific niche.

Workflow Breakdown
1. User Submission: Users supply the YouTube video link, transcript, and optionally, focus keywords.
2. Video ID Extraction: The workflow converts the YouTube URL into a video ID to streamline automation (see the sketch below).
3. Link Retrieval: Affiliate and course links are fetched from a designated Google Docs file.
4. AI-Powered Metadata Generation: The AI agent generates the video title, description, tags, hashtags, and call-to-action elements.
5. Metadata Formatting and Update: The generated metadata is structured and directly updated on YouTube.
6. Confirmation: A success message is displayed upon completion of the update process.

Setup and Configuration
1. Deploying the Workflow: Deploy the workflow in n8n and ensure all integrations are properly set up.
2. Configuring Integrations:
   - **Google Docs:** Configure credentials to retrieve affiliate and promotional links.
   - **OpenAI (GPT-4):** Set up credentials for AI-powered metadata generation.
   - **YouTube API:** Enter your API credentials to enable automatic video updates.
3. User Input Requirements: Provide a valid YouTube video link and its corresponding transcript. Optionally, include focus keywords to further enhance metadata accuracy.

Ideal For
- **YouTube Content Creators:** Automate video descriptions and boost SEO.
- **Digital Marketers:** Enhance content for improved search rankings and audience engagement.
- **Affiliate Marketers:** Simplify the insertion of promotional and affiliate links.
- **AI & Automation Enthusiasts:** Explore the integration of AI into automated workflows.

Additional Resources
For further guidance, refer to the tutorial video on this workflow. More courses and resources are available on the SyncBricks website. For support or inquiries, contact Amjid Ali at info@syncbricks.com. You can also support this work via PayPal donations and subscribe for additional AI and automation workflows.
- **Watch the Tutorial:** YouTube Video on This Workflow
- **More Courses & Resources:** SyncBricks LMS Full Course on ERPNext & AI Automation
- **Connect:** Email: info@syncbricks.com | Website: SyncBricks | YouTube: SyncBricks Channel | LinkedIn: Amjid Ali
- **Support & Subscribe:** Donate via PayPal | Subscribe for More AI & Automation Workflows
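A small sketch of step 2, extracting the video ID from common YouTube URL shapes. The template does this in a Code node; the regexes below are an illustrative equivalent, not the exact expressions used in the workflow:

```typescript
function extractVideoId(url: string): string | null {
  const patterns = [
    /youtube\.com\/watch\?.*?v=([\w-]{11})/, // https://www.youtube.com/watch?v=ID
    /youtu\.be\/([\w-]{11})/,                // https://youtu.be/ID
    /youtube\.com\/shorts\/([\w-]{11})/,     // https://www.youtube.com/shorts/ID
  ];
  for (const p of patterns) {
    const m = url.match(p);
    if (m) return m[1];
  }
  return null;
}

console.log(extractVideoId("https://www.youtube.com/watch?v=dQw4w9WgXcQ")); // "dQw4w9WgXcQ"
```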
by explorium
Explorium Event-Triggered Outreach

This n8n and agent-based workflow automates outbound prospecting by monitoring Explorium event data (e.g., product launches, new office openings, new investments, and more), researching companies, identifying key contacts, and generating tailored sales emails leveraging the Explorium MCP server.

Template Workflow Overview

Node 1: Webhook Trigger
Purpose: Listens for real-time product launch events pushed from Explorium's webhook system.
How it works:
- Explorium sends HTTP POST requests containing event data.
- The webhook payload includes company name, business ID, domain, product name, and event type.
Pay attention: Product launch is just one example; you can easily enroll in many more meaningful events. To learn about events and how to enroll in them, visit the events documentation.

Node 2: Company Research Agent
Agent Type: Tools Agent
Purpose: Enrich company data after an event occurs.
How it works:
- Uses Explorium MCP via the MCP Client tool to gather additional company data.
- Uses Anthropic Claude (Chat Model) to process and interpret company information for downstream personalization.

Node 3: Employee Data Retrieval
Purpose: Retrieve prospect-level data for targeting.
How it works:
- Uses an HTTP Request node to call Explorium's fetch_prospects endpoint.
- Filters prospects by:
  - Company business_id
  - Departments: Product, R&D, etc.
  - Seniority levels: owner, cxo, vp, director, senior, manager, partner, etc.
- Pay attention: Follow the fetch prospects documentation for the full list of filters and best practices.
- Limits results to the top 5 relevant employees.
- Code nodes handle filtering logic, cleaning the API response, and formatting data for downstream agents (see the filtering sketch after this entry).

Node 4: Conditional Branch - Prospect Data Check
If Node: Checks whether prospect data was successfully retrieved.
Logic:
- If prospects found → personalized emails per person.
- If no prospects → fall back to a company-level general email.

Node 5A: Email Writer #1 (No Prospect Data)
Agent Type: Tools Agent
Purpose: Write a generic outbound email using only company-level research and event info.
Powered by: Anthropic Chat Model

Node 5B: Loop Over Prospects → Email Writer #2 (Personalized)
Agent Type: Tools Agent
Purpose: Write a highly personalized email for each identified employee.
How it works:
- Loops through each individual prospect.
- Passes company research + employee data to the LLM agent.
- Generates customized emails referencing:
  - Prospect's title & department
  - Product launch
  - Role-relevant Explorium value proposition

Node 6: Slack Notifications
Purpose: Posts completed emails to an internal Slack channel for review or testing before final deployment.
Future State: Can be swapped with an email sequencing platform in production.
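An illustrative sketch of the Node 3 Code-node logic: filter Explorium prospects by department and seniority, then keep the top 5. The field names (full_name, department, seniority) are assumptions about the response shape; check the fetch_prospects docs for the exact schema:

```typescript
interface Prospect {
  full_name: string;   // assumed field names, not the confirmed schema
  department: string;
  seniority: string;
}

const TARGET_DEPARTMENTS = new Set(["Product", "R&D"]);
const TARGET_SENIORITY = new Set([
  "owner", "cxo", "vp", "director", "senior", "manager", "partner",
]);

// Keep only prospects in target departments and seniority bands, top 5.
function selectProspects(raw: Prospect[], limit = 5): Prospect[] {
  return raw
    .filter((p) => TARGET_DEPARTMENTS.has(p.department))
    .filter((p) => TARGET_SENIORITY.has(p.seniority.toLowerCase()))
    .slice(0, limit);
}
```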
Setup Requirements

Explorium API Access
- MCP Client credentials for company enrichment and prospect fetching
- Registered webhook for event listening
- Get an Explorium API key

n8n Configuration
- Secure environment variables for API keys & webhook secret
- Code nodes configured for JSON transformation, filtering & signature validation (see the HMAC sketch below)

Customization Options

Personalization Logic
- Update LLM prompt instructions to reflect ICP priorities
- Modify email templates based on role, department, or tenure logic
- Adjust fallback behavior when prospect data is unavailable

API Request Tuning
- Adjust page_size for the number of prospects retrieved
- Fine-tune seniority and department filters to match evolving targeting

Future Expansion
- Swap Slack notifications for outbound email automation
- Integrate call task assignment directly into the CRM
- Introduce an engagement scoring feedback loop (opens, clicks, replies)

Troubleshooting Tips
- Validate webhook signature matching to prevent unauthorized requests
- Ensure the correct business_id is passed to the prospect fetching endpoint
- Confirm business enrichment returns sufficient data for the company researcher agent
- Review agent LLM responses for correct output structure and parsing consistency
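A generic sketch of webhook signature validation, assuming an HMAC-SHA256 scheme with a shared secret and a hex-encoded signature header. Explorium's actual header name and algorithm may differ; treat this as the pattern, not the exact contract:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

function isValidSignature(rawBody: string, signatureHeader: string, secret: string): boolean {
  // Recompute the signature over the raw request body.
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual prevents timing attacks but requires equal lengths.
  return a.length === b.length && timingSafeEqual(a, b);
}
```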
by Joseph LePage
Who is this for?
This workflow template is designed for AI enthusiasts, developers, and privacy-conscious users who want to leverage the power of local large language models (LLMs) without sending data to external services. It's particularly valuable for those running Ollama locally who want intelligent routing between different specialized models.

What problem is this workflow solving?
When working with multiple local LLMs, each with different strengths and capabilities, it can be challenging to manually select the right model for each specific task. This workflow automatically analyzes user prompts and routes them to the most appropriate specialized Ollama model, ensuring optimal performance without requiring technical knowledge from the end user.

What this workflow does
This intelligent router (sketched after this entry):
- Analyzes incoming user prompts to determine the nature of the request
- Automatically selects the optimal Ollama model from your local collection based on task requirements
- Routes requests between specialized models for different tasks:
  - Text-only models (qwq, llama3.2, phi4) for various reasoning and conversation tasks
  - Code-specific models (qwen2.5-coder) for programming assistance
  - Vision-capable models (granite3.2-vision, llama3.2-vision) for image analysis
- Maintains conversation memory for consistent interactions
- Processes everything locally for complete privacy and data security

Setup
1. Ensure you have Ollama installed and running locally
2. Pull the required models mentioned in the workflow using the Ollama CLI (e.g., ollama pull phi4)
3. Configure the Ollama API credentials in n8n (default: http://127.0.0.1:11434)
4. Activate the workflow and start interacting through the chat interface

How to customize this workflow to your needs
- Add or remove models from the router's decision framework based on your specific Ollama collection
- Adjust the system prompts in the LLM Router to prioritize different model selection criteria
- Modify the decision tree logic to better suit your specific use cases
- Add additional preprocessing steps for specialized inputs

This workflow demonstrates how n8n can be used to create sophisticated AI orchestration systems that respect user privacy by keeping everything local while still providing intelligent model selection capabilities.
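A hedged sketch of the routing idea outside n8n: ask a small local model to classify the prompt, then send it to the chosen specialist. It uses Ollama's /api/generate endpoint with stream:false; the classification prompt and model names mirror the template but are simplified:

```typescript
const OLLAMA = "http://127.0.0.1:11434";

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  return (await res.json()).response;
}

async function routeAndAnswer(userPrompt: string): Promise<string> {
  // Step 1: a cheap classification pass with a small model.
  const verdict = await generate(
    "phi4",
    `Classify this request as exactly one word, "code", "vision", or "text": ${userPrompt}`,
  );
  // Step 2: dispatch to the specialist model.
  const model = verdict.includes("code")
    ? "qwen2.5-coder"
    : verdict.includes("vision")
      ? "llama3.2-vision"
      : "llama3.2";
  return generate(model, userPrompt);
}

routeAndAnswer("Write a binary search in Rust").then(console.log);
```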
by Francis Njenga
Workflow Documentation: Auto-Retry Engine – Error Recovery Workflow

Detailed Description
The Auto-Retry Engine: Error Recovery Workflow is designed to automate the process of identifying and retrying failed executions in n8n workflows. By leveraging scheduled triggers, API integrations, and conditional logic, this workflow ensures that any failed executions are automatically retried on an hourly basis. This reduces manual intervention, improves system reliability, and ensures smoother workflow operations.

Who is this for?
This workflow is ideal for:
- **Automation Engineers:** Managing and maintaining workflows with minimal manual intervention.
- **DevOps Teams:** Ensuring high availability and reliability of automated processes.
- **IT Administrators:** Reducing downtime and improving system performance by automating error recovery.

What problem does this workflow solve?
- **Manual Error Handling:** Eliminates the need for manual monitoring and retrying of failed executions.
- **Improved Reliability:** Automatically retries failed executions, reducing downtime and improving workflow success rates.
- **Time Efficiency:** Saves time by automating repetitive error recovery tasks, allowing teams to focus on higher-priority work.

What this workflow does
This workflow automates the following steps:
1. Scheduled Monitoring: Checks for failed executions hourly using a schedule trigger.
2. Error Filtering: Identifies executions that have failed and filters out those that have already been successfully retried.
3. Authentication: Logs into the n8n instance using API credentials to retrieve session details.
4. Automatic Retry: Retries the failed executions using the n8n API (see the sketch after this entry).
5. Batch Processing: Processes multiple failed executions in batches to avoid overloading the system.

Setup

Prerequisites
To use this workflow, you'll need:
- **n8n Account:** To create and run the workflow.
- **n8n API Credentials:** For logging into the n8n instance and retrying executions.
- **HTTP Request Node:** Configured to interact with the n8n API.
- **Schedule Trigger:** Set to run the workflow hourly.

Setup Process
1. Configure Schedule Trigger: Set the trigger to run hourly to check for failed executions.
2. Set Login Credentials: Add your n8n instance URL, username, and password in the Set node.
3. Integrate n8n API: Use the HTTP Request node to log into the n8n instance and retrieve session details.
4. Retry Failed Executions: Configure the HTTP Request node to retry failed executions using the session details.
5. Batch Processing: Use the Split in Batches node to process multiple failed executions in batches.

How to customize this workflow
Tailor the workflow to fit your specific needs:
- **Adjust Schedule Frequency:** Modify the schedule trigger to run at different intervals (e.g., every 30 minutes).
- **Add Notifications:** Integrate email or Slack notifications to alert teams about failed retries.
- **Refine Error Filtering:** Customize the filtering logic to exclude specific types of failed executions.
- **Scale Batch Size:** Adjust the batch size in the Split in Batches node to optimize performance.

Conclusion
The Auto-Retry Engine: Error Recovery Workflow is a powerful tool for automating error recovery in n8n workflows. By reducing manual intervention and ensuring failed executions are retried automatically, this workflow enhances system reliability and operational efficiency. Whether you're managing a few workflows or a complex automation ecosystem, this workflow ensures your processes run smoothly and consistently.
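A sketch of the monitoring step using n8n's public REST API, assuming an API key in N8N_API_KEY and an instance URL in N8N_URL. Listing failed executions via GET /api/v1/executions?status=error is part of the public API; the retry call in this template uses a session-authenticated internal endpoint instead, so the retryExecution path below is an assumption about that internal route; verify it against your n8n version:

```typescript
const N8N_URL = process.env.N8N_URL!; // e.g. "https://n8n.example.com"
const API_KEY = process.env.N8N_API_KEY!;

// Public API: list executions that ended in an error state.
async function listFailedExecutions() {
  const res = await fetch(`${N8N_URL}/api/v1/executions?status=error&limit=50`, {
    headers: { "X-N8N-API-KEY": API_KEY },
  });
  const { data } = await res.json();
  return data as Array<{ id: string; workflowId: string; retryOf?: string }>;
}

// Hypothetical internal endpoint mirroring what the n8n editor calls,
// authenticated with the session cookie obtained at login.
async function retryExecution(id: string, sessionCookie: string) {
  await fetch(`${N8N_URL}/rest/executions/${id}/retry`, {
    method: "POST",
    headers: { Cookie: sessionCookie },
  });
}

// Usage: skip executions that were themselves retries (retryOf set), then retry the rest.
```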
by Niklas Hatje
Who this is for
This template is for everyone who wants to download their n8n Cloud invoices automatically as PDFs instead of downloading them manually.

How it works
This workflow checks your Gmail inbox for new n8n invoice emails from n8n's payment provider, Paddle. Once it finds one, it converts the URL into a PDF using pdflayer and saves it in Google Drive.

Setup
1. Set up your Gmail and Google Drive credentials
2. Create a free account at https://pdflayer.com/
3. Insert your pdflayer API key into the Setup node
4. Insert the URL of the desired Drive folder into the Setup node (make sure to remove everything after the ?)

How to adjust it to your needs
Instead of saving the PDF in Google Drive, you could also save it on your local system or with any other storage provider, or send the PDF automatically to the right person in your company.
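A minimal sketch of the pdflayer conversion step, assuming an access key in a PDFLAYER_KEY environment variable. Parameter names follow pdflayer's public docs (the free tier serves the convert endpoint over plain HTTP; HTTPS requires a paid plan):

```typescript
import { writeFile } from "node:fs/promises";

async function urlToPdf(documentUrl: string, outPath: string) {
  const params = new URLSearchParams({
    access_key: process.env.PDFLAYER_KEY!,
    document_url: documentUrl,
  });
  const res = await fetch(`http://api.pdflayer.com/api/convert?${params}`);
  if (!res.ok) throw new Error(`pdflayer returned ${res.status}`);
  // The response body is the rendered PDF binary.
  await writeFile(outPath, Buffer.from(await res.arrayBuffer()));
}

// Hypothetical invoice URL for illustration.
urlToPdf("https://example.com/invoice/123", "invoice-123.pdf");
```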
by Onur
Description
This workflow empowers you to effortlessly get answers to your n8n platform questions through an AI-powered assistant. Simply send your query, and the assistant will search documentation, forum posts, and example workflows to provide comprehensive, accurate responses tailored to your specific needs.

> Note: This workflow uses community nodes (n8n-nodes-mcp.mcpClientTool) and will only work on self-hosted n8n instances. You'll need to install the required community nodes before importing this workflow.

What does this workflow do?
This workflow streamlines the information retrieval process by automatically researching n8n platform documentation, community forums, and example workflows, providing you with relevant answers to your questions.

Who is this for?
- **New n8n Users:** Quickly get answers to basic platform questions and learn how to use n8n effectively
- **Experienced Developers:** Find solutions to specific technical issues or discover advanced workflows
- **Teams:** Boost productivity by automating the research process for n8n platform questions
- **Anyone** looking to leverage AI for efficient and accurate n8n platform knowledge retrieval

Benefits
- **Effortless Research:** Automate the research process across n8n documentation, forum posts, and example workflows
- **AI-Powered Intelligence:** Leverage the power of LLMs to understand context and generate helpful responses
- **Increased Efficiency:** Save time and resources by automating the research process
- **Quick Solutions:** Get immediate answers to your n8n platform questions
- **Enhanced Learning:** Discover new workflows, features, and best practices to improve your n8n experience

How It Works
1. Receive Request: The workflow starts when a chat message is received containing your n8n-related question
2. AI Processing: The AI agent powered by OpenAI GPT-4o analyzes your question
3. Research and Information Gathering: The system searches across multiple sources:
   - Official n8n documentation for general knowledge and how-to guides
   - Community forums for bug reports and specific issues
   - The example workflow repository for relevant implementations
4. Response Generation: The AI agent compiles the research and generates a clear, comprehensive answer
5. Output: The workflow provides you with the relevant information and step-by-step guidance when applicable

n8n Nodes Used
- When chat message received (Chat Trigger)
- OpenAI Chat Model (GPT-4o mini)
- N8N AI Agent
- n8n-assistant tools (MCP Client Tool - Community Node)
- n8n-assistant execute (MCP Client Tool - Community Node)

Prerequisites
- Self-hosted n8n instance
- OpenAI API credentials
- MCP client community node installed
- MCP server configured to search n8n resources

Setup
1. Import the workflow JSON into your n8n instance
2. Configure the OpenAI credentials
3. Configure your MCP client API credentials
4. In the n8n-assistant execute node, ensure the parameter is set to "specific" (corrected from "spesific")
5. Test the workflow by sending a message with an n8n-related question

MCP Server Connection
To connect to the MCP server that powers this assistant's research capabilities, use the following URL: https://smithery.ai/server/@onurpolat05/n8n-assistant

This MCP server is specifically designed to search across three types of n8n resources:
- Official documentation for general platform information and workflow creation guidance
- Community forums for bug-related issues and troubleshooting
- Example workflow repositories for reference implementations

Configure this URL in your MCP client credentials to enable the assistant to retrieve relevant information based on user queries.

This workflow combines the convenience of chat with the power of AI to provide a seamless n8n platform research experience. Start getting instant answers to your n8n questions today!
by Dale Dunlop
WebSecScan: AI-Powered Website Security Auditor

This n8n workflow provides comprehensive website security analysis by leveraging OpenAI's models to detect vulnerabilities, configuration issues, and security misconfigurations. The workflow generates a professional HTML security report delivered directly via Gmail.

Key Features
- **Dual-Layer Security Analysis:** Performs parallel security audits using specialized OpenAI agents:
  - Header Configuration Audit: Analyzes HTTP headers, CORS policies, CSP implementation, and cookie security
  - Vulnerability Assessment: Identifies XSS vectors, information disclosure, and client-side weaknesses
- **Detailed Security Grading:** Automatically calculates a security grade (A+ to F) based on findings severity and quantity
- **Professional Report Generation:** Creates a comprehensive HTML report with:
  - Security grade visualization
  - Color-coded vulnerability categories
  - Detailed recommendations with example configuration fixes
  - Header presence/absence indicators
  - Implementation guidance for remediation
- **Non-Invasive Testing:** Performs analysis without active scanning or exploitation attempts

Technical Implementation
- **Multi-Agent Architecture:** Utilizes two specialized OpenAI agents with custom prompts tailored for security analysis
- **Advanced Header Analysis:** Detects the presence and proper implementation of critical security headers (see the sketch after this entry):
  - Content-Security-Policy
  - Strict-Transport-Security
  - X-Content-Type-Options
  - X-Frame-Options
  - Referrer-Policy
  - Permissions-Policy
- **Intelligent Issue Detection:** Uses JavaScript processing to analyze OpenAI outputs and count critical/warning issues
- **Responsive HTML Report:** Dynamically generates a mobile-friendly report with detailed findings and recommendations

Setup Requirements

1. OpenAI API Configuration
- Create an OpenAI API key at platform.openai.com
- In n8n, go to Settings → Credentials → New → OpenAI API
- Enter your API key and save

2. Gmail Integration
- Navigate to Settings → Credentials → New → Gmail OAuth2 API
- Complete the OAuth authentication flow
- Configure the recipient email in the "Send Security Report" node

3. Workflow Customization (Optional)
- Modify the form title/description in the Landing Page node
- Upgrade from gpt-4o-mini to gpt-4o for more comprehensive analysis
- Add additional recipients to the email report

Usage Instructions
1. Activate the workflow and access the form via the generated URL
2. Enter any website URL to analyze (including the http:// or https:// prefix)
3. Receive a detailed security report via email within minutes
4. Share findings with your development team to implement fixes

This workflow represents a non-invasive security assessment tool. For production environments, complement it with professional penetration testing services.
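A small sketch of the header-audit idea: fetch a URL and report which of the critical security headers are missing. This mirrors what the workflow's agents reason about, reduced to a single plain fetch, with no active scanning:

```typescript
const CRITICAL_HEADERS = [
  "content-security-policy",
  "strict-transport-security",
  "x-content-type-options",
  "x-frame-options",
  "referrer-policy",
  "permissions-policy",
];

async function auditHeaders(url: string) {
  const res = await fetch(url, { redirect: "follow" });
  // Header lookup is case-insensitive, so lowercase names work everywhere.
  const report = CRITICAL_HEADERS.map((name) => ({
    header: name,
    present: res.headers.has(name),
    value: res.headers.get(name) ?? undefined,
  }));
  const present = report.filter((h) => h.present).length;
  console.log(`${url}: ${present}/${CRITICAL_HEADERS.length} critical headers present`);
  console.table(report);
}

auditHeaders("https://example.com");
```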
by Don Jayamaha Jr
Analyze exchange data, market indexes, and community sentiment from CoinMarketCap—powered by AI.

This sub-agent provides access to exchange listings, token holdings, metadata, and high-level metrics like the CMC 100 Index and the Fear & Greed Index. It's designed for use within your larger CoinMarketCap AI Analyst system or as a standalone workflow. This agent can be triggered by a supervisor or used manually with message and sessionId inputs.

Supported Tools (5 Total)
- 🔍 Exchange Map: Get CoinMarketCap IDs, names, and slugs for exchanges (used as a lookup before deeper queries).
- 🧾 Exchange Info: Metadata including launch date, social links, country, and operational status.
- 💰 Exchange Assets: Token balances, wallet addresses, and total USD value held by a specific exchange.
- 📈 CoinMarketCap 100 Index: Constituents and weights of the CMC 100 Index, updated live.
- 😱 Fear & Greed Index: Market sentiment score updated daily, ranging from Extreme Fear to Extreme Greed.

What You Can Do with This Agent
🔹 Map exchanges to retrieve their ID and slug
🔹 Analyze exchange holdings by token and blockchain
🔹 Pull metadata for major CEXs like Binance or Coinbase
🔹 Compare global sentiment using the Fear & Greed Index
🔹 Access index data to understand CMC's top 100 crypto asset breakdown

Example Queries You Can Use
✅ "What is the latest Fear and Greed Index reading?"
✅ "Get a list of all exchanges on CoinMarketCap."
✅ "What tokens are held by Binance?"
✅ "Retrieve metadata for Coinbase."
✅ "Show me the top assets in the CMC 100 Index."

Agent Architecture
- **AI Brain:** GPT-4o-mini
- **Memory:** Window buffer memory using sessionId
- **Tools:** 5 API-connected nodes
- **Trigger:** External input via message and sessionId

Setup Instructions
1. Get a CoinMarketCap API Key: Apply here: https://coinmarketcap.com/api/
2. Configure n8n Credentials: Use HTTP Header Auth to store your CoinMarketCap API key (see the request sketch after this entry).
3. Optional: Trigger from a Supervisor: Connect to a parent agent using Execute Workflow with message and sessionId inputs.
4. Test Sample Prompts: "Get all exchanges", "Fetch CMC index", "Show Binance token holdings"

Sticky Notes Included
- Exchange & Community Guide – Explains the agent's purpose and component connections
- Usage & Examples – Walkthrough of sample use cases
- Error Handling & Licensing – Includes an API error code reference and licensing details

✅ Final Notes
This agent is part of a broader CoinMarketCap AI Analyst System. Visit my Creator profile to download all available sub-agents and supervisor flows.

Understand exchange behavior and community sentiment—automated with AI and CoinMarketCap.
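A hedged sketch of two of this agent's tool calls against the CoinMarketCap Pro API, assuming a key in a CMC_API_KEY environment variable. Authentication uses the X-CMC_PRO_API_KEY header, matching the HTTP Header Auth credential in n8n; the endpoint paths follow the public docs, but confirm your plan covers them:

```typescript
const CMC = "https://pro-api.coinmarketcap.com";
const HEADERS = { "X-CMC_PRO_API_KEY": process.env.CMC_API_KEY! };

// Exchange Map: look up IDs and slugs before deeper queries.
async function exchangeMap() {
  const res = await fetch(`${CMC}/v1/exchange/map`, { headers: HEADERS });
  return (await res.json()).data as Array<{ id: number; name: string; slug: string }>;
}

// Fear & Greed Index: the latest market sentiment reading.
async function fearAndGreed() {
  const res = await fetch(`${CMC}/v3/fear-and-greed/latest`, { headers: HEADERS });
  return (await res.json()).data; // e.g. { value: 54, value_classification: "Neutral", ... }
}

exchangeMap().then((list) => console.log(list.slice(0, 3)));
fearAndGreed().then(console.log);
```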
by Don Jayamaha Jr
Meet your AI-powered crypto data analyst—fully integrated with CoinMarketCap APIs.

This workflow acts as the supervisor agent for a multi-agent architecture built in n8n, connecting three powerful sub-agents to extract real-time insights from centralized and decentralized markets. It's the ultimate tool for crypto traders, analysts, developers, and researchers who need strategic multi-source intelligence—all through Telegram.

This workflow requires 3 sub-agent templates to function correctly. See below.

🔌 Required Sub-Workflows (Install First)
1. CoinMarketCap Crypto Agent Tool → Token prices, metadata, conversions, listings
2. CoinMarketCap Exchange & Community Agent Tool → Exchange info, token holdings, Fear & Greed index
3. CoinMarketCap DEXScan Agent Tool → DEX trading pairs, liquidity, OHLCV data

Download all from my Creator Profile: https://n8n.io/creators/don-the-gem-dealer/

What Makes This Workflow Special?
This is not just another API wrapper—it's an intelligent routing agent powered by GPT-4o-mini, capable of:
- Understanding complex user queries
- Choosing the appropriate tool workflow
- Structuring the API request
- Executing sub-workflows
- Formatting the output
- Returning insights via Telegram

It connects three domains of market data:
- **Cryptocurrencies (CEX)**
- **Exchanges & Sentiment**
- **DEX trading data**

🔍 What You Can Do
💰 Token Intelligence
- Get token metadata, price, volume, supply
- Compare rankings and conversions
🏦 Exchange Insights
- View assets held by exchanges
- Track the CMC 100 Index and Fear & Greed Score
🌐 DEX Market Analysis
- Analyze pair quotes, historical OHLCV, live trades
- Discover the top DEXs by volume across blockchains

✅ Example Questions to Ask
- "What's the market cap of Ethereum today?"
- "Show liquidity and volume for SOL/USDT on Solana"
- "Get token holdings for Binance"
- "Compare BTC price on Uniswap vs Binance"
- "What's the Fear & Greed index right now?"

🛠️ Setup Instructions
1. Create a Telegram Bot: Use @BotFather to get your bot token.
2. Get a CoinMarketCap API Key: Apply here: https://coinmarketcap.com/api/
3. Install the Sub-Agent Templates (required): Crypto Agent Tool, Exchange & Community Tool, DEXScan Tool
4. Configure Credentials in n8n: Add both Telegram and CoinMarketCap keys as HTTP Header Auth.
5. Deploy & Test: Ask your Telegram bot, "Top 10 tokens by 24h volume" or "Convert 5 ETH to USD" (see the conversion sketch after this entry).

Workflow Architecture
- **AI Brain:** GPT-4o-mini
- **Memory:** Windowed buffer memory via sessionId
- **Tool Agents:** toolWorkflow() → routes requests to the appropriate sub-agent
- Executes real-time API queries and returns structured output

Included Sticky Notes
- **System Overview**
- **Error Handling Guide (200, 400, 401, 429, 500)**
- **Step-by-Step Usage Instructions**
- **Prompt Examples + API Docs**
- **Legal & Licensing Notes**

Your crypto insights—smarter, faster, and all in one Telegram message.
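A hedged sketch of the "Convert 5 ETH to USD" request the Crypto Agent Tool ultimately makes, assuming a key in CMC_API_KEY. The price-conversion endpoint path follows CoinMarketCap's v2 docs; check that your plan includes it:

```typescript
async function convert(amount: number, symbol: string, convertTo: string) {
  const params = new URLSearchParams({
    amount: String(amount),
    symbol,
    convert: convertTo,
  });
  const res = await fetch(
    `https://pro-api.coinmarketcap.com/v2/tools/price-conversion?${params}`,
    { headers: { "X-CMC_PRO_API_KEY": process.env.CMC_API_KEY! } },
  );
  const { data } = await res.json();
  // v2 returns an array of matches per symbol; take the first.
  const quote = data[0].quote[convertTo];
  console.log(`${amount} ${symbol} ≈ ${quote.price.toFixed(2)} ${convertTo}`);
}

convert(5, "ETH", "USD");
```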