by Yaron Been
This workflow provides automated access to the Creativeathive Lemaar Door Mockedup AI model through the Replicate API. It saves you time by eliminating the need to interact with the model manually and integrates generation tasks seamlessly into your n8n automation workflows.

**Overview**

This workflow handles the complete generation process using the Creativeathive Lemaar Door Mockedup model: API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation. (A minimal sketch of the underlying API call appears at the end of this description.)

Model Description: Advanced AI model for automated processing and generation tasks.

**Key Capabilities**

- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Creativeathive/lemaar-door-mockedup AI model
- **Creativeathive Lemaar Door Mockedup**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Other Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

**Use Cases**

- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
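Under the hood, running a Replicate-hosted model follows a create-then-poll pattern, sketched below for Node.js 18+. The predictions endpoint and polling flow follow Replicate's public API; the prompt input field is an assumption for illustration, so check the model's page on Replicate for the parameters it actually accepts.

```javascript
// Minimal sketch of the Replicate call this workflow wraps (Node.js 18+).
// The input below is a placeholder: consult the model page on Replicate
// for the parameters this specific model actually accepts.
const REPLICATE_API_TOKEN = process.env.REPLICATE_API_TOKEN;

async function runPrediction() {
  // Create a prediction against the model's predictions endpoint.
  const create = await fetch(
    'https://api.replicate.com/v1/models/creativeathive/lemaar-door-mockedup/predictions',
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${REPLICATE_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ input: { prompt: 'a modern front door, studio lighting' } }),
    }
  );
  let prediction = await create.json();

  // Poll the prediction until it reaches a terminal state, as the
  // workflow's retry/status nodes do.
  while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
    await new Promise((r) => setTimeout(r, 2000));
    const poll = await fetch(prediction.urls.get, {
      headers: { Authorization: `Bearer ${REPLICATE_API_TOKEN}` },
    });
    prediction = await poll.json();
  }
  return prediction.output;
}

runPrediction().then(console.log).catch(console.error);
```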
by Yaron Been
**Description**

This workflow automatically discovers and collects information about upcoming events in your area or industry. It saves you time by eliminating the need to manually check multiple event websites and provides a centralized database of relevant events.

**Overview**

This workflow automatically scrapes websites for upcoming events in your area or industry and compiles them into a structured format. It uses Bright Data to access event listing websites and extract event details like dates, locations, and descriptions.

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow.
- **Bright Data**: For scraping event websites without being blocked.
- **Calendar/Database**: For storing and organizing event information.

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data credentials to the Bright Data node.
3. Set Up Data Storage: Configure where you want to store the event data.
4. Customize: Specify locations, event types, and date ranges to monitor.

**Use Cases**

- **Event Planners**: Stay updated on competing or complementary events.
- **Community Managers**: Discover local events to share with your community.
- **Marketing Teams**: Find industry events for networking opportunities.

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #events #eventdiscovery #brightdata #webscraping #eventfinder #localevents #eventcalendar #eventplanning #n8nworkflow #workflow #nocode #eventautomation #eventscraping #eventtracking #upcomingEvents #eventmarketing #eventmanagement #eventdatabase #communityevents #eventnotifications #eventorganizer #eventtech #eventindustry #eventcollection
by Davide
**How it Works**

This workflow automates the handling of job applications by extracting relevant information from submitted CVs, analyzing each candidate's qualifications against a predefined profile, and storing the results in a Google Sheet. Here's how it operates:

Data Collection and Extraction:

- The workflow begins with a form submission (On form submission node), which triggers the extraction of data from the uploaded CV file using the Extract from File node.
- Two Information Extractor nodes (Qualifications and Personal Data) parse specific details such as educational background, work history, skills, city, birthdate, and telephone number from the text content of the CV.

Processing and Evaluation:

- A Merge node combines the extracted personal and qualification data into a single output (a code sketch of this step appears at the end of this description).
- The merged data is passed through a Summarization Chain that generates a concise summary of the candidate's profile.
- An HR Expert chain evaluates the candidate against a desired profile (Profile Wanted), assigning a score and providing considerations for hiring.
- Finally, all collected and processed data, including the evaluation results, are appended to a Google Sheets document via the Google Sheets node for further review or reporting.

**Set Up Steps**

To replicate this workflow in your own n8n environment, follow these steps:

Configuration:

1. Set up an n8n instance if you haven't already; you can sign up directly on the n8n website or self-host the application.
2. Import the provided JSON configuration into your n8n workspace.
3. Ensure that all necessary credentials (e.g., Google Drive, Google Sheets, OpenAI API keys) are correctly configured under the Credentials section, since some nodes require external service integrations such as the Google APIs and OpenAI for language processing tasks.

Customization:

1. Adjust the parameters of each node according to your specific requirements. For example, modify the fields in the formTrigger node to match the information you wish to collect from applicants.
2. Customize the prompts given to the AI models in nodes like Qualifications, Summarization Chain, and HR Expert so they align with the analyses you want performed on the candidates' profiles.
3. Update the destination settings in the Google Sheets node to point to your own spreadsheet where you want the final outputs recorded.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
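For reference, the merge step could be expressed as an n8n Code node like the sketch below. The node names and field names are assumptions based on the description above, so adjust them to match the actual outputs of your Information Extractor nodes.

```javascript
// Hedged sketch of an n8n Code node that flattens the merged extraction
// results into a single row for the Google Sheets node. All node and
// field names here are assumptions drawn from the description above.
const personal = $('Personal Data').first().json;
const qualifications = $('Qualifications').first().json;

return [
  {
    json: {
      name: personal.name,
      city: personal.city,
      birthdate: personal.birthdate,
      telephone: personal.telephone,
      education: qualifications.educational_background,
      workHistory: qualifications.work_history,
      skills: qualifications.skills,
      submittedAt: new Date().toISOString(), // timestamp for the sheet row
    },
  },
];
```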
by Yaron Been
This workflow provides automated access to the Vcollos Trefilio AI model through the Replicate API. It saves you time by eliminating the need to interact with the model manually and integrates generation tasks seamlessly into your n8n automation workflows.

**Overview**

This workflow handles the complete generation process using the Vcollos Trefilio model: API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

Model Description: Advanced AI model for automated processing and generation tasks.

**Key Capabilities**

- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Vcollos/trefilio AI model
- **Vcollos Trefilio**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Other Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

**Use Cases**

- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
by Yaron Been
This workflow provides automated access to the Moicarmonas 2Ndmoises_Generator AI model through the Replicate API. It saves you time by eliminating the need to interact with the model manually and integrates generation tasks seamlessly into your n8n automation workflows.

**Overview**

This workflow handles the complete generation process using the Moicarmonas 2Ndmoises_Generator model: API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

Model Description: Advanced AI model for automated processing and generation tasks.

**Key Capabilities**

- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Moicarmonas/2ndmoises_generator AI model
- **Moicarmonas 2Ndmoises_Generator**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Other Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

**Use Cases**

- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
by Abdullahi Ahmed
**Title**

RAG AI Agent for Documents in Google Drive → Pinecone → OpenAI Chat (n8n workflow)

**Short Description**

This n8n workflow implements a Retrieval-Augmented Generation (RAG) pipeline plus an AI agent, allowing users to drop documents into a Google Drive folder and then ask questions about them via a chatbot. New files are indexed automatically into a Pinecone vector store using OpenAI embeddings; the AI agent retrieves the relevant chunks at query time and answers using context plus memory.

**Why this workflow matters / what problem it solves**

- Large language models (LLMs) are powerful, but they lack up-to-date, domain-specific knowledge. RAG augments the LLM with relevant external documents, reducing hallucination and enabling precise answers. (Pinecone)
- This workflow automates the ingestion, embedding, storage, retrieval, and chat logic with minimal manual work.
- It's modular: you can swap data sources, vector DBs, or LLMs (with some adjustments).
- It leverages the built-in AI Agent node in n8n to tie all the parts together. (n8n)

**How to get the required credentials**

| Service | Purpose in Workflow | Setup Link | What you need / steps |
| --- | --- | --- | --- |
| Google Drive (OAuth2) | Trigger new file events & download the file | https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/ | Create a Google Cloud OAuth app, grant it Drive scopes, get the client ID & secret, configure the redirect URI, and paste them into n8n credentials. |
| Pinecone | Vector database for embeddings | https://docs.n8n.io/integrations/builtin/credentials/pinecone/ | Sign up at Pinecone, create an index in the dashboard, get the API key + environment, and paste them into an n8n credential. |
| OpenAI | Embeddings + chat model | https://docs.n8n.io/integrations/builtin/credentials/openai/ | Log in to OpenAI, generate a secret API key, and paste it into n8n credentials. |

You'll configure these under n8n → Credentials → New Credential, matching the credential names referenced in the workflow nodes.

**Detailed Walkthrough: How the Workflow Works**

Step by step, here is what happens inside the workflow (a code sketch of the splitting step follows at the end of this entry):

1. Google Drive Trigger: Watches a specified folder in Google Drive. Whenever a new file appears (fileCreated event), the workflow is triggered (polling every minute). You must set the folder ID (in "folderToWatch") to the Drive folder you want to monitor.
2. Download File: Takes the file ID from the trigger and downloads the file content (binary).
3. Indexing Path (Embeddings + Storage; runs only when new files arrive): The file is sent to the Default Data Loader node (via the Recursive Character Text Splitter) to break it into overlapping chunks so context is preserved. Each chunk is fed into Embeddings OpenAI to convert text into embedding vectors. Then the Pinecone Vector Store node (insert mode) ingests the vectors plus text metadata into your Pinecone index. This keeps your vector store up to date with the files you drop into Drive.
4. Chat / Query Path (triggered by user chat via webhook): When a chat message arrives via When Chat Message Received, it is passed to the AI Agent node. Before generation, the AI Agent calls Pinecone Vector Store1, set to "retrieve-as-tool" mode, which runs a vector-based retrieval using the user query embedding; the relevant text chunks are pulled in as tool context. The OpenAI Chat Model node is linked as the agent's language model, and a Simple Memory node provides conversational memory (keeping history across messages). The agent combines retrieved context, memory, and user input and instructs the model to produce a response.
5. Connections / Flow Logic: The Embeddings OpenAI node's output is wired into Pinecone Vector Store (insert) and also into Pinecone Vector Store1, so the same embeddings can be used for retrieval. The AI Agent has tool access to Pinecone retrieval and memory. The Download File node triggers the insert path; the chat message triggers the agent path.

**Similar Workflows / Inspirations & Comparisons**

To see how this workflow fits into what's already out there, here are a few analogues:

- **n8n Blog: "Build a custom knowledge RAG chatbot"** — a workflow that ingests documents from external sources, indexes them in Pinecone, and responds to queries via n8n + an LLM. (n8n Blog)
- **Index Documents from Google Drive to Pinecone** — nearly identical for the ingestion part: trigger on Drive, split, embed, upload. (n8n)
- **Build & Query RAG System with Google Drive, OpenAI, Pinecone** — shows the full RAG + chat logic with the same pattern. (n8n)
- **Chat with GitHub API Documentation (RAG)** — demonstrates converting an API spec into chunks, embedding, retrieving, and chatting. (n8n)
- **Community tutorials & forums** discuss using the AI Agent node with tools like Pinecone, and how the RAG part is often built as a sub-workflow feeding an agent. (n8n Community)

What sets this workflow apart is the explicit combination: Google Drive → automatic ingestion → chat agent with tool integration + memory. Many templates show either ingestion or chat, but fewer combine them cleanly with n8n's AI Agent.

**Summary**

> RAG AI Agent for Google Drive Documents (n8n workflow)
>
> This workflow turns a Google Drive folder into a live, queryable knowledge base. Drop PDF, docx, or text files into the folder → new documents are automatically indexed into a Pinecone vector store using OpenAI embeddings → you ask questions via a webhook chat interface, and the AI agent retrieves the relevant text, combines it with memory, and answers in context.
>
> Credentials needed
>
> * Google Drive OAuth2 (see: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/)
> * Pinecone (see: https://docs.n8n.io/integrations/builtin/credentials/pinecone/)
> * OpenAI (see: https://docs.n8n.io/integrations/builtin/credentials/openai/)
>
> How it works
>
> 1. Drive trigger picks up new files
> 2. Download, split, embed, insert into Pinecone
> 3. Chat webhook triggers the AI Agent
> 4. Agent retrieves relevant chunks + memory
> 5. Agent uses the OpenAI model to craft the answer
>
> This is built on the core RAG pattern (ingest → retrieve → generate) and enhanced by n8n's AI Agent node for clean tool integration.
>
> Inspiration & context: this approach follows best practices from existing n8n RAG tutorials and templates, such as the "Index Documents from Google Drive to Pinecone" ingestion workflow and the "Build & Query RAG System" templates. (n8n)
>
> You're free to swap out the data source (e.g. Dropbox, S3) or the vector DB (e.g. Qdrant) as long as you adjust the relevant nodes.
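To make the indexing path concrete, here is a simplified character-window version of the split-with-overlap step that the Recursive Character Text Splitter performs. The chunk size and overlap values are common defaults, not values taken from this workflow, so tune them to your documents.

```javascript
// Simplified sketch of splitting a document into overlapping chunks, as
// the Recursive Character Text Splitter does before embedding. The real
// splitter also tries to break on separators (paragraphs, sentences);
// this version slides a fixed character window for clarity.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize');
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// Each chunk is then embedded and upserted into Pinecone together with
// source metadata, e.g. { text: chunk, fileId, fileName }.
console.log(chunkText('a long support document about VPN setup ...', 20, 5));
```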
[1]: https://www.pinecone.io/learn/retrieval-augmented-generation/ "Retrieval-Augmented Generation (RAG) - Pinecone"
[2]: https://n8n.io/integrations/agent/ "AI Agent integrations | Workflow automation with n8n"
[3]: https://blog.n8n.io/rag-chatbot/ "Build a Custom Knowledge RAG Chatbot using n8n"
[4]: https://n8n.io/workflows/4552-index-documents-from-google-drive-to-pinecone-with-openai-embeddings-for-rag/ "Index Documents from Google Drive to Pinecone with OpenAI Embeddings - n8n"
[5]: https://n8n.io/workflows/4501-build-and-query-rag-system-with-google-drive-openai-gpt-4o-mini-and-pinecone/ "Build & Query RAG System with Google Drive, OpenAI GPT-4o-mini and Pinecone - n8n"
[6]: https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/ "Chat with GitHub API Documentation: RAG-Powered Chatbot with Pinecone and OpenAI - n8n"
by Emmanuel Bernard
**Automatically Add Captions to Your Video**

**Who Is This For?**

This workflow is ideal for content creators, marketers, educators, and businesses that regularly produce video content and want to enhance accessibility and viewer engagement by effortlessly adding subtitles.

**What Problem Does This Workflow Solve?**

Manually adding subtitles or captions to videos is tedious and time-consuming, yet accurate captions significantly boost viewer retention, accessibility, and SEO rankings.

**What Does This Workflow Do?**

This automated workflow quickly adds accurate subtitles to your video content by leveraging the Json2Video API:

- It accepts a publicly accessible video URL as input.
- It makes an HTTP request to Json2Video, where AI analyzes the video, generates captions, and applies them seamlessly.
- The workflow returns a URL to the final subtitled video.
- The second part of the workflow polls the Json2Video API every 10 seconds to monitor the processing status.

👉🏻 Try Json2Video for Free 👈🏻

**Key Features**

- **Automatic & Synced Captions**: Captions are generated automatically and synchronized perfectly with your video.
- **Fully Customizable Design**: Easily adjust fonts, colors, sizes, and more to match your unique style.
- **Word-by-Word Display**: Supports precise, word-by-word captioning for improved clarity and viewer engagement.
- **Super Fast Processing**: Rapid caption generation saves time, allowing you to focus more on creating great content.

**Preconditions**

To use this workflow, you must have:

- A Json2Video API account.
- A video hosted at a publicly accessible URL.

**Why You Need This Workflow**

Adding subtitles to your videos significantly enhances their reach and effectiveness by:

- Improving SEO visibility, enabling search engines to effectively index your video content.
- Enhancing viewer engagement and accessibility, accommodating viewers who watch without sound or who have hearing impairments.
- Streamlining your content production process, allowing more focus on creativity.

**Specific Use Cases**

- **Social Media Content**: Boost viewer retention by adding subtitles.
- **Educational Videos**: Enhance understanding and improve learning outcomes.
- **Marketing Videos**: Reach broader and more diverse audiences.
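The core of the workflow is a submit-then-poll loop against the Json2Video API. The sketch below (Node.js 18+) shows that pattern; treat the endpoint paths, header name, and payload fields as assumptions to verify against the current Json2Video API reference.

```javascript
// Rough sketch of the submit-then-poll pattern this workflow implements.
// Endpoint paths, header names, and payload fields are assumptions to
// verify against Json2Video's current API documentation.
const API_KEY = process.env.JSON2VIDEO_API_KEY;

async function addCaptions(videoUrl) {
  // Submit the render job: one scene containing the source video, plus
  // an auto-generated subtitles element.
  const submit = await fetch('https://api.json2video.com/v2/movies', {
    method: 'POST',
    headers: { 'x-api-key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      scenes: [{ elements: [{ type: 'video', src: videoUrl }] }],
      elements: [{ type: 'subtitles' }],
    }),
  });
  const { project } = await submit.json();

  // Poll every 10 seconds, as the workflow does, until rendering ends.
  while (true) {
    await new Promise((r) => setTimeout(r, 10_000));
    const res = await fetch(`https://api.json2video.com/v2/movies?project=${project}`, {
      headers: { 'x-api-key': API_KEY },
    });
    const { movie } = await res.json();
    if (movie.status === 'done') return movie.url; // URL of the subtitled video
    if (movie.status === 'error') throw new Error(movie.message);
  }
}
```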
by Harshil Agrawal
This workflow demonstrates the use of the $item(index) method, which is useful when you want to reference an item at a particular index. The example workflow makes POST HTTP requests to a dummy URL.

- Set node: Sets the API key that will be used later in the workflow. This node returns a single item. It can be replaced with other nodes, based on the use case.
- Customer Datastore node: Returns the data of customers that will be sent in the body of the HTTP request. This node returns 5 items. It can be replaced with other nodes, based on the use case.
- HTTP Request node: Uses the information from both the Set node and the Customer Datastore node. Since this node runs 5 times, once for each item of the Customer Datastore node, the API key needs to be referenced 5 times; however, the Set node returns the API key only once. Using the expression {{ $item(0).$node["Set"].json["apiKey"] }}, you tell n8n to use the same API key for all 5 requests.
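For example, if the HTTP Request node passes the key as an Authorization header, the parameter could be configured like this (the header name and Bearer scheme are illustrative; use whatever your API expects):

```
Header Parameters
  Name:  Authorization
  Value: Bearer {{ $item(0).$node["Set"].json["apiKey"] }}
```

Pinning the index to 0 reuses the Set node's single item for every one of the 5 executions; without $item(0), n8n would look for a matching item at the current index (1 through 4) in the Set node, which only has one item.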
by Tom
This workflow automatically deletes user data from different apps/services when a specific slash command is issued in Slack. Watch this talk and demo to learn more about this use case. The demo uses Slack, but Mattermost is Slack-compatible, so you can also connect Mattermost in this workflow.

**Prerequisites**

- Accounts and credentials for the apps/services you want to use.
- Some basic knowledge of JavaScript.

**Nodes**

- Webhook node triggers the workflow when a Slack slash command is issued.
- IF nodes confirm Slack's verification token and verify that the data has the expected format.
- Set node simplifies the payload.
- Switch node chooses the correct path for the operation to perform.
- Respond to Webhook nodes send responses back to Slack.
- Execute Workflow nodes call sub-workflows tailored to deleting data from each individual service.
- Function node, Crypto node, and Airtable node generate and store a log entry containing a hash value.
- HTTP Request node sends the final response back to Slack.
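As a minimal sketch, the verification and logging steps could look like this inside the Function node. The payload fields (token, command, text) follow Slack's slash-command format; the summary layout and what exactly gets hashed are illustrative, not taken from the workflow.

```javascript
// Sketch of the verification and log-entry steps (n8n Function node).
// Slack's slash-command payload includes token, command, and text; the
// summary layout below is illustrative.
const payload = items[0].json.body;

// 1. Confirm Slack's verification token before doing anything destructive.
if (payload.token !== 'YOUR_SLACK_VERIFICATION_TOKEN') {
  throw new Error('Invalid Slack verification token');
}

// 2. Build a log entry; a downstream Crypto node can hash it (e.g.
//    SHA-256) so Airtable stores proof of the deletion without keeping
//    the user's data in plain text.
const summary = `${payload.command} ${payload.text} @ ${new Date().toISOString()}`;
return [{ json: { summary } }];
```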
by AI Incarnation
This n8n template empowers IT support teams by automating document ingestion and instant query resolution through a conversational AI. It integrates Google Drive, Pinecone, and a Chat AI agent (using Google Gemini/OpenRouter) to transform static support documents into an interactive, searchable knowledge base. With two interlinked workflows—one for processing support documents and one for handling chat queries—employees receive fast, context-aware answers directly from your support documentation.

**Overview**

Document Ingestion Workflow:

- **Google Drive Trigger**: Monitors a specified folder for new file uploads (e.g., updated support documents).
- **File Download & Extraction**: Automatically downloads new files and extracts text content.
- **Data Cleaning & Text Splitting**: Utilizes a Code node to remove line breaks, trim extra spaces, and strip special characters, while a text splitter segments the content into manageable chunks.
- **Embedding & Storage**: Generates text embeddings using Google Gemini and stores them in a Pinecone vector store for rapid similarity search.

Chat Query Workflow:

- **Chat Trigger**: Initiates when an employee sends a support query.
- **Vector Search & Context Retrieval**: Retrieves the top relevant document segments from Pinecone based on similarity scores.
- **Prompt Construction**: A Code node combines the retrieved document snippets with the user's query into a detailed prompt (see the sketch at the end of this description).
- **AI Agent Response**: The constructed prompt is sent to an AI agent (using the OpenRouter Chat Model) to generate a clear, step-by-step solution.

**Key Benefits & Use Case**

Imagine a large organization where every IT support document—from troubleshooting guides to system configurations—is stored in a single Google Drive folder. When an employee encounters an issue (e.g., "How do I reset my VPN credentials?"), they simply type the query into a chat interface. Instantly, the workflow retrieves the most relevant context from the ingested documents and provides a detailed, actionable answer. This process reduces resolution times, enhances support consistency, and significantly lightens the load on IT staff.

**Prerequisites**

- A valid Google Drive account with access to the designated folder.
- A Pinecone account for storing and retrieving text embeddings.
- **Google Gemini** (or **OpenRouter**) credentials to power the Chat AI agent.
- An operational n8n instance configured with the necessary nodes and credentials.

**Workflow Details**

1. Document Ingestion Workflow
   - **Google Drive Trigger Node**: Listens for file creation events in the specified folder.
   - **Google Drive Download Node**: Downloads the newly added file.
   - **Extract from File Node**: Extracts text content from the downloaded file.
   - **Code Node (Data Cleaning)**: Cleans the extracted text by removing line breaks, trimming spaces, and eliminating special characters.
   - **Recursive Text Splitter Node**: Segments the cleaned text into manageable chunks.
   - **Pinecone Vector Store Node**: Generates embeddings (via Google Gemini) and uploads the chunks to Pinecone.

2. Chat Query Workflow
   - **Chat Trigger Node**: Receives incoming user queries.
   - **Pinecone Vector Store Node (Query)**: Searches for relevant document chunks based on the query.
   - **Code Node (Context Builder)**: Sorts the retrieved documents by relevance and constructs a prompt merging the context with the query.
   - **AI Agent Node**: Sends the prompt to the Chat AI agent, which returns a detailed answer.

**How to Use**

1. Import the Template: Import the template into your n8n instance.
2. Configure the Google Drive Trigger: Set the folder ID (e.g., 1RQvAHIw8cQbtwI9ZvdVV0k0x6TM6H12P) and connect your Google Drive credentials.
3. Set Up Pinecone Nodes: Enter your Pinecone index details and credentials.
4. Configure the Chat AI Agent: Provide your Google Gemini (or OpenRouter) API credentials.
5. Test the Workflows: Validate the document ingestion workflow by uploading a sample support document, then validate the chat query workflow by sending a test query and verifying the returned support information.

**Additional Notes**

- Ensure all credentials (Google Drive, Pinecone, and Chat AI) are correctly set up and tested before deploying the workflows in production.
- The template is fully customizable. Adjust the text cleaning, splitting parameters, or the number of document chunks retrieved based on your support documentation's size and structure.
- This template not only enhances IT support efficiency but also offers a scalable solution for managing and leveraging growing volumes of support content.
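For reference, the Context Builder step could look like the following n8n Code node sketch. The field names (document.pageContent, score, chatInput) follow the typical output shapes of n8n's vector store and chat trigger nodes, but verify them against your own node outputs.

```javascript
// Hedged sketch of the "Context Builder" Code node: sort the retrieved
// chunks by similarity score and merge them with the user's question
// into a single prompt. Field names are assumptions to verify against
// the actual output of your Pinecone and Chat Trigger nodes.
const query = $('Chat Trigger').first().json.chatInput;

const context = items
  .sort((a, b) => b.json.score - a.json.score)       // most relevant first
  .map((item) => item.json.document.pageContent)      // keep only the text
  .join('\n---\n');

const prompt =
  `Answer the employee's IT support question using only the context below.\n` +
  `Give clear, step-by-step instructions.\n\n` +
  `Context:\n${context}\n\nQuestion: ${query}`;

return [{ json: { prompt } }];
```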
by bangank36
This workflow retrieves all users from n8n, compares them against entries in a Google Sheets spreadsheet, and automatically creates new users when needed. Once new users are created, invitation emails are sent automatically. You can trigger the workflow manually or set it to run on a schedule to ensure continuous synchronization.

**Spreadsheet Template**

This workflow is designed to work with a Google Sheets structure inspired by Squarespace's newsletter block connection. You can modify the node settings to adapt to a different column format.

👉 Clone the sample sheet here

Suggested columns:

- Submitted On
- Email Address
- Name

**Requirements**

Credentials: to use this workflow, you need:

- n8n API Key – required to retrieve and create users via the n8n API.
- Google Sheets API credentials – required to get data from the spreadsheet.

**Configure Your n8n Instance**

To make this workflow work with your n8n instance, update the API endpoint: edit the 🔧 Global node 👇 and change n8n_url to match your instance URL. A rough code sketch of the sync logic appears below.

**Explore More Templates**

👉 Check out my other n8n templates
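Here is the sync logic referenced above as a standalone Node.js 18+ sketch. The endpoint paths, the X-N8N-API-KEY header, and especially the POST body shape are assumptions based on the n8n Public API; verify them against your instance's API reference (usually served at <n8n_url>/api/v1/docs).

```javascript
// Hedged sketch of the sync logic: diff spreadsheet emails against
// existing n8n users and invite the missing ones. Endpoint paths and the
// POST body shape are assumptions to verify against your instance's
// Public API docs.
const N8N_URL = 'https://your-instance.example.com';
const headers = {
  'X-N8N-API-KEY': process.env.N8N_API_KEY,
  'Content-Type': 'application/json',
};

async function syncUsers(sheetEmails) {
  // Fetch existing users from the n8n Public API.
  const res = await fetch(`${N8N_URL}/api/v1/users`, { headers });
  const existing = new Set((await res.json()).data.map((u) => u.email));

  // Only create users that are in the sheet but not yet in n8n.
  const missing = sheetEmails.filter((email) => !existing.has(email));
  if (missing.length === 0) return [];

  const create = await fetch(`${N8N_URL}/api/v1/users`, {
    method: 'POST',
    headers,
    body: JSON.stringify(missing.map((email) => ({ email, role: 'global:member' }))),
  });
  return create.json(); // n8n sends invitation emails to the new users
}
```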
by Don Jayamaha Jr
Get deep insights into NFT market trends, sales data, and collection statistics—all powered by AI and OpenSea! This workflow connects GPT-4o-mini, the OpenSea API, and n8n automation to provide real-time analytics on NFT collections, wallet transactions, and market trends. It is ideal for NFT traders, collectors, and investors looking to make informed decisions based on structured data.

**How It Works**

1. Receives user queries via Telegram, webhooks, or another connected interface.
2. Determines the correct API tool based on the request (e.g., collection stats, wallet transactions, event tracking).
3. Retrieves data from the OpenSea API (requires an API key).
4. Processes the information using an AI-powered analytics agent.
5. Returns structured insights in an easy-to-read format for quick decision-making.

**What You Can Do with This Agent**

🔹 Retrieve NFT Collection Stats → Get floor price, volume, sales data, and market cap.
🔹 Track Wallet Activity → Analyze transactions for a given wallet address.
🔹 Monitor NFT Market Trends → Track historical sales, listings, bids, and transfers.
🔹 Compare Collection Performance → View side-by-side market data for different NFT projects.
🔹 Analyze NFT Transaction History → Check real-time ownership changes for any NFT.
🔹 Identify Market Shifts → Detect sudden spikes in demand, price changes, and whale movements.

**Example Queries You Can Use**

✅ "Get stats for the Bored Ape Yacht Club collection."
✅ "Show me all NFT sales from the last 24 hours."
✅ "Fetch all NFT transfers for wallet 0x123...abc on Ethereum."
✅ "Compare the last 3 months of sales volume for Azuki and CloneX."
✅ "Track the top 10 wallets making the most NFT purchases this week."

**Available API Tools & Endpoints**

1️⃣ Get Collection Stats → /api/v2/collections/{collection_slug}/stats (Retrieve NFT collection-wide market data)
2️⃣ Get Events → /api/v2/events (Fetch global NFT sales, transfers, listings, bids, redemptions)
3️⃣ Get Events by Account → /api/v2/events/accounts/{address} (Track transactions by wallet)
4️⃣ Get Events by Collection → /api/v2/events/collection/{collection_slug} (Get sales activity for a collection)
5️⃣ Get Events by NFT → /api/v2/events/chain/{chain}/contract/{address}/nfts/{identifier} (Retrieve historical transactions for a specific NFT)

**Set Up Steps**

1. Get an OpenSea API Key: sign up at OpenSea API and request an API key.
2. Configure API Credentials in n8n: add your OpenSea API key under HTTP Header Authentication.
3. Connect the Workflow to Telegram, Slack, or a Database (optional): use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
4. Deploy and Test: send a query (e.g., "Azuki latest sales") and receive instant NFT market insights!

Stay ahead in the NFT market—get real-time analytics with OpenSea's AI-powered analytics agent!
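As an illustration of tool 1️⃣ above, the sketch below fetches collection stats with Node.js 18+. The endpoint matches the path listed in this description; the response field used in the usage example is illustrative, so check OpenSea's API reference for the exact shape.

```javascript
// Sketch of the "Get Collection Stats" tool: calls the endpoint listed
// above with the x-api-key header that OpenSea's v2 API expects.
const OPENSEA_API_KEY = process.env.OPENSEA_API_KEY;

async function getCollectionStats(slug) {
  const res = await fetch(
    `https://api.opensea.io/api/v2/collections/${slug}/stats`,
    { headers: { accept: 'application/json', 'x-api-key': OPENSEA_API_KEY } }
  );
  if (!res.ok) throw new Error(`OpenSea API error: ${res.status}`);
  return res.json();
}

// e.g. aggregate stats for Bored Ape Yacht Club; the `total` field name
// is illustrative of the aggregate block in the response.
getCollectionStats('boredapeyachtclub').then((stats) => console.log(stats.total));
```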