by Amjid Ali
Detailed Title "Triathlon Coach AI Workflow: Strava Data Analysis and Personalized Training Insights using n8n" Description This n8n workflow enables you to build an AI-driven virtual triathlon coach that seamlessly integrates with Strava to analyze activity data and provide athletes with actionable training insights. The workflow processes data from activities like swimming, cycling, and running, delivers personalized feedback, and sends motivational and performance improvement advice via email or WhatsApp. Workflow Details Trigger: Strava Activity Updates Node:** Strava Trigger Purpose:** Captures updates from Strava whenever an activity is recorded or modified. The data includes metrics like distance, pace, elevation, heart rate, and more. Integration:** Uses Strava API for real-time synchronization. Step 1: Data Preprocessing Node:** Code Purpose:** Combines and flattens the raw Strava activity data into a structured format for easier processing in subsequent nodes. Logic:** A recursive function flattens JSON input to create a clean and readable structure. Step 2: AI Analysis with Google Gemini Node:** Google Gemini Chat Model Purpose:** Leverages Google Gemini's advanced language model to analyze the activity data. Functionality:** Identifies key performance metrics. Provides feedback and insights specific to the type of activity (e.g., running, swimming, or cycling). Offers tailored recommendations and motivational advice. Step 3: Generate Structured Output Node:** Structure Output Purpose:** Processes the AI-generated response to create a structured format, such as headings, paragraphs, and bullet lists. Output:** Formats the response for clear communication. Step 4: Convert to HTML Node:** Convert to HTML Purpose:** Converts the structured output into an HTML format suitable for email or other presentation methods. Output:** Ensures the response is visually appealing and easy to understand. Step 5: Send Email with Training Insights Node:** Send Email Purpose:** Sends a detailed email to the athlete with performance insights, training recommendations, and motivational messages. Integration:** Utilizes Gmail or SMTP for secure and efficient email delivery. Optional Step: WhatsApp Notifications Node:** WhatsApp Business Cloud Purpose:** Sends a summary of the activity analysis and key recommendations via WhatsApp for instant access. Integration:** Connects to WhatsApp Business Cloud for automated messaging. Additional Notes Customization: You can modify the AI prompt to adapt the recommendations to the athlete's specific goals or fitness levels. The workflow is flexible and can accommodate additional nodes for more advanced analysis or output formats. Scalability: Ideal for individual athletes or coaches managing multiple athletes. Can be expanded to include additional metrics or insights based on user preferences. Performance Metrics Handled: Swimming: SWOLF, stroke count, pace. Cycling: Cadence, power zones, elevation. Running: Pacing, stride length, heart rate zones. Implementation Steps Set Up Strava API Key: Log in to Strava Developers to generate your API key. Integrate the API key into the Strava Trigger node. Configure Google Gemini Integration: Use your Google Gemini (PaLM) API credentials in the Google Gemini Chat Model node. Customize Email and WhatsApp Messaging: Update the Send Email and WhatsApp Business Cloud nodes with the recipient’s details. Automate Execution: Deploy the workflow and use n8n's scheduling features or cron jobs for periodic execution. 
## Developer Notes

- **Author:** Amjid Ali
- **Resources:** See it in action on the SyncBricks YouTube channel; support the developer via PayPal; courses available through SyncBricks LMS.

By using this workflow, triathletes and coaches can elevate training to the next level with AI-powered insights and actionable recommendations.
by Yaron Been
## Description

This workflow automatically discovers and collects information about upcoming events in your area or industry. It saves you time by eliminating the need to manually check multiple event websites and provides a centralized database of relevant events.

## Overview

The workflow automatically scrapes websites for upcoming events in your area or industry and compiles them into a structured format. It uses Bright Data to access event listing websites and extract event details like dates, locations, and descriptions.

## Tools Used

- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** For scraping event websites without being blocked.
- **Calendar/Database:** For storing and organizing event information.

## How to Install

1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data credentials to the Bright Data node.
3. **Set Up Data Storage:** Configure where you want to store the event data.
4. **Customize:** Specify locations, event types, and date ranges to monitor.

## Use Cases

- **Event Planners:** Stay updated on competing or complementary events.
- **Community Managers:** Discover local events to share with your community.
- **Marketing Teams:** Find industry events for networking opportunities.

## Connect with Me

- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)
by Davide
## How It Works

1. **Form Submission:** The workflow starts with the On form submission node, which triggers when a user submits a contact form. The form collects the user's name, email, and message.
2. **Text Classification:** The Text Classifier node uses an AI model (GPT-4) to classify the submitted message into one of the predefined categories:
   - **Request Quote:** For quote requests.
   - **Product info:** For general product inquiries.
   - **General problem:** For issues or problems related to products.
   - **Order:** For questions about placed orders.
   - **Other:** For any messages that don't fit the above categories.
3. **Email Routing:** Based on the classification, the workflow routes the message to the appropriate department via email: Prod. Dep. (product-related inquiries), Quote Dep. (quote requests), Gen. Dep. (general problems), Order Dep. (order-related questions), and Other Dep. (all other inquiries). Each email includes the user's name, email, message, and the classified category.
4. **Data Logging:** The workflow logs the form submission and classification results in a Google Sheets document. Each department has its own sheet where the data is appended, including the user's name, email, and message; the submission date and time; the assigned category; and the email recipient details.
5. **AI Model Integration:** The OpenAI node provides the AI model (GPT-4) used by the Text Classifier. The model is instructed to classify the text into one of the predefined categories without additional explanations (an example prompt appears at the end of this description).

## Set Up Steps

1. **Configure the Form Trigger:** Set up the On form submission node to collect user inputs (name, email, and message) and trigger the workflow.
2. **Set Up the Text Classifier:** Configure the Text Classifier node to use the OpenAI model (GPT-4) for text classification. Define the categories and their descriptions (e.g., "Request Quote", "Product info"), and set the fallback category to "Other" for unclassifiable messages.
3. **Configure Email Sending:** Set up the Email Send nodes for each department (Prod. Dep., Quote Dep., Gen. Dep., Order Dep., Other Dep.). Configure the email subject, body, and reply-to address using the form data and classification results, and ensure SMTP credentials are correctly configured.
4. **Set Up Google Sheets Integration:** Configure the Google Sheets nodes to append data to the appropriate sheet for each department, mapping the form data (name, email, message, date, category, and recipient) to the corresponding columns.
5. **Test the Workflow:** Submit a test form to ensure the workflow correctly classifies the message, sends the email to the right department, and logs the data in Google Sheets. Verify that the OpenAI model is classifying messages accurately.
6. **Activate the Workflow:** Once tested, activate the workflow to automate the handling of contact form submissions.

## Key Features

- **Automated Classification:** Uses AI to classify messages into relevant categories, reducing manual effort.
- **Email Routing:** Sends emails to the appropriate department based on the classification.
- **Data Logging:** Logs all form submissions and classification results in Google Sheets for tracking and analysis.
- **Scalability:** Easily adaptable to additional categories or departments by modifying the workflow.

This workflow is ideal for eCommerce businesses or customer support teams looking to automate and streamline the handling of contact form submissions.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
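A minimal example of the kind of instruction the Text Classifier can be given; the wording is illustrative (not copied from the workflow), and `{{ $json.message }}` assumes the form field is named `message` in your setup:

```text
Classify the following customer message into exactly one of these categories:
"Request Quote", "Product info", "General problem", "Order", "Other".

Reply with the category name only, with no additional explanation.

Message:
{{ $json.message }}
```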
by Davide
## How It Works

This workflow automates the handling of job applications by extracting relevant information from submitted CVs, analyzing each candidate's qualifications against a predefined profile, and storing the results in a Google Sheet.

1. **Data Collection and Extraction:** The workflow begins with a form submission (On form submission node), which triggers the extraction of data from the uploaded CV file using the Extract from File node. Two Information Extractor nodes (Qualifications and Personal Data) parse specific details such as educational background, work history, skills, city, birthdate, and telephone number from the text content of the CV.
2. **Processing and Evaluation:** A Merge node combines the extracted personal and qualification data into a single output. The merged data is passed through a Summarization Chain that generates a concise summary of the candidate's profile. An HR Expert chain then evaluates the candidate against a desired profile (Profile Wanted), assigning a score and providing considerations for hiring. Finally, all collected and processed data, including the evaluation results, is appended to a Google Sheets document via the Google Sheets node for further review or reporting (a sample record appears at the end of this description).

## Set Up Steps

To replicate this workflow in your own n8n environment:

1. **Configuration:** Set up an n8n instance if you haven't already; you can sign up directly on the n8n website or self-host the application. Import the provided JSON configuration into your workspace. Ensure that all necessary credentials (e.g., Google Drive, Google Sheets, OpenAI API keys) are correctly configured under the Credentials section, since several nodes rely on external services such as the Google APIs and OpenAI for language processing.
2. **Customization:** Adjust the parameters of each node to your specific requirements. For example, modify the fields in the Form Trigger node to match the information you want to collect from applicants. Customize the prompts given to the AI models in the Qualifications, Summarization Chain, and HR Expert nodes so they align with the analyses you want performed on the candidates' profiles. Update the destination settings in the Google Sheets node to point to the spreadsheet where the final output should be recorded.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
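To make the output concrete, here is a hypothetical example of the record appended to Google Sheets after evaluation; every field name and value below is invented for illustration, since the actual columns depend on how you configure the extractor and sheet nodes:

```json
{
  "name": "Jane Doe",
  "telephone": "+39 333 123 4567",
  "city": "Milan",
  "birthdate": "1992-04-12",
  "education": "MSc in Computer Science",
  "work_history": "5 years as a backend developer",
  "skills": "Python, SQL, Docker",
  "summary": "Backend developer with strong data-engineering experience.",
  "score": 8,
  "considerations": "Strong technical match; verify availability for on-site work."
}
```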
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically scrapes and summarizes the latest industry news, delivering a curated digest to your team. Stay informed without sifting through countless articles.

## Overview

Bright Data scrapes top news sites, blogs, and press release feeds relevant to your sector. OpenAI summarizes each article and tags it by topic. The daily digest is compiled into Markdown and sent via Slack and email, while full summaries are archived in Notion.

## Tools Used

- **n8n:** Automation framework
- **Bright Data:** Scrapes news sources reliably
- **OpenAI:** Generates concise summaries and tags
- **Slack & Gmail:** Distribute the daily digest
- **Notion:** Stores detailed article notes

## How to Install

1. **Import the Workflow** into n8n.
2. **Configure Bright Data** credentials.
3. **Set Up** your OpenAI API key.
4. **Authorize** Slack, Gmail, and Notion.
5. **Customize** the source list and keywords in the Set node.

## Use Cases

- **Executive Briefings:** Keep leadership updated.
- **Product Teams:** Track competitor announcements.
- **Marketing:** Identify content trends quickly.
- **Investors:** Monitor sector developments.

## Connect with Me

- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically tracks brand mentions across online platforms by scraping blog posts and articles for specific brand references. It saves you time by eliminating the need to manually search for brand mentions and provides sentiment analysis on how your brand is being discussed online.

## Overview

The workflow automatically scrapes Medium blog posts and other online content to find mentions of specific brands (such as OpenAI) and performs sentiment analysis on the content. It uses Bright Data to access content without restrictions and AI to intelligently extract brand-related information, analyze sentiment, and summarize key points about brand coverage.

## Tools Used

- **n8n:** The automation platform that orchestrates the workflow
- **Bright Data:** For scraping blog posts and articles without being blocked
- **OpenAI:** AI agent for intelligent content analysis and sentiment extraction
- **Google Sheets:** For storing brand mention data and sentiment analysis results

## How to Install

1. **Import the Workflow:** Download the .json file and import it into your n8n instance
2. **Configure Bright Data:** Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI:** Configure your OpenAI API credentials
4. **Configure Google Sheets:** Connect your Google Sheets account and set up your brand monitoring spreadsheet
5. **Customize:** Define the target URLs and brand keywords to monitor

## Use Cases

- **Brand Monitoring:** Track how your brand is mentioned and discussed online
- **Public Relations:** Monitor media coverage and public sentiment about your brand
- **Competitive Intelligence:** Track mentions of competitor brands and market perception
- **Crisis Management:** Quickly identify negative brand mentions for rapid response

## Connect with Me

- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)
by Abdullahi Ahmed
# RAG AI Agent for Documents in Google Drive → Pinecone → OpenAI Chat (n8n workflow)

## Short Description

This n8n workflow implements a Retrieval-Augmented Generation (RAG) pipeline plus an AI agent, allowing users to drop documents into a Google Drive folder and then ask questions about them via a chatbot. New files are indexed automatically into a Pinecone vector store using OpenAI embeddings; at query time the AI agent loads the relevant chunks and answers using that context plus conversation memory.

## Why This Workflow Matters / What Problem It Solves

- Large language models (LLMs) are powerful, but they lack up-to-date, domain-specific knowledge. RAG augments the LLM with relevant external documents, reducing hallucination and enabling precise answers. (Pinecone)
- This workflow automates the ingestion, embedding, storage, retrieval, and chat logic with minimal manual work.
- It's modular: you can swap data sources, vector DBs, or LLMs (with some adjustments).
- It leverages n8n's built-in AI Agent node to tie all the parts together. (n8n)

## How to Get the Required Credentials

| Service | Purpose in Workflow | Setup Link | What You Need / Steps |
| --- | --- | --- | --- |
| Google Drive (OAuth2) | Trigger on new file events & download the file | https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/ | Create a Google Cloud OAuth app, grant it Drive scopes, get the client ID & secret, configure the redirect URI, and paste them into n8n credentials. |
| Pinecone | Vector database for embeddings | https://docs.n8n.io/integrations/builtin/credentials/pinecone/ | Sign up at Pinecone, create an index in the dashboard, get the API key + environment, and paste them into an n8n credential. |
| OpenAI | Embeddings + chat model | https://docs.n8n.io/integrations/builtin/credentials/openai/ | Log in to OpenAI, generate a secret API key, and paste it into n8n credentials. |

You'll configure these under n8n → Credentials → New Credential, matching the credential names referenced in the workflow nodes.

## Detailed Walkthrough: How the Workflow Works

Step by step, here is what happens inside the workflow (matching the workflow JSON):

1. **Google Drive Trigger** — Watches a specified folder in Google Drive. Whenever a new file appears (fileCreated event), the workflow is triggered (polling every minute). You must set the folder ID (in "folderToWatch") to the Drive folder you want to monitor.
2. **Download File** — Takes the file ID from the trigger and downloads the file content (binary).
3. **Indexing Path: Embeddings + Storage** (runs only when new files arrive) — The file is sent to the Default Data Loader node (via the Recursive Character Text Splitter) to break it into chunks with overlap, so context is preserved (a splitter sketch appears at the end of this entry). Each chunk is fed into Embeddings OpenAI to convert the text into embedding vectors, and the Pinecone Vector Store node (insert mode) ingests the vectors plus text metadata into your Pinecone index. This keeps your vector store up to date with the files you drop into Drive.
4. **Chat / Query Path** (triggered by user chat via webhook) — When a chat message arrives via When Chat Message Received, it is passed to the AI Agent node. Before generation, the agent calls Pinecone Vector Store1, set to "retrieve-as-tool" mode, which runs a vector-based retrieval using the embedding of the user query; the relevant text chunks are pulled in as tool context. The OpenAI Chat Model node is linked as the agent's language model, and the **Simple Memory** node provides conversational memory (keeping history across messages). The agent combines retrieved context, memory, and user input, and instructs the model to produce a response.
5. **Connections / Flow Logic** — The Embeddings OpenAI node's output is wired into Pinecone Vector Store (insert) and also into Pinecone Vector Store1, so the same embeddings can be used for retrieval. The AI Agent has tool access to Pinecone retrieval and memory. The Download File node feeds the insert path; the When Chat Message Received trigger feeds the agent path.

## Similar Workflows / Inspirations & Comparisons

To place this workflow among what's already out there:

- **n8n Blog: "Build a custom knowledge RAG chatbot"** — a workflow that ingests documents from external sources, indexes them in Pinecone, and responds to queries via n8n + an LLM. (n8n Blog)
- **Index Documents from Google Drive to Pinecone** — nearly identical for the ingestion part: trigger on Drive, split, embed, upload. (n8n)
- **Build & Query RAG System with Google Drive, OpenAI, Pinecone** — shows the full RAG + chat logic, same pattern. (n8n)
- **Chat with GitHub API Documentation (RAG)** — demonstrates converting an API spec into chunks, embedding, retrieving, and chatting. (n8n)
- **Community tutorials & forums** discuss using the AI Agent node with tools like Pinecone, and how the RAG part is often built as a sub-workflow feeding an agent. (n8n Community)

What sets this workflow apart is the explicit combination: Google Drive → automatic ingestion → chat agent with tool integration + memory. Many templates show either ingestion or chat; fewer combine them cleanly with n8n's AI Agent.

## Suggested Published Description

> **RAG AI Agent for Google Drive Documents (n8n workflow)**
>
> This workflow turns a Google Drive folder into a live, queryable knowledge base. Drop PDF, docx, or text files into the folder → new documents are automatically indexed into a Pinecone vector store using OpenAI embeddings → ask questions via a webhook chat interface, and the AI agent will retrieve the relevant text, combine it with memory, and answer in context.
>
> **Credentials needed**
>
> - Google Drive OAuth2 (see: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/)
> - Pinecone (see: https://docs.n8n.io/integrations/builtin/credentials/pinecone/)
> - OpenAI (see: https://docs.n8n.io/integrations/builtin/credentials/openai/)
>
> **How it works**
>
> 1. The Drive trigger picks up new files
> 2. Download, split, embed, insert into Pinecone
> 3. The chat webhook triggers the AI Agent
> 4. The agent retrieves relevant chunks + memory
> 5. The agent uses the OpenAI model to craft an answer
>
> This is built on the core RAG pattern (ingest → retrieve → generate) and enhanced by n8n's AI Agent node for clean tool integration.
>
> **Inspiration & context**
> This approach follows best practices from existing n8n RAG tutorials and templates, such as the "Index Documents from Google Drive to Pinecone" ingestion workflow and the "Build & Query RAG System" templates. (n8n)
>
> You're free to swap out the data source (e.g. Dropbox, S3) or vector DB (e.g. Qdrant) as long as you adjust the relevant nodes.
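For readers curious what the Recursive Character Text Splitter does under the hood, here is a minimal standalone sketch using the LangChain splitter class that n8n's node wraps; the chunk size and overlap values are illustrative, not the workflow's actual settings:

```javascript
// Minimal sketch of recursive character splitting (illustrative values).
// n8n's Recursive Character Text Splitter node wraps this LangChain class.
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // max characters per chunk (assumed value)
  chunkOverlap: 200, // overlap preserves context across chunk boundaries
});

const text = 'Long document text extracted from the Drive file...';
const chunks = await splitter.splitText(text);
// Each chunk is then embedded and inserted into Pinecone with its metadata.
console.log(chunks.length, chunks[0]);
```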
References

[1]: https://www.pinecone.io/learn/retrieval-augmented-generation/ "Retrieval-Augmented Generation (RAG) - Pinecone"
[2]: https://n8n.io/integrations/agent/ "AI Agent integrations | Workflow automation with n8n"
[3]: https://blog.n8n.io/rag-chatbot/ "Build a Custom Knowledge RAG Chatbot using n8n"
[4]: https://n8n.io/workflows/4552-index-documents-from-google-drive-to-pinecone-with-openai-embeddings-for-rag/ "Index Documents from Google Drive to Pinecone with OpenAI Embeddings for RAG"
[5]: https://n8n.io/workflows/4501-build-and-query-rag-system-with-google-drive-openai-gpt-4o-mini-and-pinecone/ "Build & Query RAG System with Google Drive, OpenAI GPT-4o-mini and Pinecone"
[6]: https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/ "Chat with GitHub API Documentation: RAG-Powered Chatbot with Pinecone and OpenAI"
by Sirisak Chantanate
## Workflow Overview

Extracting text from images with AI is worthwhile because it requires no code. This workflow uses the Google Gemini 2.0 Flash model to extract the important text from an image. Without AI you would have to write code with many conditions, which invites bugs; with Google Gemini, no coding is needed, and even if a pay slip layout differs, Gemini extracts the fields automatically.

## Workflow Description

1. The user sends a pay slip image or a text message to the chatbot via the Line Messaging API (create a Line Business ID here: Line Business).
2. The workflow classifies the incoming message as image or text.
3. If the message is a pay slip image, it is processed with Gemini 2.0 Flash EXP, which extracts the important information and responds in JSON format, without coding, using the following prompt (an example response appears at the end of this description):

   Analyze image and then return in JSON Response that has the only following value: Status, From, To, Date, Amount

4. To get a Google AI Studio API key, use the following link: Google AI Studio API Key.
5. Create a Google Sheets document that includes the fields (Status, From, To, Date, Amount) defined in the AI prompt.
6. If the message is text, it is handled by the Gemini 2.0 Flash EXP model acting as an AI assistant; if the message is an image, the important fields are extracted, a reply is sent to the user, and the data is inserted into Google Sheets.

## Key Features

- **Extract text from images with no code:** Without n8n, we would have to write code to extract text from images; with n8n and Google Gemini 2.0 Flash EXP together, no coding is needed, and the workflow handles slips (or other documents) from any vendor.
- **Multipurpose chatbot:** The chatbot accepts both text and images, so you don't have to create multiple chatbot accounts.
- **Reduced human error:** The workflow lets an officer verify the document status once the job completes.

Note: You can change the extracted information by adjusting the prompt, along with the corresponding Google Sheets column names.
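Given that prompt, Gemini returns a flat JSON object with the five requested keys; a hypothetical example of a response for a transfer slip (all values invented for illustration):

```json
{
  "Status": "Transfer successful",
  "From": "John Smith",
  "To": "ACME Co., Ltd.",
  "Date": "2025-01-15 14:32",
  "Amount": "1,250.00"
}
```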
by Udit Rawat
This workflow automates and centralizes your bookmarking process using AI-powered tagging and seamless integration between your Android device and a self-hosted Readeck platform (https://readeck.org/en/). It eliminates manual entry, organizes links with smart AI-generated tags, and ensures your bookmarks are always accessible, searchable, and secure.

## How It Works

📱 **Android Shortcut Integration:** Use the HTTP Shortcuts app to create a one-tap trigger that sends URLs and titles from your Android phone directly to n8n (an example payload is sketched at the end of this description).

🤖 **AI-Powered Tagging & Processing:** Leverage GPT-4 to analyze content context and auto-generate relevant tags (e.g., "Tech Tutorials", "Productivity Tools"), and to extract clean titles and URLs from messy shared data (even from apps like Twitter or Reddit).

🔗 **Readeck Integration:** Automatically save processed bookmarks to your self-hosted Readeck instance with structured metadata (title, URL, tags).

⚡ **Silent Automation:** Runs in the background—no pop-ups or interruptions.

🔒 **Pro Security:** Optional authentication (API tokens, headers) to protect your data.

## Use Case

Perfect for researchers, content creators, or anyone drowning in tabs who wants to:

- Save articles, videos, or social posts in one click.
- Organize bookmarks with AI-generated tags.
- Build a personal knowledge base that's always accessible.

## Tutorial

1️⃣ **Set Up the Android Shortcut:** Install HTTP Shortcuts and configure it to send data to your n8n webhook. Enable the Share Menu to trigger bookmarks from any app.

2️⃣ **Configure the n8n Workflow:** Import the template and add your Readeck API token (or that of a similar service).

3️⃣ **Test & Scale:** Share a link from your phone and watch it appear in Readeck instantly! Add error handling or notifications for advanced use.

Note: For self-hosted platforms, ensure your instance is publicly accessible (or use a VPN).

## Why Choose This Workflow?

- **Zero Manual Entry:** Save hours of copying and pasting.
- **AI Organization:** Say goodbye to chaotic bookmark folders.
- **Privacy First:** Host your data on your terms.

Transform your bookmarking chaos into a streamlined system—try "Save: Bookmark" today! 🚀
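A hypothetical example of the JSON body the HTTP Shortcuts app could POST to the n8n webhook; the field names are illustrative and should match whatever your workflow's Webhook node expects:

```json
{
  "url": "https://example.com/interesting-article",
  "title": "An Interesting Article",
  "sharedText": "Check this out: https://example.com/interesting-article"
}
```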
by AI Incarnation
This n8n template empowers IT support teams by automating document ingestion and instant query resolution through a conversational AI. It integrates Google Drive, Pinecone, and a Chat AI agent (using Google Gemini/OpenRouter) to transform static support documents into an interactive, searchable knowledge base. With two interlinked workflows—one for processing support documents and one for handling chat queries—employees receive fast, context-aware answers directly from your support documentation.

## Overview

### Document Ingestion Workflow

- **Google Drive Trigger:** Monitors a specified folder for new file uploads (e.g., updated support documents).
- **File Download & Extraction:** Automatically downloads new files and extracts their text content.
- **Data Cleaning & Text Splitting:** A Code node removes line breaks, trims extra spaces, and strips special characters, while a text splitter segments the content into manageable chunks.
- **Embedding & Storage:** Generates text embeddings using Google Gemini and stores them in a Pinecone vector store for rapid similarity search.

### Chat Query Workflow

- **Chat Trigger:** Initiates when an employee sends a support query.
- **Vector Search & Context Retrieval:** Retrieves the top relevant document segments from Pinecone based on similarity scores.
- **Prompt Construction:** A Code node combines the retrieved document snippets with the user's query into a detailed prompt.
- **AI Agent Response:** The constructed prompt is sent to an AI agent (using the OpenRouter Chat Model) to generate a clear, step-by-step solution.

## Key Benefits & Use Case

Imagine a large organization where every IT support document—from troubleshooting guides to system configurations—is stored in a single Google Drive folder. When an employee encounters an issue (e.g., "How do I reset my VPN credentials?"), they simply type the query into a chat interface. The workflow instantly retrieves the most relevant context from the ingested documents and provides a detailed, actionable answer. This reduces resolution times, enhances support consistency, and significantly lightens the load on IT staff.

## Prerequisites

- A valid Google Drive account with access to the designated folder.
- A Pinecone account for storing and retrieving text embeddings.
- **Google Gemini** (or **OpenRouter**) credentials to power the Chat AI agent.
- An operational n8n instance configured with the necessary nodes and credentials.

## Workflow Details

### 1. Document Ingestion Workflow

- **Google Drive Trigger Node:** Listens for file creation events in the specified folder.
- **Google Drive Download Node:** Downloads the newly added file.
- **Extract from File Node:** Extracts the text content from the downloaded file.
- **Code Node (Data Cleaning):** Cleans the extracted text by removing line breaks, trimming spaces, and eliminating special characters (a sketch appears at the end of this description).
- **Recursive Text Splitter Node:** Segments the cleaned text into manageable chunks.
- **Pinecone Vector Store Node:** Generates embeddings (via Google Gemini) and uploads the chunks to Pinecone.

### 2. Chat Query Workflow

- **Chat Trigger Node:** Receives incoming user queries.
- **Pinecone Vector Store Node (Query):** Searches for relevant document chunks based on the query.
- **Code Node (Context Builder):** Sorts the retrieved documents by relevance and constructs a prompt merging the context with the query.
- **AI Agent Node:** Sends the prompt to the Chat AI agent, which returns a detailed answer.

## How to Use

1. **Import the Template:** Import the template into your n8n instance.
2. **Configure the Google Drive Trigger:** Set the folder ID (e.g., 1RQvAHIw8cQbtwI9ZvdVV0k0x6TM6H12P) and connect your Google Drive credentials.
3. **Set Up the Pinecone Nodes:** Enter your Pinecone index details and credentials.
4. **Configure the Chat AI Agent:** Provide your Google Gemini (or OpenRouter) API credentials.
5. **Test the Workflows:** Validate the document ingestion workflow by uploading a sample support document, then validate the chat query workflow by sending a test query and verifying the returned support information.

## Additional Notes

- Ensure all credentials (Google Drive, Pinecone, and Chat AI) are correctly set up and tested before deploying the workflows in production.
- The template is fully customizable. Adjust the text cleaning, the splitting parameters, or the number of document chunks retrieved based on the size and structure of your support documentation.

This template not only enhances IT support efficiency but also offers a scalable solution for managing and leveraging growing volumes of support content.
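A minimal sketch of what the Data Cleaning Code node can do, assuming the upstream Extract from File node outputs the text in a `text` field; the field name and the exact character whitelist are assumptions to adapt to your setup:

```javascript
// n8n Code node: clean extracted document text before splitting.
// Assumes the extracted text arrives in item.json.text (adjust as needed).
return items.map((item) => {
  const cleaned = (item.json.text || '')
    .replace(/[\r\n]+/g, ' ')            // remove line breaks
    .replace(/[^\w\s.,;:!?'"()-]/g, '')  // strip special characters (tune the whitelist)
    .replace(/\s{2,}/g, ' ')             // collapse repeated whitespace
    .trim();                             // trim leading/trailing spaces
  return { json: { ...item.json, text: cleaned } };
});
```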
by Don Jayamaha Jr
Track NFT market trends, collections, and trades in real time—directly from Telegram! This master workflow integrates the OpenSea API, GPT-4o-mini AI, and Telegram, allowing users to request natural-language NFT analytics and receive structured insights instantly. Whether you're an NFT trader, collector, or market analyst, this Telegram-native assistant brings you on-demand market intelligence—powered by OpenSea and AI.

> ⚠️ Important: This workflow requires three sub-workflows to function properly. These must be downloaded and published in your n8n instance.

## 🧩 Required Sub-Workflows

To activate this template, download and publish the following workflows:

1. Analyze NFT Market Trends with AI-Powered OpenSea Analytics Agent Tool
2. Get Real-time NFT Insights with OpenSea AI-Powered NFT Agent Tool
3. Get Real-time NFT Marketplace Insights with OpenSea Marketplace Agent Tool

📌 You can also find these on my Creator profile: 👉 https://n8n.io/creators/don-the-gem-dealer/

## How It Works

1. A Telegram bot receives a message (e.g., "Top sales for Azuki").
2. The AI router in this workflow determines which agent should process the request:
   - Marketplace Agent → Listings, offers, and orders
   - Analytics Agent → Sales volume, price trends, wallet behavior
   - NFT Agent → Metadata, traits, ownership info
3. The selected agent queries the OpenSea API using your API key.
4. The response is processed with GPT-4o-mini, formatted, and sent back via Telegram.

## What You Can Do with This Agent

🔹 Discover undervalued NFTs based on trait rarity and price
🔹 Track market trends for any collection in real time
🔹 Compare collection performance by volume, sales, and listings
🔹 Analyze flipping trends and whale activity across wallets
🔹 Retrieve NFT ownership and metadata instantly
🔹 View trait-specific offers for insight into rarity-driven demand

## Example Queries You Can Use

✅ "What are the cheapest NFTs in the Pudgy Penguins collection?"
✅ "Get sales volume for Azuki and CloneX over the last 30 days."
✅ "Who owns Bored Ape #456?"
✅ "Show the best current offers for Moonbirds."

## Set Up Steps

1. **Create a Telegram Bot:** Use @BotFather to create your bot and get the API token.
2. **Get an OpenSea API Key:** Apply for your API key via the OpenSea Developer Portal.
3. **Configure n8n Credentials:** Add your Telegram bot token and OpenSea API key under Credentials in n8n.
4. **Download the Required Sub-Workflows:** Install and publish the Analytics Agent Tool, NFT Agent Tool, and Marketplace Agent Tool.
5. **Deploy & Test:** Chat with your Telegram bot. Try: "Compare BAYC and Azuki volume" or "Show listings for Doodles."

## ✅ Final Notes

> If your queries don't respond correctly, make sure all three sub-workflows are installed and published, not just saved.

🚀 Dominate the NFT market with AI-powered OpenSea intelligence—right from your Telegram inbox!
by Don Jayamaha Jr
Get deep insights into NFT market trends, sales data, and collection statistics—all powered by AI and OpenSea! This workflow connects GPT-4o-mini, the OpenSea API, and n8n automation to provide real-time analytics on NFT collections, wallet transactions, and market trends. It is ideal for NFT traders, collectors, and investors looking to make informed decisions based on structured data.

## How It Works

1. Receives user queries via Telegram, webhooks, or another connected interface.
2. Determines the correct API tool based on the request (e.g., collection stats, wallet transactions, event tracking).
3. Retrieves data from the OpenSea API (requires an API key).
4. Processes the information with an AI-powered analytics agent.
5. Returns structured insights in an easy-to-read format for quick decision-making.

## What You Can Do with This Agent

🔹 Retrieve NFT Collection Stats → Get floor price, volume, sales data, and market cap.
🔹 Track Wallet Activity → Analyze transactions for a given wallet address.
🔹 Monitor NFT Market Trends → Track historical sales, listings, bids, and transfers.
🔹 Compare Collection Performance → View side-by-side market data for different NFT projects.
🔹 Analyze NFT Transaction History → Check real-time ownership changes for any NFT.
🔹 Identify Market Shifts → Detect sudden spikes in demand, price changes, and whale movements.

## Example Queries You Can Use

✅ "Get stats for the Bored Ape Yacht Club collection."
✅ "Show me all NFT sales from the last 24 hours."
✅ "Fetch all NFT transfers for wallet 0x123...abc on Ethereum."
✅ "Compare the last 3 months of sales volume for Azuki and CloneX."
✅ "Track the top 10 wallets making the most NFT purchases this week."

## Available API Tools & Endpoints

1️⃣ Get Collection Stats → /api/v2/collections/{collection_slug}/stats (retrieve collection-wide market data)
2️⃣ Get Events → /api/v2/events (fetch global NFT sales, transfers, listings, bids, redemptions)
3️⃣ Get Events by Account → /api/v2/events/accounts/{address} (track transactions by wallet)
4️⃣ Get Events by Collection → /api/v2/events/collection/{collection_slug} (get sales activity for a collection)
5️⃣ Get Events by NFT → /api/v2/events/chain/{chain}/contract/{address}/nfts/{identifier} (retrieve historical transactions for a specific NFT)

## Set Up Steps

1. **Get an OpenSea API Key:** Sign up at OpenSea API and request an API key.
2. **Configure API Credentials in n8n:** Add your OpenSea API key under HTTP Header Authentication.
3. **Connect the Workflow to Telegram, Slack, or a Database (Optional):** Use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
4. **Deploy and Test:** Send a query (e.g., "Azuki latest sales") and receive instant NFT market insights!

Stay ahead in the NFT market—get real-time analytics with OpenSea's AI-powered analytics agent!
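To illustrate what the underlying HTTP call looks like, here is a minimal sketch of the Get Collection Stats request in JavaScript; the collection slug is an example, and the response shape noted in the comment should be verified against the current OpenSea v2 docs:

```javascript
// Minimal sketch: fetch collection-wide stats from the OpenSea v2 API.
// Supply your own key via OPENSEA_API_KEY; the slug below is an example.
const slug = 'boredapeyachtclub';

const res = await fetch(`https://api.opensea.io/api/v2/collections/${slug}/stats`, {
  headers: {
    accept: 'application/json',
    'x-api-key': process.env.OPENSEA_API_KEY, // HTTP header auth, as configured in n8n
  },
});
if (!res.ok) throw new Error(`OpenSea API error: ${res.status}`);

const stats = await res.json();
// The payload includes collection totals such as volume, sales, and floor price.
console.log(stats);
```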