by Vitali
Template Description

This n8n workflow manages Fastmail masked email addresses using the Fastmail API. It provides the following functionality:

- **Retrieve all masked emails**: Fetches all masked email addresses associated with the Fastmail account.
- **Create masked email**: Creates a new masked email with a specified state (pending, enabled, etc.).
- **Update masked email state**: Updates the state of a masked email, such as enabling, disabling, or deleting it.
- **Generate HTML template**: Constructs an HTML table to display the masked emails in a user-friendly format.

Steps to Make It Work

- **Webhook node**: Listens for incoming requests to manage masked emails. Needs Basic Authentication credentials to secure the endpoint.
- **Session node**: Sends a request to obtain session information from Fastmail's API. Requires an HTTP Header Auth credential with your Fastmail API token.
- **Switch node**: Routes the workflow based on the state of the incoming masked email request (pending, enabled, disabled, deleted).
- **HTTP Request nodes**: Handle the various Fastmail API calls for masked emails (get, set, update, delete). Each HTTP Request node needs an HTTP Header Auth credential that carries the Fastmail API token (a request-body sketch follows this section).
- **Set node**: Gathers the retrieved masked email list into an array for further processing.
- **HTML node**: Generates an HTML template to render the masked email addresses in a table.
- **Respond to Webhook node**: Sends the HTML table back to the client in response to the webhook request.

Needed Credentials

- **Fastmail Masked E-Mail Addresses**: An API token from Fastmail. Every HTTP call to Fastmail requires this credential for authentication.

Note

Ensure that authentication is correctly configured for both the API calls and the webhook. Use your actual Fastmail API credentials with the correct scope. The workflow assumes that the Fastmail API is correctly configured and reachable from your n8n instance. Update URLs and credential IDs according to your n8n configuration.
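For reference, below is a minimal sketch of the kind of JSON body an HTTP Request node sends to Fastmail's JMAP API when creating a masked email. The method and capability names follow Fastmail's publicly documented masked-email JMAP API; the accountId value, the client-side creation id, and the forDomain field are illustrative assumptions. Verify them against the current Fastmail API docs and the node configuration in this template.

```javascript
// Sketch: JSON body an HTTP Request node might POST to Fastmail's JMAP API
// endpoint to create a masked email. The accountId comes from the Session
// node's response; confirm method and capability names in the Fastmail docs.
const createMaskedEmailBody = (accountId, forDomain) => ({
  using: [
    "urn:ietf:params:jmap:core",
    "https://www.fastmail.com/dev/maskedemail",
  ],
  methodCalls: [
    [
      "MaskedEmail/set",
      {
        accountId,
        create: {
          // "new-masked-email" is an arbitrary client-side creation id
          "new-masked-email": { state: "enabled", forDomain },
        },
      },
      "0",
    ],
  ],
});

// Example usage (the Authorization header carries the Fastmail API token):
console.log(JSON.stringify(createMaskedEmailBody("u12345678", "example.com"), null, 2));
```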
by Omar Akoudad
The workflow is well-designed for CRM analysis with a robust quality-control mechanism. The dual-AI approach ensures reliable results, while the webhook integration makes it production-ready for real-time CRM data processing.

- **Dual-AI Architecture**: Uses DeepSeek Reasoner for analysis and DeepSeek Chat for verification.
- **Flexible Input**: Supports both manual testing and production webhook integration.
- **Quality Assurance**: Built-in verification system to ensure report accuracy.
- **Comprehensive Analysis**: Covers lead conversion, upsell metrics, agent ranking, and more.
- **Professional Output**: Generates structured markdown reports with actionable insights.
by siyad
This n8n workflow automates monitoring inventory levels for Shopify products, ensuring timely updates and efficient stock management. It alerts users when inventory levels are low or out of stock, integrating with Shopify's webhook system and providing notifications through Discord (easily swapped for any other messaging platform) with product images and details.

Workflow Overview

- **Webhook Node (Shopify Listener)**: Listens for Shopify's inventory level webhook and triggers the workflow whenever inventory levels are updated. The webhook is configured in Shopify settings, where the n8n URL is specified to receive inventory level updates.
- **Function Node (Inventory Check)**: Processes the data received from the Shopify webhook. It extracts the available inventory and the inventory item ID, and determines whether the inventory is low (fewer than 4 items) or out of stock (see the sketch at the end of this section).
- **Condition Nodes (Inventory Level Check)**: Two condition nodes follow the function node: one checks whether the inventory is low (low_inventory equals true), the other whether it is out of stock (out_of_stock equals true).
- **GraphQL Node (Product Details Retrieval)**: Connected to the condition nodes, this node fetches detailed product information via Shopify's GraphQL API: the product variant, title, current inventory quantity, and the first product image.
- **HTTP Node (Discord Notification)**: Sends a notification to Discord containing an embed with the product title, a warning message ("This product is running out of stock!"), the remaining inventory quantity, product variant details, and the product image, so relevant stakeholders are immediately informed about critical inventory levels.
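A minimal sketch of the Inventory Check logic described above, written as a plain function. The field names (available, inventory_item_id) follow Shopify's inventory_levels/update webhook payload, and the threshold of 4 items matches the description; everything else is illustrative.

```javascript
// Minimal sketch of the Inventory Check step: classify the incoming
// inventory level as low or out of stock. Adjust field names if your
// webhook payload differs.
function classifyInventory(payload) {
  const available = Number(payload.available ?? 0);
  return {
    inventory_item_id: payload.inventory_item_id,
    available,
    low_inventory: available > 0 && available < 4, // "low" threshold used in this workflow
    out_of_stock: available <= 0,
  };
}

// Example usage:
console.log(classifyInventory({ inventory_item_id: 808950810, available: 2 }));
// -> { inventory_item_id: 808950810, available: 2, low_inventory: true, out_of_stock: false }
```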
by Davide
This workflow intelligently routes user queries to the most suitable large language model (LLM) based on the type of request received in a chat environment. It uses structured classification and model selection to optimize both performance and cost-efficiency in AI-driven conversations, dynamically routing requests to specialized AI models based on content type.

Benefits

- **Smart Model Routing**: Reduces costs by using lighter models for general tasks and reserving heavier models for complex needs.
- **Scalability**: Easily expandable by adding more request types or LLMs.
- **Maintainability**: Clear logic separation between classification, model routing, and execution.
- **Personalization**: Can be integrated with session IDs for per-user memory, enabling personalized conversations.
- **Speed Optimization**: Fast models like GPT-4.1 mini or Gemini Flash are chosen for tasks where speed is a priority.

How It Works

1. Input Handling: The workflow starts with the "When chat message received" node, which triggers the process when a chat message arrives. The input includes the chat message (chatInput) and a session ID (sessionId).
2. Request Classification: The "Request Type" node uses an OpenAI model (gpt-4.1-mini) to classify the incoming request into one of four categories: general (general queries), reasoning (reasoning-based questions), coding (code-related requests), or google (queries requiring Google tools). The classification is structured using the "Structured Output Parser" node, which enforces a consistent output format (see the schema sketch at the end of this section).
3. Model Selection: The "Model Selector" node routes the request to one of four AI models based on the classification: Opus 4 (Claude 4 Sonnet) for coding requests, Gemini Thinking Pro for reasoning requests, GPT 4.1 mini for general requests, and Perplexity for search (Google-related) requests.
4. AI Processing: The selected model processes the request via the "AI Agent" node, which includes intermediate steps for complex tasks. The "Simple Memory" node retains session context using the provided sessionId, enabling multi-turn conversations.
5. Output: The final response is generated by the chosen model and returned to the user.

Set Up Steps

1. Configure Trigger: Ensure the "When chat message received" node is set up with the correct webhook ID to receive chat inputs.
2. Define Classification Logic: Adjust the prompt in the "Request Type" node to refine classification accuracy, and verify that the output schema in the "Structured Output Parser" node matches the expected categories (general, reasoning, coding, google).
3. Connect AI Models: Link each model node (Opus 4, Gemini Thinking Pro, GPT 4.1 mini, Perplexity) to the "Model Selector" node, and ensure the credentials (API keys) for each model are correctly configured in their respective nodes.
4. Set Up Memory: Configure the "Simple Memory" node to use the sessionId from the input for context retention.
5. Test Workflow: Send test inputs to verify classification and model routing, and check intermediate outputs (e.g., request_type) to ensure the correct model is selected.
6. Activate Workflow: Toggle the workflow to "Active" in n8n after testing.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
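For illustration, here is one way the "Structured Output Parser" schema for the four categories could look. This is a sketch, not the template's exact schema; verify it against the node configuration.

```javascript
// Illustrative JSON schema the Structured Output Parser node could enforce so
// the classifier always returns one of the four routing categories.
const requestTypeSchema = {
  type: "object",
  properties: {
    request_type: {
      type: "string",
      enum: ["general", "reasoning", "coding", "google"],
      description: "Category used by the Model Selector to pick an LLM",
    },
  },
  required: ["request_type"],
};

console.log(JSON.stringify(requestTypeSchema, null, 2));
```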
by keisha kalra
Try It Out!

This n8n template helps you create SEO-optimized blog posts for your business's website or for personal use. Whether you're managing a business or helping local restaurants improve their digital presence, this workflow helps you build SEO-optimized blog posts in seconds using Google Autocomplete and People Also Ask (PAA) data via SerpAPI.

Who Is It For?

This is helpful for anyone looking to SEO-optimize their own website or someone else's.

How It Works

You start with a list of blog inspirations in Google Sheets (e.g., "Best Photo Session Spots"). The workflow only processes rows where the "Status" column is not marked as "done", though you can remove this condition if you'd like to process all rows each time.

The workflow pulls Google Autocomplete suggestions and PAA questions using:
- A custom-built SEO API I deployed via Render (for Google Autocomplete + PAA),
- SerpAPI (for additional PAA data).

These search insights are merged. For example, if your blog idea is "Photo Session Spots," the workflow gathers related Google search phrases and questions users are asking. Then GPT-4 drafts a full blog post based on this data, and the finished post is saved back into your Google Sheet.

How To Use

1. Fill out the "Blog Inspiration" column in your Google Sheet with the topics you want to write about.
2. Update the OpenAI prompt in the ChatGPT node to match your tone or writing style. (Tip: Add a system prompt with context about your business or audience.)
3. Trigger the workflow manually, or replace the manual trigger with a cron schedule, webhook, or other event.

Requirements

- A SerpAPI account to fetch PAA questions
- An OpenAI account for ChatGPT
- Access to Google Sheets and n8n

How To Set Up

Your Google Sheet should have three columns: "Blog Inspiration"; "Status", which is set to "done" when a post has been generated; and "Blog Draft", which is filled automatically by the workflow.

To use the SerpAPI HTTP Request node: 1. Drag in an HTTP Request node. 2. Set the method and URL depending on how you're using SerpAPI: use POST to run a live query on each request, or GET to fetch from a static dataset (cheaper if reusing the same data). 3. Add query parameters for your SerpAPI key and input values. 4. Test the node. Refer to the n8n documentation for more help: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/.

The "Autocomplete" node connects to a custom web service I built and deployed using Render. I wrote a Python script (hosted on GitHub) that pulls live Google Autocomplete suggestions and PAA questions for any topic you send. The script was turned into an API and deployed as a public web service via Render. Anyone can use it by sending a POST request to https://seo-api2.onrender.com/get-seo-data (the URL is already in the node); a hedged request example follows this section. Since this is hosted on Render's free tier, the service may "go to sleep" if it hasn't been used in the past ~15 minutes; when that happens, the first request can take 10–30 seconds to respond while it wakes up.

Happy Automating!
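Below is a hedged example of calling the Render-hosted SEO endpoint mentioned above from plain JavaScript. The URL comes from the template; the request field name (keyword) and the response shape are assumptions; check the Autocomplete node or the author's GitHub script for the exact contract.

```javascript
// Sketch: POST the blog topic to the custom SEO API and read back search
// insights. Field names here are assumed, not confirmed by the template.
async function fetchSeoData(keyword) {
  const res = await fetch("https://seo-api2.onrender.com/get-seo-data", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ keyword }),
  });
  if (!res.ok) throw new Error(`SEO API returned ${res.status}`);
  return res.json(); // e.g. { autocomplete: [...], people_also_ask: [...] }
}

// Note: on Render's free tier the first call may take 10-30 seconds while the
// service wakes up.
fetchSeoData("photo session spots").then(console.log).catch(console.error);
```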
by Tom
This workflow automatically deletes user data from different apps/services when a specific slash command is issued in Slack. Watch this talk and demo to learn more about this use case. The demo uses Slack, but Mattermost is Slack-compatible, so you can also connect Mattermost to this workflow.

Prerequisites

- Accounts and credentials for the apps/services you want to use.
- Some basic knowledge of JavaScript.

Nodes

- A Webhook node triggers the workflow when a Slack slash command is issued.
- IF nodes confirm Slack's verification token and verify that the data has the expected format.
- A Set node simplifies the payload.
- A Switch node chooses the correct path for the operation to perform.
- Respond to Webhook nodes send responses back to Slack.
- Execute Workflow nodes call sub-workflows tailored to deleting data from each individual service.
- A Function node, a Crypto node, and an Airtable node generate and store a log entry containing a hash value (see the sketch after this section).
- An HTTP Request node sends the final response back to Slack.
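As a rough sketch of the logging step (Function + Crypto nodes), the snippet below hashes the user identifier so the audit log can prove a deletion request was handled without storing the identifier itself. Field names are illustrative, not taken from the template.

```javascript
// Sketch: build an audit-log entry with a SHA-256 hash of the user id,
// ready to be written to Airtable. Runs in Node.js or an n8n Code node.
const { createHash } = require("crypto");

function buildLogEntry(userId, service) {
  return {
    service,
    requestedAt: new Date().toISOString(),
    userHash: createHash("sha256").update(userId).digest("hex"),
  };
}

// Example usage:
console.log(buildLogEntry("U024BE7LH", "slack"));
```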
by lin@davoy.tech
Are you looking to create a counseling chatbot that provides emotional support and mental health guidance through the LINE messaging platform? This guide will walk you through connecting LINE with powerful AI language models like GPT-4 to build a chatbot that supports users in navigating their emotions, offering 24/7 conversational therapy and accessible mental health resources. By leveraging LINE's webhook integration and Azure OpenAI, this template allows you to design a chatbot that is both empathetic and efficient, ensuring users receive timely and professional responses. Whether you're a developer, counselor, or business owner, this guide will help you create a customizable counseling chatbot tailored to your audience's needs.

Who Is This Template For?

- Developers who want to integrate AI-powered chatbots into the LINE platform for mental health applications.
- Counselors & therapists looking to expand their reach and provide automated emotional support to clients outside of traditional sessions.
- Businesses & organizations focused on improving mental health accessibility and offering innovative solutions to their users.
- Educators & nonprofits seeking tools to provide free or low-cost counseling services to underserved communities.

How Does It Work?

1. A LINE webhook receives the new message.
2. A loading animation is sent in LINE.
3. The workflow checks whether the input is text.
4. The text is sent as the prompt to the chat model (GPT-4o).
5. The reply is sent back to the user (you'll need an Edit Fields node to format it before replying).

Pre-Requisites

- Access to the LINE Developers Console.
- An Azure OpenAI account with the necessary credentials.

Set-Up

To receive messages from LINE, configure your webhook: set up a webhook in the LINE Developers Console, then copy the Webhook URL from the Line Chatbot node and paste it into the LINE Console. Remember to remove any 'test' part of the URL when moving to production. The loading animation reassures users that the system is processing their request; authorize the call using header authorization.

Message Handling: Use the Check Message Type IsText? node to verify that the incoming message is text. If it is, proceed with ChatGPT processing; otherwise, send a reply indicating that non-text inputs are not supported.

AI Agent Configuration: Define the system message within the AI Agent node to guide the conversation based on the desired interaction principles, and connect the Azure OpenAI Chat Model to the AI Agent.

Formatting Responses: Ensure responses are properly formatted before sending them back to the user.

Reply Message: Use the ReplyMessage - Line node to send the formatted response, with proper header authorization using a Bearer token. A hedged sketch of the two LINE API calls follows this section.
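For orientation, here is a hedged sketch of the two LINE Messaging API calls used in this template (loading animation and reply), as they might look outside of n8n. The endpoints follow LINE's public Messaging API documentation; the loadingSeconds value and helper names are illustrative; confirm the details in the LINE docs before relying on them.

```javascript
// Sketch of the LINE calls behind the "loading animation" and "ReplyMessage"
// HTTP Request nodes. The channel access token is sent as a Bearer token.
const LINE_TOKEN = process.env.LINE_CHANNEL_ACCESS_TOKEN;

async function showLoading(chatId) {
  await fetch("https://api.line.me/v2/bot/chat/loading/start", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${LINE_TOKEN}` },
    body: JSON.stringify({ chatId, loadingSeconds: 20 }),
  });
}

async function replyText(replyToken, text) {
  await fetch("https://api.line.me/v2/bot/message/reply", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${LINE_TOKEN}` },
    body: JSON.stringify({ replyToken, messages: [{ type: "text", text }] }),
  });
}
```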
by Eumentis
What It Does

This workflow runs automatically when a new email is received in the user's Gmail account. It sends the email content to OpenAI (GPT-4.1-mini), which intelligently determines whether the message requires action. If the email is identified as actionable, the workflow sends a structured alert message to the user in Microsoft Teams. This keeps the user informed of high-priority emails in real time without the need to manually check every message. The workflow does not log any execution data, ensuring that email content remains secure and unreadable by others.

How It Works

- **Trigger on New Email**: The workflow is triggered automatically when a new email is received in the user's Gmail account.
- **Email Evaluation with OpenAI**: The email content is sent to GPT-4.1-mini, which evaluates whether the message requires user action.
- **Filter Actionable Emails**: Only emails identified as actionable by the AI proceed through the rest of the workflow.
- **Send Notification to Teams**: For actionable emails, the workflow sends a structured alert message to the user in a Microsoft Teams chat via a Power Automate webhook.

Prerequisites

- Gmail IMAP credentials
- OpenAI API key
- Microsoft Teams webhook URL
- Power Automate flow to send a message to a Teams chat

How to Set It Up

1. Set Up Power Automate Workflow

1.1 Open the Workflow (Power Automate) app in Microsoft Teams. If it's not already added, go to Apps → search "Workflow" → click Add → open it.

1.2 Create a New Flow: Click New Flow → select Create from blank.

1.3 Add a Trigger: When a Teams webhook request is received. In the trigger setup, set "Who can trigger the flow?" to Anyone. After saving the flow, a webhook URL will be generated — this URL will be used in the n8n workflow.

1.4 Add Action: Parse JSON. Set Content to: Body. Use the following schema: { "type": "object", "properties": { "from": { "type": "string" }, "receivedAt": { "type": "string" }, "subject": { "type": "string" }, "message": { "type": "string" } } }

1.5 Add Action: Get an @mention token for a user. Set the User field to the Microsoft Teams email address of the person to notify (e.g. yourname@domain.com).

1.6 Add Action: Post message in a chat or channel. Configure the following: Post as: Flow bot; Post in: Chat with Flow bot; Recipient: your Microsoft Teams email address (e.g., yourname@domain.com). Paste the following into the Message field (in code view); note that the placeholders must match the property names defined in the Parse JSON schema above: Hello @{outputs('Get_an_@mention_token_for_a_user')?['body/atMention']}, You have received a new email that requires your attention: From: @{body('Parse_JSON')?['from']} Received On: @{body('Parse_JSON')?['receivedAt']} Subject: @{body('Parse_JSON')?['subject']} Please review the message at your earliest convenience. Click here to search this mail in your mailbox

1.7 Save and Enable the Flow: Click Save and turn the flow On. The webhook URL is now active and available in the first trigger step; copy it to use in n8n.

Need help with the setup? Feel free to contact us.

2. Configure IMAP Email Trigger

First, enable 2-Step Verification in your Google Account and generate an App Password for n8n. Then, in the IMAP node → Create Credential, connect using the following details:
• User: your Gmail address
• Password: the App Password
• Host: imap.gmail.com
• Port: 993
• SSL/TLS: Enabled
Follow the n8n documentation to complete the setup.

3. Configure OpenAI Integration

Add your OpenAI API key as a credential in n8n.
Follow the n8n documentation to complete the setup.

4. Set Up HTTP Request to Trigger Power Automate Workflow

Paste the webhook URL generated by the Power Automate flow into the URL field of the HTTP Request node. The JSON body it sends should match the Parse JSON schema from step 1.4 (see the example payload at the end of this section).

5. Disable Execution Logging for Privacy

To ensure that email content is not stored in logs and remains fully secure, disable execution logging in n8n: in the n8n workflow editor, click the three dots (•••) in the top right corner and select Settings. In the settings panel, set "Save manual executions" to Do not save, set "Save successful production executions" to Do not save, and set "Save failed production executions" to Do not save if you also want to avoid logging errors. Save the changes. Refer to the official n8n documentation for more details.

6. Activate the Workflow

Set the workflow status to Active in n8n so it runs automatically whenever a new email is received in Gmail.

Need Help? Contact us for support and custom workflow development.
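For step 4, here is a minimal sketch of the JSON body the HTTP Request node can send to the Power Automate webhook, matching the Parse JSON schema defined in step 1.4. All values are illustrative.

```javascript
// Sketch: body for the HTTP Request node that triggers the Power Automate
// flow. Property names mirror the Parse JSON schema (from, receivedAt,
// subject, message); values would normally come from the Gmail trigger.
const teamsAlertBody = {
  from: "sender@example.com",
  receivedAt: "2024-05-06T09:30:00Z",
  subject: "Contract renewal - action required",
  message: "Please review and sign the attached renewal by Friday.",
};

console.log(JSON.stringify(teamsAlertBody, null, 2));
```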
by Charles
Modern AI systems are powerful but pose privacy risks when handling sensitive data. Organizations need AI capabilities while ensuring:

✅ Sensitive data never leaves secure environments
✅ Compliance with regulations (GDPR, HIPAA, PCI, SOX)
✅ Real-time decision making about data sensitivity
✅ Comprehensive audit trails for regulatory review

The Concept: Intelligent Data Classification + Smart Routing

The goal of this concept is to lay the foundations for safe and compliant use of LLMs in agentic workflows by automatically detecting sensitive data, applying sanitization rules, and intelligently routing requests through secure processing channels. The workflow analyzes the user's chat or webhook input and attempts to detect PII using the Enhanced PII Pattern Detector. If PII is detected, the input is processed through a series of compliance, auditing, and security steps that log and sanitize the request before any LLM is called. A conceptual sketch of the detection and routing logic appears at the end of this section.

Why Multi-Tier Routing?

Traditional systems use binary decisions (sensitive/not sensitive). Our 3-tier approach provides:

✅ Granular Security: Critical PII gets maximum protection
✅ Performance Optimization: Clean data gets full cloud capabilities
✅ Cost Efficiency: Expensive local processing only when needed
✅ User Experience: Maintains conversational flow across security levels

Why Context-Aware Detection?

Regex patterns alone miss contextual sensitivity. Our approach:

✅ Catches Intent: A "bank account" discussion is sensitive even without account numbers
✅ Reduces False Negatives: Medical discussions stay secure even without explicit medical IDs
✅ Proactive Protection: Identifies sensitive contexts before PII is shared
✅ Compliance Alignment: Matches how regulations actually define sensitive data

Why Risk Scoring vs. Binary Classification?

Binary PII detection creates artificial boundaries. Risk scoring provides:

✅ Nuanced Decisions: Multiple low-risk patterns can aggregate to high risk
✅ Adaptive Thresholds: Organizations can adjust sensitivity based on their needs
✅ Better UX: Users aren't unnecessarily restricted in low-risk scenarios
✅ Audit Transparency: Clear reasoning for every routing decision

Why Comprehensive Monitoring?

Privacy systems require trust and verification:

✅ Compliance Proof: Audit trails demonstrate regulatory compliance
✅ Performance Optimization: Identify bottlenecks and improve efficiency
✅ Security Validation: Ensure no sensitive data leakage occurs
✅ Operational Insights: Understand usage patterns and system health

How to Install

All you need for this workflow are credentials for your LLM providers, such as Ollama, OpenRouter, OpenAI, Anthropic, etc. The workflow is customizable and lets you define the best LLM and storage/memory solutions for your specific use case.
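To make the idea concrete, here is a conceptual sketch of regex-plus-context risk scoring with three-tier routing. The patterns, weights, and thresholds are illustrative assumptions and are not the rules used by the template's Enhanced PII Pattern Detector.

```javascript
// Sketch: combine explicit PII patterns with context keywords into a risk
// score, then map the score to one of three processing tiers.
const PII_PATTERNS = [
  { name: "email", regex: /[\w.+-]+@[\w-]+\.[\w.]+/, weight: 2 },
  { name: "us_ssn", regex: /\b\d{3}-\d{2}-\d{4}\b/, weight: 5 },
  { name: "credit_card", regex: /\b(?:\d[ -]?){13,16}\b/, weight: 5 },
];
const CONTEXT_KEYWORDS = [
  { name: "financial_context", regex: /bank account|routing number|iban/i, weight: 2 },
  { name: "medical_context", regex: /diagnosis|prescription|patient/i, weight: 2 },
];

function scoreInput(text) {
  let score = 0;
  const matched = [];
  for (const rule of [...PII_PATTERNS, ...CONTEXT_KEYWORDS]) {
    if (rule.regex.test(text)) {
      score += rule.weight;
      matched.push(rule.name);
    }
  }
  // Three-tier routing instead of a binary sensitive/not-sensitive decision
  const tier = score >= 5 ? "local_only" : score >= 2 ? "sanitize_then_cloud" : "cloud_direct";
  return { score, matched, tier };
}

console.log(scoreInput("My SSN is 123-45-6789 and I need help with my bank account."));
// -> { score: 7, matched: [ 'us_ssn', 'financial_context' ], tier: 'local_only' }
```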
by Abdullahi Ahmed
RAG AI Agent for Documents in Google Drive → Pinecone → OpenAI Chat (n8n workflow)

Short Description

This n8n workflow implements a Retrieval-Augmented Generation (RAG) pipeline plus an AI agent, allowing users to drop documents into a Google Drive folder and then ask questions about them via a chatbot. New files are indexed automatically into a Pinecone vector store using OpenAI embeddings; the AI agent loads relevant chunks at query time and answers using context plus memory.

Why This Workflow Matters / What Problem It Solves

Large language models (LLMs) are powerful, but they lack up-to-date, domain-specific knowledge. RAG augments the LLM with relevant external documents, reducing hallucination and enabling precise answers. (Pinecone) This workflow automates the ingestion, embedding, storage, retrieval, and chat logic with minimal manual work. It's modular: you can swap data sources, vector DBs, or LLMs (with some adjustments). It leverages the built-in AI Agent node in n8n to tie all the parts together. (n8n)

How to Get the Required Credentials

| Service | Purpose in Workflow | Setup Link | What you need / steps |
| --- | --- | --- | --- |
| Google Drive (OAuth2) | Trigger new file events & download the file | https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/ | Create a Google Cloud OAuth app, grant it Drive scopes, get the client ID & secret, configure the redirect URI, and paste them into n8n credentials. |
| Pinecone | Vector database for embeddings | https://docs.n8n.io/integrations/builtin/credentials/pinecone/ | Sign up at Pinecone, create an index in the dashboard, get the API key + environment, and paste them into an n8n credential. |
| OpenAI | Embeddings + chat model | https://docs.n8n.io/integrations/builtin/credentials/openai/ | Log in to OpenAI, generate a secret API key, and paste it into n8n credentials. |

You'll configure these under n8n → Credentials → New Credential, matching the credential names referenced in your workflow nodes.

Detailed Walkthrough: How the Workflow Works

Here's a step-by-step view of what happens inside the workflow (matching the JSON):

1. Google Drive Trigger: Watches a specified folder in Google Drive. Whenever a new file appears (fileCreated event), the workflow is triggered (polling every minute). You must set the folder ID (in "folderToWatch") to the Drive folder you want to monitor.

2. Download File: Takes the file ID from the trigger and downloads the file content (binary).

3. Indexing Path: Embeddings + Storage (runs only when new files arrive). The file is sent to the Default Data Loader node (via the Recursive Character Text Splitter) to break it into chunks with overlap, so context is preserved (a small split-with-overlap sketch appears near the end of this section). Each chunk is fed into Embeddings OpenAI to convert the text into embedding vectors. Then Pinecone Vector Store (insert mode) ingests the vectors plus text metadata into your Pinecone index. This keeps your vector store up to date with the files you drop into Drive.

4. Chat / Query Path (triggered by user chat via webhook): When a chat message arrives via When Chat Message Received, it is passed to the AI Agent node.
Before generation, the AI Agent calls the Pinecone Vector Store1 node, set to "retrieve-as-tool" mode, which runs a vector-based retrieval using the embedding of the user query. The relevant text chunks are pulled in as tool output/context. The OpenAI Chat Model node is linked as the language model for the agent, and the Simple Memory node provides conversational memory (keeping history across messages). The agent combines the retrieved context, memory, and user input and instructs the model to produce a response.

5. Connections / Flow Logic: The Embeddings OpenAI node's output is wired into Pinecone Vector Store (insert) and also into Pinecone Vector Store1, so the same embeddings can be used for retrieval. The AI Agent has tool access to Pinecone retrieval and memory. The Download File node triggers the insert path, and the When chat message received node triggers the agent path.

Similar Workflows / Inspirations & Comparisons

To understand how this workflow fits into what's already out there, here are a few analogues:

- n8n Blog: "Build a custom knowledge RAG chatbot" — a workflow that ingests documents from external sources, indexes them in Pinecone, and responds to queries via n8n + an LLM. (n8n Blog)
- Index Documents from Google Drive to Pinecone — nearly identical for the ingestion part: trigger on Drive, split, embed, upload. (n8n)
- Build & Query RAG System with Google Drive, OpenAI, Pinecone — shows the full RAG + chat logic, same pattern. (n8n)
- Chat with GitHub API Documentation (RAG) — demonstrates converting an API spec into chunks, embedding, retrieving, and chatting. (n8n)
- Community tutorials and forums discuss using the AI Agent node with tools like Pinecone, and how the RAG part is often built as a sub-workflow feeding an agent. (n8n Community)

What sets this workflow apart is the explicit combination: Google Drive → automatic ingestion → chat agent with tool integration + memory. Many templates show either ingestion or chat, but fewer combine them cleanly with n8n's AI Agent.

Suggested Published Description (you can paste/adjust)

> RAG AI Agent for Google Drive Documents (n8n workflow)
>
> This workflow turns a Google Drive folder into a live, queryable knowledge base. Drop PDF, docx, or text files into the folder → new documents are automatically indexed into a Pinecone vector store using OpenAI embeddings → you can ask questions via a webhook chat interface and the AI agent will retrieve relevant text, combine it with memory, and answer in context.
>
> Credentials needed
>
> * Google Drive OAuth2 (see: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/)
> * Pinecone (see: https://docs.n8n.io/integrations/builtin/credentials/pinecone/)
> * OpenAI (see: https://docs.n8n.io/integrations/builtin/credentials/openai/)
>
> How it works
>
> 1. The Drive trigger picks up new files
> 2. Download, split, embed, insert into Pinecone
> 3. The chat webhook triggers the AI Agent
> 4. The agent retrieves relevant chunks + memory
> 5. The agent uses the OpenAI model to craft the answer
>
> This is built on the core RAG pattern (ingest → retrieve → generate) and enhanced by n8n's AI Agent node for clean tool integration.
>
> Inspiration & context: This approach follows best practices from existing n8n RAG tutorials and templates, such as the "Index Documents from Google Drive to Pinecone" ingestion workflow and the "Build & Query RAG System" templates. (n8n)
>
> You're free to swap out the data source (e.g. Dropbox, S3) or vector DB (e.g. Qdrant) as long as you adjust the relevant nodes.
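As a complement to step 3 of the walkthrough above, here is a minimal sketch of the split-with-overlap idea applied by the Recursive Character Text Splitter before embedding. The chunk size and overlap values are illustrative, not the template's settings.

```javascript
// Sketch: split text into overlapping chunks so that context at chunk
// boundaries is preserved before each chunk is embedded and upserted.
function splitWithOverlap(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// Each chunk is then embedded (Embeddings OpenAI) and stored in Pinecone
// together with metadata such as the source file name.
console.log(splitWithOverlap("a".repeat(2500)).map(c => c.length)); // -> [1000, 1000, 900]
```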
[1]: https://www.pinecone.io/learn/retrieval-augmented-generation/ "Retrieval-Augmented Generation (RAG) - Pinecone"
[2]: https://n8n.io/integrations/agent/ "AI Agent integrations | Workflow automation with n8n"
[3]: https://blog.n8n.io/rag-chatbot/ "Build a Custom Knowledge RAG Chatbot using n8n"
[4]: https://n8n.io/workflows/4552-index-documents-from-google-drive-to-pinecone-with-openai-embeddings-for-rag/ "Index Documents from Google Drive to Pinecone with OpenAI ... - N8N"
[5]: https://n8n.io/workflows/4501-build-and-query-rag-system-with-google-drive-openai-gpt-4o-mini-and-pinecone/ "Build & Query RAG System with Google Drive, OpenAI GPT-4o-mini ..."
[6]: https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/ "Chat with GitHub API Documentation: RAG-Powered Chatbot ... - N8N"
by Akhil Varma Gadiraju
📬 N8N Contact Form Workflow: Capture, Notify via Email, and Redirect with Confirmation/Error Handling

This n8n workflow handles contact form submissions through a customizable form, sends an email notification to support, and redirects users based on the submission outcome. It is ideal for embedding a functional "Contact Us" form on websites with automated email notifications.

✨ Features

- Collects first name, last name, email, company name, and a message
- Sends a formatted email notification to the support team
- Displays a success or error confirmation to the user
- Customizable UI and form behavior
- Error fallback handling with user-friendly feedback

🧩 Nodes Overview

1. On form submission (Trigger) – **Type:** formTrigger. Displays the contact form to users and triggers the workflow on submission.
2. Send Email to Support – **Type:** emailSend. Sends an HTML email to a support address with the form details, using an SMTP credential.
3. If Email Sent – **Type:** if. Checks whether the email was sent successfully by testing for the existence of messageId (see the sketch after this section).
4. Confirmation Form – **Type:** form. Displays a "Thank You" HTML message after a successful submission.
5. Redirect Form – **Type:** form. Redirects the user to a specified URL (e.g., a LinkedIn profile).
6. Form (Error) – **Type:** form. Displays an error message if email delivery fails.
7. NoOp Nodes – **End (Success)** and **End (Error)** mark the flow terminations cleanly.

⚙️ Customization Options

- Change the form fields, title, or descriptions in the formTrigger node.
- Update the email body or subject in the emailSend node.
- Redirect to a different URL by editing the Redirect Form node.
- Modify the success and error UI with HTML content in the Confirmation Form and Form (Error) nodes.

🧠 Use Cases

- Website "Contact Us" form integration
- Lead generation forms for businesses
- Customer service inquiry collection
- Feedback or support ticket systems

🚀 How to Use

1. Import this workflow into your n8n instance.
2. Configure SMTP credentials for the emailSend node.
3. Publish the formTrigger endpoint (e.g., /contact-us) publicly or embed it in your website.
4. Test a submission and confirm email delivery and redirects.

🔐 Notes

- Ensure SMTP credentials are correctly configured in n8n.
- Make sure your n8n webhook URLs are reachable from your website or frontend.

Made with ❤️ using n8n by Akhil.
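For clarity, the success check behind the "If Email Sent" node boils down to testing whether the emailSend node returned a messageId. In n8n this is typically an expression such as {{ $json.messageId }} with an "is not empty" condition; the snippet below shows the same logic in plain JavaScript purely as an illustration.

```javascript
// Sketch: treat the SMTP send as successful when a non-empty messageId
// is present on the emailSend node's output.
function emailWasSent(sendResult) {
  return typeof sendResult?.messageId === "string" && sendResult.messageId.length > 0;
}

console.log(emailWasSent({ messageId: "<abc123@mail.example>" })); // true
console.log(emailWasSent({}));                                     // false
```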
by vinci-king-01
Product Price Monitor with Pushover and Baserow

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple e-commerce sites for selected products, analyzes weekly pricing trends, stores historical data in Baserow, and sends an instant Pushover notification when significant price changes occur. It is ideal for retailers who need to track seasonal fluctuations and optimize inventory or pricing strategies.

Pre-conditions / Requirements

Prerequisites:
- An active n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed
- At least one publicly accessible webhook URL (for on-demand runs)
- A Baserow database with a table prepared for product data
- Pushover account and registered application

Required Credentials:
- **ScrapeGraphAI API Key** – Enables web-scraping capabilities
- **Baserow Personal API Token** – Allows read/write access to your table
- **Pushover User Key & API Token** – Sends mobile/desktop push notifications
- (Optional) HTTP Basic token or API keys for any private e-commerce endpoints you plan to monitor

Baserow Table Specification

| Field Name | Type | Description |
|------------|------|-------------|
| Product ID | Number | Internal ID or SKU |
| Name | Text | Product title |
| URL | URL | Product page |
| Price | Number | Current price (float) |
| Currency | Single select | Currency code (USD, EUR, etc.) |
| Last Seen | Date/Time | Last price check |
| Trend | Number | 7-day % change |

How It Works

Key Steps:
1. **Webhook Trigger**: Manually or externally trigger the weekly price-check run.
2. **Set Node**: Define an array of product URLs and metadata.
3. **Split In Batches**: Process products one at a time to avoid rate limits.
4. **ScrapeGraphAI Node**: Extract the current price, title, and availability from each URL.
5. **If Node**: Determine whether the price has changed by more than ±5 % since the last entry (see the delta sketch at the end of this section).
6. **HTTP Request (Trend API)**: Retrieve seasonal trend scores (optional).
7. **Merge Node**: Combine scrape data with trend analysis.
8. **Baserow Nodes**: Upsert the latest record and fetch historical data for comparison.
9. **Pushover Node**: Send an alert when significant price movement is detected.
10. **Sticky Notes**: Documentation and inline comments for maintainability.

Set Up Steps

Setup time: 15–25 minutes

1. Install the community node: In n8n, go to "Settings → Community Nodes" and install ScrapeGraphAI.
2. Create the Baserow table: Match the field structure shown above.
3. Obtain credentials: the ScrapeGraphAI API key from your dashboard, the Baserow personal token (/account/settings), and the Pushover user key & API token.
4. Clone the workflow: Import this template into n8n.
5. Configure credentials in nodes: Open each ScrapeGraphAI, Baserow, and Pushover node and select/enter the appropriate credential.
6. Add product URLs: Open the first Set node and replace the example array with your actual product list.
7. Adjust thresholds: In the If node, change the 5 value if you want a higher or lower alert threshold.
8. Test run: Execute the workflow manually; verify the Baserow rows and the Pushover notification.
9. Schedule: Add a Cron trigger or external scheduler to run weekly.

Node Descriptions

Core Workflow Nodes:
- **Webhook** – Entry point for manual or API-based triggers.
- **Set** – Holds the array of product URLs and meta fields.
- **SplitInBatches** – Iterates through each product to prevent request spikes.
- **ScrapeGraphAI** – Scrapes price, title, and currency from product pages.
- **If** – Compares the new price vs. the previous price in Baserow.
- **HTTP Request** – Calls a trend API (e.g., Google Trends) to get a seasonal score.
- **Merge** – Combines scraping results with trend data.
- **Baserow (Upsert & Read)** – Writes fresh data and fetches the historical price for comparison.
- **Pushover** – Sends a formatted push notification with the price delta.
- **StickyNote** – Documents purpose and hints within the workflow.

Data Flow:
- Webhook → Set → SplitInBatches → ScrapeGraphAI
- ScrapeGraphAI → If
- True branch → HTTP Request → Merge → Baserow Upsert → Pushover
- False branch → Baserow Upsert

Customization Examples

Change the notification channel to Slack:

    // Replace the Pushover node with a Slack node using fields like these
    {
      "channel": "#pricing-alerts",
      "text": `🚨 ${$json["Name"]} changed by ${$json["delta"]}% – now ${$json["Price"]} ${$json["Currency"]}`
    }

Additional data enrichment (stock status):

    // Add to ScrapeGraphAI's selector map
    {
      "stock": { "selector": ".availability span", "type": "text" }
    }

Data Output Format

The workflow outputs structured JSON data:

    {
      "ProductID": 12345,
      "Name": "Winter Jacket",
      "URL": "https://shop.example.com/winter-jacket",
      "Price": 79.99,
      "Currency": "USD",
      "LastSeen": "2024-11-20T10:34:18.000Z",
      "Trend": 12,
      "delta": -7.5
    }

Troubleshooting

Common Issues:
- Empty scrape result – Check whether the product page changed its HTML structure; update the CSS selectors in ScrapeGraphAI.
- Baserow "Row not found" errors – Ensure Product ID (or another unique field) is set as the primary key for the upsert.

Performance Tips:
- Limit the batch size to 5–10 URLs to avoid IP blocking.
- Use n8n's built-in proxy settings if scraping sites with geo-restrictions.

Pro Tips:
- Store historical JSON responses in a separate Baserow table for deeper analytics.
- Standardize currency symbols to avoid false change detections.
- Couple this workflow with an n8n dashboard to visualize price trends in real time.
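Finally, an illustrative sketch of the price-delta check behind the If node: compare the freshly scraped price with the last value stored in Baserow and flag changes beyond the ±5 % threshold mentioned above. The function and field names are assumptions for illustration.

```javascript
// Sketch: compute the percentage change between the stored and the newly
// scraped price, and flag it as significant when it exceeds the threshold.
function priceDelta(previousPrice, currentPrice, thresholdPct = 5) {
  const delta = ((currentPrice - previousPrice) / previousPrice) * 100;
  return {
    delta: Number(delta.toFixed(1)),
    significant: Math.abs(delta) > thresholdPct,
  };
}

// Example usage, matching the sample output above:
console.log(priceDelta(86.48, 79.99)); // -> { delta: -7.5, significant: true }
```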