by moosa
This workflow tracks new Shopify orders in real time, logs them to a Google Sheet, and sends a structured order summary to a Discord channel. Perfect for keeping your team and records updated without checking your Shopify admin manually.

✅ Features:
- **Trigger**: Listens to the orders/create event via the Shopify Trigger node
- **Authentication**: Uses a Shopify Access Token, generated via a custom/private Shopify app
- **Google Sheets Logging**: Automatically appends order details to a sheet with the following columns: Order Number, Customer Email, Customer Name, City, Country, Order Total, Currency, Subtotal, Tax, Financial Status, Payment Gateway, Order Date, Line Item Titles, Line Item Prices, Order Link
- **Discord Alerts**: Sends a clean and formatted summary to your Discord server
- **Line Item Extraction**: Breaks down item titles and prices into a readable format using code (see the sketch below)
- **Multi-currency Compatible**: Displays the currency type dynamically (not hardcoded)

🧩 Nodes Used:
- Shopify Trigger (Access Token)
- Code (extract line_item_titles and line_item_prices)
- Google Sheets (Append row)
- Code (JavaScript, format Discord message)
- Discord (Send message)

📒 Sticky Notes:
- 🛠️ Use your own Google Sheet link and Discord webhook
- 🔄 You can duplicate and adapt this for orders/updated or refunds/create events
- 🔐 No hardcoded API keys; credentials are managed via the UI

🖼️ Sample Outputs

📄 Google Sheet Entry

| Order Number | Customer Email | Customer Name | City | Country | Order Total | Currency | Subtotal | Tax | Financial Status | Payment Gateway | Order Date | Line Item Titles | Line Item Prices | Order Link |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1003 | abc123@gmail.com | test name | test city | Pakistan | 2522.77 | PKR | 2174.8 | 347.97 | paid | bogus | 2025-07-31T13:45:35-04:00 | Selling Plans Ski Wax, The Complete Snowboard, The Complete Snowboard, The Collection Snowboard: Liquid | 24.95, 699.95, 699.95, 749.95 | View Order |

💬 Discord Message Preview

> Tested with Shopify's "Bogus" gateway; it works without real card info in a development store.
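A minimal sketch of what the line-item extraction Code node might look like. It assumes the Shopify Trigger outputs the standard order payload with a line_items array containing title and price fields; everything else here is illustrative rather than the template's exact code.

```javascript
// n8n Code node (Run Once for Each Item) - illustrative sketch, not the template's exact code.
// Assumes the incoming item is a standard Shopify order payload with a line_items array.
const order = $input.item.json;
const lineItems = order.line_items || [];

return {
  json: {
    order_number: order.order_number,
    line_item_titles: lineItems.map(item => item.title).join(', '),
    line_item_prices: lineItems.map(item => item.price).join(', '),
  },
};
```

The joined strings map directly onto the Line Item Titles and Line Item Prices columns in the sheet above.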
by LEDGERS
🤖 AI Contact Creator for LEDGERS (Works with Any Trigger)

### Before using this template:
#### 👉 Search for LEDGERS in the nodes list and install it from Community Nodes (required for this workflow to run).

🔧 What This Workflow Does:
This smart n8n template automatically creates contacts in LEDGERS using AI, triggered by any node (like Google Sheets, Webhook, Airtable, Forms, etc.). It's designed for teams who maintain contact data across platforms and want to auto-parse raw data using AI and sync it to LEDGERS without manual entry.

⚙️ Flow Overview:
- Trigger Node – Can be anything: Google Sheets, Webhook, API call, etc.
- Chat Model (Claude / GPT-4o) – Uses AI to generate structured contact data from raw inputs.
- Structured Output Parser – Parses the AI response into clean JSON (see the sketch below).
- Form Loop & Iteration – Loops through fields in the structured output.
- Create a Contact – Sends the data to LEDGERS via API.
- LEDGERS Loop & Iteration – Supports bulk contact creation if needed.
- Success/Failure Path – Sends email notifications via the Gmail node depending on the outcome.

💡 Use Case:
- Automate contact creation from form submissions, CRM exports, sheet updates, webhook data, etc.
- Clean and structure messy data with AI before syncing to LEDGERS.
- Save manual hours and reduce errors in contact data entry.
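A hedged illustration of the kind of JSON the Structured Output Parser might hand to the Create a Contact step. The field names below are assumptions made for illustration; match them to the contact fields your LEDGERS account actually expects.

```javascript
// Illustrative only: these field names are assumptions, not the template's actual LEDGERS schema.
const exampleParsedContact = {
  name: 'Jane Doe',
  email: 'jane.doe@example.com',
  phone: '+91 98765 43210',
  company: 'Acme Traders',
  source: 'Website contact form', // origin of the raw input, useful for auditing
};
```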
by Rapiwa
Who is this for?
This workflow listens for new or updated WooCommerce orders, cleans and structures the order data, processes orders in batches, and standardizes WhatsApp phone numbers. It verifies phone numbers via the Rapiwa API, sends invoice links or messages to verified numbers, and logs results into separate Google Sheets tabs for verified and unverified numbers. Throttling and looping are managed using batch processing and wait delays.

What this Workflow Does
- Receives order events (e.g., order.updated) from WooCommerce or a similar trigger.
- Extracts the customer, billing/shipping address, product list, and invoice link from the order payload.
- Processes orders/items in batches for controlled throughput.
- Cleans and normalizes phone numbers by removing non-digit characters.
- Verifies whether a phone number is registered on WhatsApp using the Rapiwa API.
- If verified, sends a personalized message or invoice link via Rapiwa's send-message endpoint.
- If not verified, logs the customer as unverified in Google Sheets.
- Logs every send attempt (status and validity) into Google Sheets.
- Uses Wait nodes and batching to avoid API rate limits.

Key Features
- Trigger-based automation (WooCommerce trigger; adaptable to a Shopify webhook).
- Batch processing using SplitInBatches for stable throughput.
- Phone number cleaning using JavaScript (waNoStr.replace(/\D/g, "")); see the sketch below.
- Pre-send WhatsApp verification via Rapiwa to reduce failed sends.
- Conditional branching (IF node) between verified and unverified flows.
- Personalized message templates that include product and customer fields.
- Logging to Google Sheets with separate flows for verified/sent and unverified/not sent.
- Wait node for throttling and looping control.

Requirements
- Running n8n instance with these nodes: HTTP Request, Code, SplitInBatches, IF, Google Sheets, Wait, and a WooCommerce trigger (or equivalent).
- Rapiwa account and Bearer token for the verify/send endpoints.
- Google account and Google Sheets access with OAuth2 credentials.
- WooCommerce store access credentials (or Shopify credentials if adapting).
- Incoming order payloads containing billing and line_items fields.

Google Sheet format (example rows)
A Google Sheet formatted like this ➤ Sample

| Customer Name | Phone Number | Email Address | Address | Product Title | Product ID | Size | Quantity | Total Price | Product Image | Invoice Link | Status | Validity |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur | T-Shirt - XL | 110 | XL | 1 | BDT 499.00 | https://your_shop_domain/Product/gg.img | https://your_shop_domain/INV/DAS | sent | verified |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur | T-Shirt - XL | 110 | XL | 1 | BDT 499.00 | https://your_shop_domain/Product/gg.img | https://your_shop_domain/INV/DAS | not sent | unverified |

Important Notes
- The Code nodes assume billing and line_items exist in the incoming payload; update the mappings if your source differs.
- The message template references products[0]; if orders contain multiple items, update the logic to summarize or iterate over products.
- Start testing with small batches to avoid accidental mass messaging and to respect Rapiwa rate limits.

Useful Links
- Dashboard: https://app.rapiwa.com
- Official Website: https://rapiwa.com
- Documentation: https://docs.rapiwa.com

Support & Help
- WhatsApp: Chat on WhatsApp
- Discord: SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: https://spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
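A minimal sketch of the phone-number cleaning step described in Key Features, written as it might appear in an n8n Code node. The waNoStr name and the \D regex come from the template description; the billing.phone source field and the output field name are assumptions about the payload shape.

```javascript
// n8n Code node (Run Once for Each Item) - illustrative sketch of the cleaning step.
// Assumes a WooCommerce-style payload where the phone number lives at billing.phone.
const order = $input.item.json;
const waNoStr = String((order.billing && order.billing.phone) || '');

// Strip every non-digit character before passing the number to the Rapiwa verify endpoint.
const cleanedNumber = waNoStr.replace(/\D/g, '');

return {
  json: {
    ...order,
    whatsapp_number: cleanedNumber,
  },
};
```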
by Pramod Kumar Rathoure
A RAG Chatbot with n8n and Pinecone Vector Database

Retrieval-Augmented Generation (RAG) allows Large Language Models (LLMs) to provide context-aware answers by retrieving information from an external vector database. In this post, we'll walk through a complete n8n workflow that builds a chatbot capable of answering company policy questions using the Pinecone Vector Database and OpenAI models.

Our setup has two main parts:
1. Data Loading to RAG – documents (company policies) are ingested from Google Drive, processed, embedded, and stored in Pinecone.
2. Data Retrieval using RAG – user queries are routed through an AI Agent that uses Pinecone to retrieve relevant information and generate precise answers.

1. Data Loading to RAG
This workflow section handles document ingestion. Whenever a new policy file is uploaded to Google Drive, it is automatically processed and indexed in Pinecone.

Nodes involved:
- **Google Drive Trigger** – Watches a specific folder in Google Drive. Any new or updated file triggers the workflow.
- **Google Drive (Download)** – Fetches the file (e.g., a PDF policy document) from Google Drive for processing.
- **Recursive Character Text Splitter** – Splits long documents into smaller chunks (with a defined overlap). This keeps embeddings context-rich so retrieval works effectively.
- **Default Data Loader** – Reads the binary document (a PDF in this setup) and extracts the text.
- **OpenAI Embeddings** – Generates high-dimensional vector representations of each text chunk using OpenAI's embedding models.
- **Pinecone Vector Store (Insert Mode)** – Stores the embeddings in a Pinecone index (n8ntest) under a chosen namespace, making the policy data searchable by semantic similarity (a rough code sketch of this load step appears at the end of this post).

👉 Example flow: When HR uploads a new Work From Home Policy PDF to Google Drive, it is automatically split, embedded, and indexed in Pinecone.

2. Data Retrieval using RAG
Once documents are loaded into Pinecone, the chatbot is ready to handle user queries. This section of the workflow connects the chat interface, the AI Agent, and the retrieval pipeline.

Nodes involved:
- **When Chat Message Received** – Acts as the webhook entry point when a user sends a question to the chatbot.
- **AI Agent** – The core reasoning engine. It is configured with a system message instructing it to only use Pinecone-backed knowledge when answering.
- **Simple Memory** – Keeps track of the conversation context so the bot can handle multi-turn queries.
- **Vector Store QnA Tool** – Queries Pinecone for the chunks most relevant to the user's question. In this workflow, it is configured to fetch company policy documents.
- **Pinecone Vector Store (Query Mode)** – Acts as the connection to Pinecone, fetching the embeddings that best match the query.
- **OpenAI Chat Model** – Refines the retrieved chunks into a natural and concise answer, keeping it grounded in the source material.
- **Calculator Tool** – Optional helper if the query involves numerical reasoning (e.g., leave calculations or benefit amounts).

👉 Example flow: A user asks "How many work-from-home days are allowed per month?". The AI Agent queries Pinecone through the Vector Store QnA tool, retrieves the relevant section of the HR policy, and returns a concise answer grounded in the actual document.

Wrapping Up
By combining n8n automation, Pinecone for vector storage, and OpenAI for embeddings + LLM reasoning, we've created a self-updating RAG chatbot.
- The **Data Loading pipeline** ensures that every new company policy document uploaded to Google Drive is immediately available for semantic search.
- The **Data Retrieval pipeline** allows employees to ask natural language questions and get document-backed answers.

This setup can easily be adapted for other domains: compliance manuals, tax regulations, legal contracts, or even product documentation.
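The n8n nodes handle all of this internally, but a rough JavaScript sketch of the load-time steps (split, embed, upsert) may help clarify what happens under the hood. It assumes the official openai and @pinecone-database/pinecone SDKs; the index name n8ntest comes from the workflow, while the namespace, chunk size, and embedding model are illustrative choices rather than the workflow's actual settings.

```javascript
// Rough sketch of the chunk -> embed -> upsert pipeline (not the workflow's actual code).
import OpenAI from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const index = pinecone.index('n8ntest').namespace('company-policies'); // namespace is an assumption

// Naive stand-in for the Recursive Character Text Splitter: fixed-size chunks with overlap.
function splitText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

export async function indexDocument(docId, text) {
  const chunks = splitText(text);

  // Embed all chunks in one call; the model choice here is illustrative.
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: chunks,
  });

  // Upsert one vector per chunk, keeping the raw text as metadata for retrieval.
  await index.upsert(
    data.map((item, i) => ({
      id: `${docId}-${i}`,
      values: item.embedding,
      metadata: { text: chunks[i] },
    }))
  );
}
```

At query time, the Vector Store QnA Tool does the mirror image of this: embed the question, query the same index and namespace, and pass the top matches to the chat model.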
by Sulieman Said
How to use the provided n8n workflow (step‑by‑step): what matters, what it's good for, and costs per run.

What this workflow does (in simple terms)
1) You write (or speak) your idea in Telegram.
2) The workflow builds two short prompts:
   - Image prompt → generates one thumbnail via KIE.ai – Nano Banana (Gemini 2.5 Flash Image).
   - Video prompt → starts a Veo‑3 (KIE.ai) video job using the thumbnail as the init image.
3) You receive the thumbnail first, then the short video back in Telegram once rendering completes.

Typical output: 1 PNG thumbnail + 1 short MP4 video (e.g., 8–12 s, 9:16).

Why this is useful
- **Rapid ideation**: Turn a quick text/voice idea into a ready‑to‑post thumbnail + matching short video.
- **Consistent look**: The video uses the thumbnail as the init image, keeping colors, objects, and mood consistent.
- **One chat = full pipeline**: Everything happens directly inside Telegram, with no context switches.
- **Agency‑ready**: Collect ideas from client/team chats and deliver outputs quickly.

What you need before importing
1) KIE.ai account & API key
   - Sign up/in at KIE.ai, go to Dashboard → API / Keys.
   - Copy your KIE_API_KEY (keep it private).
2) Telegram Bot (BotFather)
   - In Telegram, open @BotFather → command /newbot.
   - Choose a name and a unique username (must end with bot).
   - Copy your Bot Token (keep it private).
3) Your Telegram Chat ID (browser method)
   - Send any message to your bot so you have an active chat.
   - Open Telegram Web and the chat with the bot.
   - Find the chat ID in the URL.

Import & minimal configuration (n8n)
1) Import the provided workflow JSON in n8n.
2) Create Credentials:
   - Telegram API: paste your Bot Token.
   - HTTP (KIE.ai): usually you'll pass Authorization: Bearer {{ $env.KIE_API_KEY }} directly in the HTTP Request node headers, or make a generic HTTP credential that injects the header.
3) Replace hardcoded values in the template:
   - Chat ID: use an expression like {{ $json.message.chat.id }} from the Telegram Trigger (prefer dynamic over hardcoded IDs).
   - Authorization headers: never in query params, always in headers (see the sketch after this section).
   - Content‑Type spelling: Content-Type (no typos).

How to run it (basic flow)
1) Start the workflow (activate the trigger).
2) Send a message to your bot, e.g. "glass hourglass on a black mirror floor, minimal, elegant".
3) The bot replies with the thumbnail (PNG), then the Veo‑3 video (MP4).
If you send a voice message, the flow will download & transcribe it first, then proceed as above.

Pricing (rule of thumb)
- **Image (Nano Banana via KIE.ai)**: ~$0.02–$0.04 per image (plan‑dependent).
- **Video (Veo‑3 via KIE.ai)**:
  - Fast: $0.40 per 8 seconds ($0.05/s)
  - Quality: $2.00 per 8 seconds ($0.25/s)
- Typical run (1 image + 8 s Fast video) ≈ $0.42–$0.44.

> These are indicative values. Check your KIE.ai dashboard for the latest pricing/quotas.

Why KIE.ai over the "classic" Google API?
- **Cheaper in practice** for short video clips and image generation in this pipeline.
- **One vendor** for both image & video (same auth, similar responses) = less integration hassle.
- **Quick start**: Playground/tasks/status endpoints are n8n‑friendly for polling workflows.

Security & reliability tips
- **Never hardcode** API keys or Chat IDs into nodes; use **Credentials** or **environment variables**.
- Add IF + error paths after each HTTP node: if status != 200, send a friendly Telegram message ("Please try again") and log to an admin.
- If you use callback URLs for video completion, ensure the URL is publicly reachable (n8n Webhook URL). Otherwise, stick to polling.
- For rate limits, add a Wait node and limit concurrency in the workflow settings.
- Keep aspect ratio & duration consistent across the prompt + API calls to avoid unexpected crops.

Advanced: voice input (optional)
The template supports voice via Switch → Download → Transcribe (Whisper/OpenAI). Ensure your OpenAI credential is set and your n8n instance can fetch the audio file from Telegram.

Example prompt patterns (keep them short & generic)
- **Thumbnail prompt**: "Minimal, elegant, surreal [OBJECT], clean composition, 9:16"
- **Video prompt**: "Cinematic [OBJECT], slow camera move, elegant reflections, minimal & surreal mood, 9:16, 8–12 s."
You can later replace the simple prompt builder with a dedicated LLM step or a fixed style guide for your brand.

Final notes
- This template focuses on a solid, reliable pipeline first. You can always refine prompts later.
- Start with Veo‑3 Fast to keep iteration costs low; switch to Quality for final renders.
- Consider saving outputs (S3/Drive) and logging prompts/URLs to a sheet for audit & analytics.

Questions or custom requests? 📩 suliemansaid.business@gmail.com
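A minimal sketch of the "Bearer token in headers, never in query params" rule, written as a plain fetch call you could adapt in an n8n Code node or mirror in an HTTP Request node. The endpoint URL below is a placeholder, not a documented KIE.ai route; check the KIE.ai docs for the real task-creation and status endpoints.

```javascript
// Illustrative only: the URL below is a placeholder, not a real KIE.ai endpoint.
const KIE_API_KEY = process.env.KIE_API_KEY; // never hardcode the key inside a node

const response = await fetch('https://YOUR-KIE-ENDPOINT-HERE/task', {
  method: 'POST',
  headers: {
    // Correct: the token goes in the Authorization header, not in the query string.
    'Authorization': `Bearer ${KIE_API_KEY}`,
    'Content-Type': 'application/json', // exact spelling matters
  },
  body: JSON.stringify({
    prompt: 'glass hourglass on a black mirror floor, minimal, elegant',
    aspect_ratio: '9:16', // keep aspect & duration consistent with your prompt (field name is an assumption)
  }),
});

if (!response.ok) {
  // Mirror the workflow's error path: notify the user and log for the admin.
  throw new Error(`KIE.ai request failed with status ${response.status}`);
}

const result = await response.json();
```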
by Quinten Alexander
Your Personal RSS Feed of YouTube Videos!

This workflow creates an RSS feed containing the most recent videos published by your favorite channels. Use it in combination with your favorite RSS reader and don't miss out on any of your favorite creators' content without all the distractions of YouTube. You can even play the video right from your RSS reader without ever having to visit YouTube itself!

Who's it for
This workflow is for everyone who likes to keep updated about videos from their favorite creators through their preferred RSS app.

How it works
- The RSS client triggers the webhook of this workflow.
- The RSS feeds from your selected channels are pulled from YouTube.
- The resulting feeds are filtered so only normal videos (no Shorts), posted in the last week, remain (see the sketch below).
- For each video, the video player and the full video description are pulled from the YouTube API.
- For each video, an RSS item is created containing this video player and the video description as the content.
- The RSS items are cached in a Redis database to prevent pulling the same information from the YouTube API on each webhook call.
- A full RSS feed is built and returned to the calling webhook.

How to set up
Follow the steps in the red notes (from 1 to 4) to configure the workflow:
1. Set the IDs of the channels you want to watch.
2. Configure your Redis credentials.
3. Configure your Google/YouTube API credentials.
4. Copy the webhook URL and paste it into your RSS reader.
Don't forget to activate the workflow! Only the nodes inside a red note need configuration; all other nodes are good to go. You are, however, free to change those nodes to your liking!

Requirements
This workflow has two requirements:
- A Redis database used to cache the RSS items (see the blue note on how to set up a Redis database yourself).
- Google API credentials to access the YouTube API.

Customizing this workflow
Add any YouTube channel you want by adding its channel ID in the "Set Channels" node at the start of this workflow. If you aren't afraid of some XML RSS code, you can dive into the code blocks and change the resulting RSS feed. You can change the feed's title, description, or image. Or go all in on text processing and process the video description before it is added to the RSS items (such as removing sponsors or links to social media). You can also extend this workflow by adding RSS items from other feeds or sources.
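A hedged sketch of the filtering step, assuming each incoming item is one parsed RSS entry with a publish date (isoDate/pubDate) and a link, which is what YouTube channel feeds provide. The Shorts check via the URL is only one possible heuristic and is an assumption, not necessarily how the template detects Shorts.

```javascript
// n8n Code node sketch (Run Once for All Items) - illustrative, not the template's exact code.
const ONE_WEEK_MS = 7 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - ONE_WEEK_MS;

const recentVideos = $input.all().filter(item => {
  const entry = item.json;
  const publishedAt = new Date(entry.isoDate || entry.pubDate).getTime();

  // Keep only entries posted in the last week.
  if (Number.isNaN(publishedAt) || publishedAt < cutoff) return false;

  // Shorts detection shown here as a URL heuristic; the template may use another method,
  // such as checking the video duration via the YouTube API.
  return !String(entry.link || '').includes('/shorts/');
});

return recentVideos;
```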
by Jaruphat J.
LINE OCR Workflow to Extract and Save Thai Government Letters to Google Sheets

This template automates the extraction of structured data from Thai government letters received via LINE or uploaded to Google Drive. It uses Mistral AI for OCR and OpenAI for information extraction, saving the results to a Google Sheet.

Who's it for?
- Thai government agencies or teams receiving official documents via LINE or Google Drive
- Automation developers working with document intake and OCR
- Anyone needing to extract fields from scanned Thai letters and store structured info

What it does
This n8n workflow:
- Receives documents from two sources: a LINE webhook (via the Messaging API) and Google Drive (new file trigger)
- Checks the file type (PDF or image)
- Runs OCR with Mistral AI (Document or Image model)
- Uses OpenAI to extract key metadata such as book_id, subject, recipient (to), signatory, date, contact info, etc. (see the sketch below)
- Stores the structured data in Google Sheets
- Replies to the LINE user with the extracted info or moves files into archive folders (Drive)

How to Set It Up
1. Create a Google Sheet with a tab named data and the following columns: book_id, date, subject, to, attach, detail, signed_by, signed_by_position, contact_phone, contact_email, download_url
2. Set up the required credentials: googleDriveOAuth2Api, googleSheetsOAuth2Api, httpHeaderAuth for the LINE Messaging API, openAiApi, mistralCloudApi
3. Define environment variables: LINE_CHANNEL_ACCESS_TOKEN, GDRIVE_INVOICE_FOLDER_ID, GSHEET_ID, MISTRAL_API_KEY
4. Deploy the webhook to receive files from the LINE Messaging API (path: /line-invoice)
5. Monitor Drive uploads using the Google Drive Trigger

How to Customize the Workflow
- Adjust the information extraction schema in the OpenAI Information Extractor node to match your document layout
- Add logic for different document types if you have more than one format
- Modify the LINE reply message format or use Flex Message
- Update the Move File node if you want to archive to a different folder

Requirements
- n8n self-hosted or cloud instance
- Google account with access to Drive and Sheets
- LINE Developer Account
- OpenAI API key
- Mistral Cloud API key

Notes
- Community nodes used: @n8n/n8n-nodes-base.mistralAi
- This workflow supports both document images and PDF files
- File handling is done dynamically via MIME type
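A hedged example of the kind of record the extraction step is expected to produce, based on the Google Sheet columns listed above. Only the field names come from the template description; the sample values are invented for illustration.

```javascript
// Illustrative example record matching the sheet columns above; the values are made up.
const extractedLetter = {
  book_id: 'ศธ 0210/1234',
  date: '2025-01-15',
  subject: 'Request for budget approval',
  to: 'Director of the Provincial Office',
  attach: '1 budget summary sheet',
  detail: 'Short summary of the letter body produced by the extractor',
  signed_by: 'Somchai Jaidee',
  signed_by_position: 'Division Head',
  contact_phone: '02-123-4567',
  contact_email: 'contact@example.go.th',
  download_url: 'https://drive.google.com/file/d/FILE_ID/view',
};
```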
by Dele Odufuye
N8n OpenAI-Compatible API Endpoints

Transform your n8n workflows into OpenAI-compatible API endpoints, allowing you to access multiple workflows as selectable AI models through a single integration.

What This Does
This workflow creates two API endpoints that mimic the OpenAI API structure:
- /models: Lists all n8n workflows tagged with aimodel (or any other tag of your choice)
- /chat/completions: Executes chat completions with your selected workflows, supporting both text and stream responses

Benefits
- Access Multiple Workflows: Connect to all your n8n agents through one API endpoint instead of creating separate pipelines for each workflow.
- Universal Platform Support: Works with any application that supports OpenAI-compatible APIs, including OpenWebUI, Microsoft Teams, Zoho Cliq, and Slack.
- Simple Workflow Management: Add new workflows by tagging them with aimodel. No code changes needed.
- Streaming Support: Handles both standard responses and streaming for real-time agent interactions.

How to Use
1. Download the workflow JSON file from this repository
2. Import it into your n8n instance
3. Tag your workflows with aimodel to make them accessible through the API
4. Create a new OpenAI credential in n8n and change the Base URL to point to your n8n webhook endpoints (see the sketch below). Learn more about OpenAI Credentials
5. Point your chat applications to your n8n webhook URL as if it were an OpenAI API endpoint

Requirements
- n8n instance (self-hosted or cloud)
- Workflows you want to expose as AI models
- Any OpenAI-compatible chat application

Documentation
For detailed setup instructions and an implementation guide, visit https://medium.com/@deleodufuye/how-to-create-openai-compatible-api-endpoints-for-multiple-n8n-workflows-803987f15e24.

Inspiration
This approach was inspired by Jimleuk's workflow on n8n Templates.
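A short sketch of how a client might talk to these endpoints using the official openai JavaScript SDK. The base URL and model name are placeholders for your own n8n webhook path and tagged workflow; the request shape itself is the standard OpenAI chat-completions format that this workflow mimics.

```javascript
// Client-side sketch: point the OpenAI SDK at your n8n webhook instead of api.openai.com.
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://your-n8n-instance/webhook/v1', // placeholder: your workflow's webhook base path
  apiKey: 'any-value-your-webhook-accepts',        // placeholder: depends on how you secure the webhook
});

// GET /models lists the workflows tagged with "aimodel".
const models = await client.models.list();
console.log(models.data.map(m => m.id));

// POST /chat/completions runs the selected workflow as if it were a model.
const completion = await client.chat.completions.create({
  model: 'my-tagged-workflow', // placeholder: one of the ids returned by /models
  messages: [{ role: 'user', content: 'Hello from an OpenAI-compatible client!' }],
});
console.log(completion.choices[0].message.content);
```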
by Rakin Jakaria
Use cases are many: automate Gmail tasks such as sending, replying, labeling, deleting, and fetching emails, all with AI assistance. Perfect for YouTubers managing viewer emails, sales teams handling inquiries, freelancers responding to client requests, or professionals keeping their inbox organized.

Good to know
- At the time of writing, each Gemini request is billed per token. See Gemini Pricing for updated details.
- The workflow uses Gmail labels (e.g., youtube-viewers, sales-inquiry, meeting-request, potential-clients, collaboration-requests) for classification; make sure these exist in your Gmail account.

How it works
- **Chat Trigger**: You interact with the agent via a chat interface (webhook).
- **AI Agent**: A Gemini-powered assistant interprets your instructions (send, reply, label, delete, fetch emails).
- **Email Actions**: Based on your request, the assistant uses Gmail tools to act on emails (Send, Reply, Label, Delete, Get Many).
- **Contact Lookup**: If only a name is provided, the agent checks Google Sheets for the matching email address. If it is not found, it prompts you to add it.
- **Memory**: A buffer memory stores chat context so the assistant can maintain continuity across multiple interactions.
- **Labeling**: Emails can be auto-labeled for better organization (e.g., client inquiries, meeting requests).

How to use
Send commands like:
- "Reply to John's email with a follow-up about the project."
- "Label Sarah's email as potential-client."
- "Delete the latest spam email."
The Gmail Agent will handle the request instantly and keep everything logged properly.

Requirements
- Gmail account connected with OAuth2 credentials
- Google Gemini API key for AI processing
- Google Sheets for contact management
- Pre-created Gmail labels for organization

Customising this workflow
- Add new Gmail labels for your workflow (e.g., Invoices, Support Tickets).
- Connect to a CRM (e.g., HubSpot, Notion, or Airtable) for syncing email data.
- Enhance AI replies with dynamic templates stored in Google Sheets.
- Extend chat commands to include batch actions (e.g., "Archive all emails older than 30 days").
by Roshan Ramani
Nano Banana AI Image Editor

Transform your Telegram photos with AI-powered image processing using the revolutionary Nano Banana technology. This workflow automatically receives photos via Telegram, processes them through Google's advanced Gemini 2.5 Flash vision model, and sends back intelligently enhanced images, all powered by the Nano Banana processing pipeline.

Who's it for
Perfect for content creators, social media managers, photographers, and anyone who wants to automatically enhance their Telegram photos with AI. Whether you're running a photo editing service, creating content for clients, or just want smarter image processing in your personal chats, the Nano Banana AI editor delivers professional-grade results.

How it works
The Nano Banana workflow creates an intelligent Telegram bot that processes images in real time. When you send a photo with a caption to your bot, it automatically downloads the image, converts it to the proper format, sends it to Google's Gemini AI for analysis and enhancement, and then returns the processed result (see the sketch below). The Nano Banana engine optimizes every step for speed and quality.

How to set up
1. Create Telegram Bot: Get your bot token from @BotFather on Telegram
2. OpenRouter Account: Sign up at openrouter.ai for free Gemini access
3. Configure Credentials: Add your Telegram and OpenRouter API keys to n8n
4. Update Chat ID: Replace "YOUR_CHAT_ID_HERE" with your actual Telegram chat ID
5. Activate Webhook: Enable the Telegram trigger to start receiving messages

Requirements
- n8n instance (cloud or self-hosted)
- Telegram Bot API credentials
- OpenRouter account (free tier available)
- Basic understanding of webhook configuration

How to customize the workflow
The Nano Banana editor is highly customizable:
- **Change AI Model**: Modify the model parameter in the "Nano Banana Image Processor" node
- **Add Filters**: Insert additional processing nodes before the AI analysis
- **Custom Prompts**: Edit the text content sent to Gemini for different processing styles
- **Multiple Chats**: Duplicate the final node for different Telegram destinations
- **Error Handling**: Add conditional logic for failed processing attempts
- **Batch Processing**: Extend to handle multiple images simultaneously

The Nano Banana technology ensures optimal performance while maintaining flexibility for your specific use cases.
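A hedged sketch of the image-processing request the workflow sends, expressed as a raw call to OpenRouter's OpenAI-compatible chat-completions endpoint. The model slug should be confirmed against OpenRouter's model list, and the caption and base64 placeholder are illustrative; the actual node configuration may differ.

```javascript
// Sketch of the OpenRouter call (OpenAI-compatible chat completions with an inline image).
const caption = 'make it look like golden hour';    // taken from the Telegram message caption
const imageBase64 = '<BASE64_OF_DOWNLOADED_PHOTO>'; // produced by the Telegram file download step

const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENROUTER_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    // Model slug is an assumption - confirm the exact id on openrouter.ai before using it.
    model: 'google/gemini-2.5-flash-image-preview',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: `Enhance this photo: ${caption}` },
          { type: 'image_url', image_url: { url: `data:image/jpeg;base64,${imageBase64}` } },
        ],
      },
    ],
  }),
});

const result = await response.json();
```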
by Intuz
This n8n template from Intuz provides a complete solution to automate your order creation process. It seamlessly syncs order data from an Airtable base directly to your Shopify store, creates the official order, and automatically sends a beautiful confirmation email to the customer, closing the loop by updating the status in Airtable.

Who's this workflow for?
- E-commerce Managers
- Operations Teams
- Businesses with Custom Order Processes (e.g., B2B, phone orders, quotes)
- Shopify Store Owners using Airtable as a CRM

How it works
1. Triggered from Airtable: The workflow starts instantly when an Airtable Automation sends a signal via a webhook. This happens when you mark an order as ready to be processed in your Airtable base.
2. Fetch Order Details: n8n receives the record ID from Airtable and fetches the complete order details, including customer information and the specific line items for that order.
3. Create Order in Shopify: All the gathered information is used to create a new, official order directly in your Shopify store.
4. Send Confirmation Email: Once the order is successfully created in Shopify, a professionally formatted HTML order confirmation email is sent to the customer via Gmail.
5. Update Airtable Status: Finally, the workflow updates the original order record in Airtable, marking its status as "Done" to prevent duplicate processing and keep your records in sync.

Key Requirements to Use This Template
1. n8n Instance: An active n8n account (Cloud or self-hosted).
2. Airtable Base: An Airtable base on a "Pro" plan or higher (required for Airtable Automations). It should contain tables for Orders and Order Line Items.
3. Shopify Store: An active Shopify store with API access permissions.
4. Gmail Account: A Gmail account to send confirmation emails.

Setup Instructions
1. Configure the n8n Workflow:
   - Webhook Node: Activate the workflow to get the Production URL from the "Webhook" node. Copy this URL.
   - Airtable Nodes: In the Get a record and Update record nodes, connect your Airtable credentials and select the correct Base and Table IDs.
   - Shopify Node: In the Create an order node, connect your Shopify store using OAuth2 credentials.
   - Gmail Node: In the Send a message node, connect your Gmail account.
2. Set Up the Airtable Automation (Crucial Step):
   - Go to your Airtable base and click on "Automations".
   - Create a new automation. For the trigger, select "When a record meets conditions".
   - Choose your Orders table and set a condition that makes sense for you (e.g., when "Shopify Ordered" is "Pending").
   - For the action, choose "Run a script". Paste the code below into the script editor:

```javascript
const inputConfig = input.config();
const recordId = inputConfig.recordId;
const webhookUrl = 'PASTE_YOUR_N8N_PRODUCTION_URL_HERE';

await fetch(webhookUrl, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ recordId: recordId }),
});
```

   - Replace PASTE_YOUR_N8N_PRODUCTION_URL_HERE with the Production URL you copied from n8n.
   - Add an input variable to the script named recordId and set its value to the "Airtable record ID" from the trigger step.
   - Test the script and turn your Airtable Automation ON.

Connect with us
Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
For Custom Workflow Automation, click here: Get Started
by Alejandro Scuncia
An extendable triage workflow that classifies severity, sets components, and posts actionable guidance for support engineers using n8n + Gemini + Cache-Augmented Generation (CAG). Designed for Jira Service Management, but easily adaptable to Zendesk, Freshdesk, or ServiceNow.

Description
Support teams lose valuable time when tickets are misclassified: wrong severity, missing components, unclear scope. Engineers end up re-routing issues and chasing missing info instead of solving real problems. This workflow automates triage by combining domain rules with AI-driven classification and guidance, so engineers receive better-prepared tickets.

It includes:
- ✅ Real-time ticket capture via webhook
- ✅ AI triage for severity and component
- ✅ CAG-powered guidance: 3 next steps + missing info
- ✅ Internal audit comment with justifications & confidence
- ✅ Structured metrics for reporting

⚙️ How It Works
This workflow runs in 4 stages:

📥 Entry & Setup
- Webhook triggers on ticket creation
- Loads domain rules (priority policy, components, guidance templates)
- Sets the confidence threshold & triage label

🧠 AI Analysis (Gemini + CAG)
- Builds a structured payload with ticket + domain context
- Gemini proposes severity, component, guidance, and missing info
- Output is normalized for safe automation (valid JSON, conservative confidence); see the sketch after this section

🤖 Update & Audit
- Updates fields (priority, component, labels) if confidence ≥ threshold
- Posts an internal audit comment with: 3 next steps, missing info to request, and justifications + confidence

📊 Metrics
- Captures applied changes, confidence scores, and API statuses
- Enables reliability tracking & continuous improvement

🌟 Key Features
- **CAG-powered guidance** → lightning-fast, context-rich next steps
- **Explainable automation** → transparent audit comments for every decision
- **Domain-driven rules** → adaptable to any product or support domain
- **Portable** → swap JSM for **Zendesk, Freshdesk, or ServiceNow** via HTTP nodes

🔐 Required Credentials

| Tool | Use |
|------|-----|
| Jira Service Management | Ticketing system (API + comments) |
| Google Gemini/Gemma | LLM analysis |
| HTTP Basic Auth | For Jira API requests (bot user) |

⚠️ Setup tip: create a dedicated bot user in Jira Service Management with an API token. This ensures clean audit logs and proper permissions, and avoids mixing automation with human accounts.

🧰 Customization Tips
- Replace https://your-jsm-url/... with your own Jira Service Management domain.
- Update the credentials with the bot user's API token created above.
- Swap the Jira Service Management nodes with other ticketing systems like Zendesk, Freshdesk, or ServiceNow.
- Extend the domain schema (keywords, guidance_addons) to fit your product or support environment.

🗂️ Domain Schema
This workflow uses a domain-driven schema to guide triage. It defines:
- **Components** → valid areas for classification
- **Priority policies & rules** → how severity is determined
- **Keywords** → domain-specific signals (e.g., "API error", "all users affected")
- **Guidance addons** → contextual next steps for engineers
- **No-workaround phrases** → escalate severity if present

✨ The full domain JSON (with the complete keyword & guidance mapping) is included as a sticky note inside the workflow.
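To make the "normalized output" idea concrete, here is a hedged example of the kind of result the AI analysis stage might hand to the update and audit steps. The field names and values are illustrative assumptions; the real schema lives in the workflow's domain JSON and output parser.

```javascript
// Illustrative triage result; field names are assumptions, not the workflow's actual schema.
const exampleTriageResult = {
  severity: 'High',          // must map to a valid JSM priority
  component: 'API Gateway',  // must be one of the components defined in the domain schema
  confidence: 0.82,          // compared against the configured threshold before fields are auto-updated
  next_steps: [
    'Check the API gateway error-rate dashboard for the reported time window',
    'Confirm whether the issue affects all users or a single tenant',
    'Collect a failing request ID and attach the gateway logs',
  ],
  missing_info: ['Exact error message', 'Approximate start time of the incident'],
  justification: 'Keywords "API error" and "all users affected" match the high-severity policy.',
};
```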
💡 Use Cases
- Automated triage for IT & support tickets
- Incident classification with outage/security detection
- Contextual guidance for engineers in customer support
- Faster escalation and routing of critical issues

🧠 Who It's For
- Support teams running Jira Service Management
- Platform teams automating internal ticket ops
- AI consultants prototyping practical triage workflows
- Builders exploring CAG today, RAG tomorrow

🚀 Try It Out!
1. ⚙️ Import the workflow in n8n (cloud or self-hosted).
2. 🔑 Add credentials (JSM API + Gemini key).
3. ⚡ Configure the setup (confidence threshold, triage label, domain rules).
4. 🔗 Connect the webhook in JSM: issue_created → n8n webhook URL.
5. 🧪 Test with a ticket → see auto-updates + the AI audit comment.
6. 🔄 Swap the ticketing system → adapt the HTTP nodes for Zendesk, Freshdesk, or ServiceNow.

💬 Have Feedback or Ideas? I'd Love to Hear
This project is open, modular, and evolving. If you try it, adapt it, or extend it, I'd love to hear your feedback; let's improve it together in the n8n builder community.
📧 ascuncia.es@gmail.com
🔗 LinkedIn