by Yang
## Who is this for?

This workflow is perfect for eCommerce teams, market researchers, and product analysts who want to track or extract product information from websites that restrict scraping tools. It's also useful for virtual assistants handling product comparison tasks.

## What problem is this workflow solving?

Many eCommerce and retail sites use dynamic content or anti-bot protections that make traditional scraping methods unreliable. This workflow bypasses those issues by taking a screenshot of the full page, using OCR to extract visible text, and summarizing product information with GPT-4o—all fully automated.

## What this workflow does

This workflow monitors a Google Sheet for new URLs. Once a new link is added, it performs the following steps:

1. **Trigger on New URL in Sheet** – Watches for new rows added to a Google Sheet.
2. **Screenshot URL via Dumpling AI** – Sends the URL to Dumpling AI's screenshot endpoint to capture a full-page image of the product webpage.
3. **Save Screenshot to Drive Folder** – Uploads the screenshot to a specific Google Drive folder for reference or logging.
4. **Extract Text from Screenshot with Dumpling AI** – Uses Dumpling AI's image-to-text endpoint to pull all visible content from the screenshot.
5. **Extract Product Info from Screenshot Text with GPT-4o** – Sends the extracted raw text to GPT-4o, prompting it to identify structured product information such as product name, price, ratings, deals, and purchase options.
6. **Split Each Product Entry** – Splits the GPT response (an array of product objects) so each product becomes an individual item for saving.
7. **Save Products Info to Google Sheet** – Appends each product's structured details to a separate sheet in the same spreadsheet.

## Setup

### Google Sheet

- Create a Google Sheet with at least two sheets:
  - Sheet1 should contain a header row with a column labeled URL.
  - Sheet2 should contain the headers: Product Name, price, purchased, ratings, deal, buyingOptions.
- Connect your Google account in both the trigger and the final write-back node.

### Dumpling AI

- Sign up at Dumpling AI.
- Create an API key and use it for both HTTP modules:
  - Screenshot URL via Dumpling AI
  - Extract Text from Screenshot with Dumpling AI
- The screenshot endpoint used is https://app.dumplingai.com/api/v1/screenshot (see the request sketch at the end of this section).

### Google Drive

- Create a folder for storing screenshots.
- In the Save Screenshot to Drive Folder node, select the correct folder or provide the folder ID.
- Make sure permissions allow uploading from n8n.

### OpenAI

- Provide an API key for GPT-4o in the Extract Product Info from Screenshot Text with GPT-4o node.
- The prompt is structured to return product listings in JSON format.

### Split & Save

- Split Each Product Entry takes the array of product objects from GPT and makes each one a separate execution.
- Save Products Info to Google Sheet writes structured fields into Sheet2 under: Product Name, price, purchased, ratings, deal, buyingOptions.

## How to customize this workflow

- Adjust the GPT prompt to return different product fields (e.g., shipping info, product categories).
- Use a filter node to limit which types of products get written to the final sheet.
- Add sentiment analysis to analyze review content if available.
- Replace Google Drive with Dropbox or another file storage app.

## Notes

- Monitor your API usage on both Dumpling AI and OpenAI to avoid rate limits.
- This setup is great for snapshot-based extraction where scraping is blocked or unreliable.
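For reference, here is a minimal sketch of the call the Screenshot URL via Dumpling AI node makes. The endpoint URL comes from the template; the body fields (url, fullPage) and the response shape are assumptions based on typical screenshot APIs, so check Dumpling AI's API docs for the exact schema.

```javascript
// Hedged sketch of the Dumpling AI screenshot request (n8n Code-node style).
// Only the endpoint URL is taken from the template; body fields are assumed.
const DUMPLING_API_KEY = "YOUR_DUMPLING_API_KEY";

const response = await fetch("https://app.dumplingai.com/api/v1/screenshot", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${DUMPLING_API_KEY}`,
  },
  body: JSON.stringify({
    url: $json.URL,  // the URL read from Sheet1's "URL" column
    fullPage: true,  // assumed flag for full-page capture
  }),
});

return [{ json: await response.json() }];
```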
by David Olusola
# n8n Set Node Tutorial - Complete Guide

## 🎯 How It Works

This tutorial workflow teaches you everything about n8n's Set node through hands-on examples. The Set node is one of the most powerful tools in n8n - it allows you to create, modify, and transform data as it flows through your workflow.

What makes this tutorial special:

- **Progressive Learning**: Starts simple, builds to complex concepts
- **Interactive Examples**: Real working nodes you can modify and test
- **Visual Guidance**: Sticky notes explain every concept
- **Branching Logic**: Shows how Set nodes work in different workflow paths
- **Real Data**: Uses practical examples you'll encounter in automation

The workflow demonstrates 6 core concepts:

1. Basic data types (strings, numbers, booleans)
2. Expression syntax with {{ }} and $json references
3. Complex data structures (objects and arrays)
4. "Keep Only Set" option for clean outputs
5. Conditional data setting with branching logic
6. Data transformation and aggregation techniques

## 📋 Setup Steps

### Step 1: Import the Workflow

1. Copy the JSON from the code artifact above
2. Open your n8n instance in your browser
3. Navigate to the Workflows section
4. Click "Import from JSON" or the import button (usually a "+" or import icon)
5. Paste the JSON into the import dialog
6. Click "Import" to load the workflow
7. Save the workflow (Ctrl+S or click the Save button)

### Step 2: Choose Your Starting Point

**Option A: Default Tutorial Mode** (Recommended for beginners)

- The workflow is ready to run as-is
- Uses a simple "Welcome" message as starting data
- Click "Execute Workflow" to begin

**Option B: Rich Test Data Mode** (Recommended for experimentation)

1. Locate the nodes: Find "Start (Manual Trigger)" and "0. Test Data Input"
2. Disconnect the default: Click the connection line between "Start (Manual Trigger)" → "1. Set Basic Values" and delete it
3. Connect the test data: Drag from the "0. Test Data Input" output to the "1. Set Basic Values" input
4. Execute: Click "Execute Workflow" to run with rich test data

### Step 3: Execute and Learn

1. Run the workflow: Click the "Execute Workflow" button
2. Check outputs: Click on each node to see its output data
3. Read the notes: Each sticky note explains what's happening
4. Follow the flow: Data flows from left to right, top to bottom

### Step 4: Experiment and Modify

Try these experiments:

🔧 **Change Basic Values**:
- Click on "1. Set Basic Values"
- Modify user_age (try 20 vs 35)
- Change user_name to see how it propagates
- Execute and see the changes flow through

📊 **Test Conditional Logic**:
- Set user_age to 20 → triggers the "Student Discount" path
- Set user_age to 30 → triggers the "Premium Access" path
- Watch how the workflow branches differently

🎨 **Modify Expressions**: In "2. Set with Expressions", try changing:
- ={{ $json.score * 2 }} to ={{ $json.score * 3 }}
- ={{ $json.user_name }} Smith to ={{ $json.user_name }} Johnson

(A short sketch of how these expressions evaluate follows at the end of this guide.)

🏗️ **Complex Data Structures**:
- In "3. Set Complex Data", modify the JSON structure
- Add new properties to the user_profile object
- Try nested expressions

## 🎓 Learning Path

**Beginner Level (Nodes 1-2)**
- **Focus**: Understanding basic Set operations
- **Learn**: Data types, static values, simple expressions
- **Time**: 10-15 minutes

**Intermediate Level (Nodes 3-4)**
- **Focus**: Complex data and output control
- **Learn**: Objects, arrays, the "Keep Only Set" option
- **Time**: 15-20 minutes

**Advanced Level (Nodes 5-6)**
- **Focus**: Conditional logic and data aggregation
- **Learn**: Branching workflows, merging data, complex expressions
- **Time**: 20-25 minutes

## 🔍 What Each Node Teaches

| Node | Concept | Key Learning |
|------|---------|-------------|
| 1. Set Basic Values | Data Types | String, number, boolean basics |
| 2. Set with Expressions | Dynamic Data | {{ }} syntax, $json references, $now functions |
| 3. Set Complex Data | Advanced Structures | Objects, arrays, nested properties |
| 4. Set Clean Output | Data Management | "Keep Only Set" for clean final outputs |
| 5a/5b. Conditional Sets | Branching Logic | Different data based on conditions |
| 6. Tutorial Summary | Data Aggregation | Combining and summarizing workflow data |

## 💡 Pro Tips

🚀 **Quick Wins**:
- Always check node outputs after execution
- Use sticky notes as your learning guide
- Experiment with small changes first
- Copy nodes to try variations

🛠️ **Advanced Techniques**:
- Use Keep Only Set for API responses
- Combine static and dynamic data in complex objects
- Leverage conditional paths for different user types
- Reference nested object properties with dot notation

🐛 **Troubleshooting**:
- If expressions don't work, check the {{ }} syntax
- Ensure field names match exactly (case-sensitive)
- Use the expression editor for complex logic
- Check that data types match your expectations

## 🎯 Next Steps After Tutorial

1. Create your own Set nodes in a new workflow
2. Practice with real data from APIs or databases
3. Build data transformation workflows for your specific use cases
4. Combine Set nodes with other n8n nodes like HTTP, Webhook, etc.
5. Explore advanced expressions using JavaScript functions

Congratulations! You now have the foundation to use Set nodes effectively in any n8n workflow. The Set node is truly the "Swiss Army knife" of n8n automation! 🛠️
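As referenced above, here is a quick sketch of how the tutorial's Set-node expressions behave, written as plain JavaScript. It mirrors n8n's expression syntax ({{ }} wrappers, $json for the incoming item); the sample input values are made up for illustration.

```javascript
// How n8n evaluates the tutorial's expressions, shown as plain JavaScript.
// Incoming item (made-up sample data):
const $json = { user_name: "Ada", score: 21 };

// ={{ $json.score * 2 }}  -> a number field set to:
const doubled = $json.score * 2; // 42

// ={{ $json.user_name }} Smith  -> text around the expression is interpolated:
const fullName = `${$json.user_name} Smith`; // "Ada Smith"

// Nested properties use dot notation, as in "3. Set Complex Data":
const profile = { user_profile: { name: $json.user_name, level: "beginner" } };
const level = profile.user_profile.level; // "beginner"
```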
by Oneclick AI Squad
This automated n8n workflow continuously monitors airline schedule changes by fetching real-time flight data, comparing it with stored schedules, and instantly notifying both internal teams and affected passengers through multiple communication channels. The system ensures stakeholders are immediately informed of any flight delays, cancellations, gate changes, or other critical updates.

## Good to Know

- Flight data accuracy depends on the aviation API provider's update frequency and reliability
- Critical notifications (cancellations, major delays) trigger immediate passenger alerts via SMS and email
- Internal Slack notifications keep operations teams informed in real time
- Database logging maintains a complete audit trail of all schedule changes
- The system processes only confirmed schedule changes to avoid false notifications
- Passenger notifications are sent only to those with confirmed tickets for affected flights

## How It Works

1. **Schedule Trigger** - Automatically runs every 30 minutes to check for flight schedule updates
2. **Fetch Airline Data** - Retrieves current flight information from aviation APIs
3. **Get Current Schedules** - Pulls existing schedule data from the internal database
4. **Process Changes** - Compares API data with database records to identify schedule changes (see the comparison sketch at the end of this section)
5. **Check for Changes** - Determines if any updates require processing and notifications
6. **Update Database** - Saves schedule changes to the internal flight database
7. **Notify Slack Channel** - Sends operational updates to the flight operations team
8. **Check Urgent Notifications** - Identifies critical changes requiring immediate passenger alerts
9. **Get Affected Passengers** - Retrieves contact information for passengers on changed flights
10. **Send Email Notifications** - Dispatches detailed schedule change emails via SendGrid
11. **Send SMS (Critical Only)** - Sends urgent text alerts for cancellations and major delays
12. **Update Internal Systems** - Syncs changes with other airline systems via webhooks
13. **Log Sync Activity** - Records all synchronization activities for audit and monitoring

## Data Sources

The workflow integrates with multiple data sources and systems.

**Aviation API (Primary Data Source)**
- Real-time flight status and schedule data
- Departure/arrival times, gates, terminals
- Flight status (on-time, delayed, cancelled, diverted)
- Aircraft and route information

**Internal Flight Database**

flight_schedules table - current schedule data with columns:
- flight_number (text) - Flight identifier (e.g., "AA123")
- departure_time (timestamp) - Scheduled departure time
- arrival_time (timestamp) - Scheduled arrival time
- status (text) - Flight status (active, delayed, cancelled, diverted)
- gate (text) - Departure gate number
- terminal (text) - Terminal identifier
- airline_code (text) - Airline IATA code
- origin_airport (text) - Departure airport code
- destination_airport (text) - Arrival airport code
- aircraft_type (text) - Aircraft model
- updated_at (timestamp) - Last update timestamp
- created_at (timestamp) - Record creation timestamp

passengers table - passenger contact information with columns:
- passenger_id (integer) - Unique passenger identifier
- name (text) - Full passenger name
- email (text) - Email address for notifications
- phone (text) - Mobile phone number for SMS alerts
- notification_preferences (json) - Communication preferences
- created_at (timestamp) - Registration timestamp
- updated_at (timestamp) - Last profile update

tickets table - booking and ticket status with columns:
- ticket_id (integer) - Unique ticket identifier
- passenger_id (integer) - Foreign key to passengers table
- flight_number (text) - Flight identifier
- flight_date (date) - Travel date
- seat_number (text) - Assigned seat
- ticket_status (text) - Status (confirmed, cancelled, checked-in)
- booking_reference (text) - Booking confirmation code
- fare_class (text) - Ticket class (economy, business, first)
- created_at (timestamp) - Booking timestamp
- updated_at (timestamp) - Last modification timestamp

sync_logs table - audit trail and system logs with columns:
- log_id (integer) - Unique log identifier
- workflow_name (text) - Name of the workflow that created the log
- total_changes (integer) - Number of schedule changes processed
- sync_status (text) - Status (completed, failed, partial)
- sync_timestamp (timestamp) - When the sync occurred
- details (json) - Detailed log information and changes
- error_message (text) - Error details if the sync failed
- execution_time_ms (integer) - Processing time in milliseconds

**Communication Channels**
- Slack - Internal team notifications
- SendGrid - Passenger email notifications
- Twilio - Critical SMS alerts
- Internal webhooks - System integrations

## How to Use

1. Import the workflow into your n8n instance
2. Configure aviation API credentials (AviationStack, FlightAware, or airline-specific APIs)
3. Set up a PostgreSQL database connection with the required tables
4. Configure a Slack bot token for operations team notifications
5. Set up a SendGrid API key and email templates for passenger notifications
6. Configure Twilio credentials for SMS alerts (critical notifications only)
7. Test with sample flight data to verify all notification channels
8. Adjust monitoring frequency and severity thresholds based on operational needs
9. Monitor sync logs to ensure reliable data synchronization

## Requirements

**API Access**
- Aviation data provider (AviationStack, FlightAware, etc.)
- SendGrid account for email delivery
- Twilio account for SMS notifications
- Slack workspace and bot token

**Database Setup**
- PostgreSQL database with flight schedule tables
- Passenger and ticket management tables
- Audit logging tables for tracking changes

**Infrastructure**
- n8n instance with appropriate node modules
- Reliable internet connection for API calls
- Proper credential management and security

## Customizing This Workflow

Modify the Process Changes node to adjust change detection sensitivity, add custom business rules, or integrate additional data sources like weather or airport operational data. Customize notification templates in the email and SMS nodes to match your airline's branding and communication style. Adjust the Schedule Trigger frequency based on your operational requirements and API rate limits.
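To make the Process Changes step concrete, here is a minimal sketch of the kind of comparison logic an n8n Code node could run. It assumes both inputs have already been merged into one item with api and db arrays of flight records; the actual node in the template may structure this differently. Field names follow the flight_schedules table above.

```javascript
// Hedged sketch of change detection in an n8n Code node.
// Assumes the incoming item looks like { api: [...], db: [...] }, where both
// arrays hold flight records keyed by flight_number.
const { api, db } = $json;
const dbByFlight = new Map(db.map((f) => [f.flight_number, f]));

const watched = ["departure_time", "arrival_time", "status", "gate", "terminal"];
const changes = [];

for (const current of api) {
  const stored = dbByFlight.get(current.flight_number);
  if (!stored) continue; // no stored schedule to compare against

  const diffs = watched.filter((field) => current[field] !== stored[field]);
  if (diffs.length > 0) {
    changes.push({
      flight_number: current.flight_number,
      changed_fields: diffs,
      // cancellations and diversions should trigger SMS in later nodes
      is_critical: ["cancelled", "diverted"].includes(current.status),
      new_values: Object.fromEntries(diffs.map((f) => [f, current[f]])),
    });
  }
}

return changes.map((c) => ({ json: c }));
```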
by InfraNodus
This template can be used to generate research questions from PDF documents (e.g. research papers, market reports) based on the content gaps found in the text, using the InfraNodus GraphRAG knowledge graph representation. Simply upload several PDF files (research papers, corporate or market reports, etc.) and generate a research question / AI prompt in seconds.

The template is useful for:

- generating research questions
- generating AI prompts that drive research further
- finding blind spots in any discourse and generating ideas that address them
- avoiding the generic bias of LLM models and focusing on what's important in your particular context

## Using Content Gaps for Generating Research Questions

Knowledge graphs represent any text as a network: the main concepts are the nodes, their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning and also the main topical clusters that they connect.

Naturally, some of the clusters will be disconnected and will have gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but that are not very well connected. Addressing those gaps can help you see which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: builds the structure, finds the gaps, then uses the built-in AI to generate research questions that bridge those gaps.

## How it works

1. Step 1: First, you upload your PDF files using an online web form, which you can run from n8n or even make publicly available.
2. Steps 2-4: The documents are processed using the Code and PDF to Text nodes to extract plain text from them.
3. Step 5: This text is then sent to the InfraNodus GraphRAG node, which creates a knowledge graph, identifies structural gaps in this graph, and then uses the built-in AI to generate research questions / prompts.
4. Step 6: The ideas are then shown to the user in the same web form.

Optionally, you can hook this template into your own workflow and send the generated question to an InfraNodus expert or your own AI model / agent for further processing. If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow:

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key.
3. Add this key into the InfraNodus GraphRAG HTTP node(s) you use in this workflow (see the request sketch below).

You do not need any OpenAI keys for this to work. Optionally, you can change the settings in Step 4 of this workflow and force it to always use the biggest gap it identifies.

## Requirements

- An InfraNodus account and API key

Note: an OpenAI key is not required. You will have direct access to the InfraNodus AI with the API key.

## Customizing this workflow

You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).
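Here is a minimal sketch of what the InfraNodus GraphRAG HTTP node call could look like. The Bearer-key authorization comes from the setup steps above; the endpoint path and body fields are assumptions for illustration, so follow the examples at https://infranodus.com/api-access for the exact request format.

```javascript
// Hedged sketch of an InfraNodus GraphRAG API call in n8n Code-node style.
// Assumption: the endpoint path and body fields below are illustrative; only
// the Bearer authorization scheme is taken from the template's setup notes.
const infranodusApiKey = "YOUR_INFRANODUS_API_KEY"; // from infranodus.com/api-access

const response = await fetch("https://infranodus.com/api/v1/graphAndStatements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${infranodusApiKey}`,
  },
  body: JSON.stringify({
    text: $json.extractedPdfText, // plain text produced by the PDF-to-Text step
    requestMode: "response",      // assumed: ask for AI output, not just the graph
  }),
});

return [{ json: await response.json() }];
```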
Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/

Also check the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow

There is also a video introduction to InfraNodus that explains how knowledge graphs and content gaps work.

For support and help with this workflow, please contact us at https://support.noduslabs.com
by Taiki
## Workflow Setup Guide

This workflow collects the most-viewed videos from specified YouTube channels and saves the data to a Google Sheet. Follow these steps to set it up.

### 1. Credentials Setup

- **Google Sheets:** You need a Google Sheets credential configured in your n8n instance. If you don't have one, go to the 'Credentials' section in n8n and add a new credential for Google Sheets.
- **YouTube API Key:** You need a YouTube Data API v3 key:
  1. Go to the Google Cloud Console.
  2. Create a new project or select an existing one.
  3. Go to 'APIs & Services' > 'Library' and enable the YouTube Data API v3.
  4. Go to 'APIs & Services' > 'Credentials', click 'Create Credentials', and choose 'API key'.
  5. Copy the generated API key.

### 2. Google Sheet Setup

You will need one Google Sheet with two separate sheets (tabs) inside it.

**Input Sheet** (Use Template)

This sheet provides the list of YouTube channels to process. Create it with the following two columns:

- ChannelID: The ID of the YouTube channel (channel IDs usually begin with UC)
- video_num_to_get: The number of top videos to retrieve for that channel (e.g., 5)

**Output Sheet**

This sheet is where the results will be saved. The workflow will automatically append data to the following columns; you can create them beforehand or let the workflow do it:

- channelName
- title
- videoId
- videoLink

### 3. Node Configuration

- **Read Channel Info from Sheet:**
  - Select your Google Sheets credential.
  - Enter your Spreadsheet ID.
  - Enter the name of your Input Sheet.
- **Fetch Most-Viewed Videos via YouTube API:**
  - Replace YOUR_YOUTUBE_API_KEY with the API key you generated in Step 1 (a sketch of the underlying API call follows below).
- **Append Video Details to Sheet:**
  - Select your Google Sheets credential.
  - Enter your Spreadsheet ID (the same one as before).
  - Enter the name of your Output Sheet.
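As referenced above, here is a sketch of the YouTube Data API v3 request behind the fetch step. The search endpoint and its parameters are standard YouTube Data API v3; whether the template's node uses exactly this call is an assumption.

```javascript
// Sketch of the YouTube Data API v3 request behind "Fetch Most-Viewed Videos".
// The endpoint and parameters are standard; the mapping to sheet columns
// mirrors the Output Sheet described above.
const params = new URLSearchParams({
  part: "snippet",
  channelId: $json.ChannelID,                  // from the Input Sheet
  order: "viewCount",                          // most-viewed first
  type: "video",
  maxResults: String($json.video_num_to_get),  // e.g., 5
  key: "YOUR_YOUTUBE_API_KEY",
});

const res = await fetch(`https://www.googleapis.com/youtube/v3/search?${params}`);
const data = await res.json();

// Map each result to the Output Sheet columns.
return data.items.map((item) => ({
  json: {
    channelName: item.snippet.channelTitle,
    title: item.snippet.title,
    videoId: item.id.videoId,
    videoLink: `https://www.youtube.com/watch?v=${item.id.videoId}`,
  },
}));
```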
by Rex Lui
Easily generate QR codes from any URL! This workflow lets users submit a URL via a simple form and instantly receive a downloadable QR code image—perfect for quick sharing or promotions. Setup is fast and user-friendly, so you'll be up and running in minutes!

## 🚀 How it works

1. The end user submits a URL through a simple online form.
2. The workflow automatically sends the submitted URL to a QR code generation API (sketched below).
3. The user receives a downloadable QR code image corresponding to their URL.

## ⚙️ Setup instructions

1. Import the workflow: click "Import from JSON" in your n8n environment and paste the provided workflow JSON.
2. Click "Save" and activate the workflow.
3. Double-click the "On form submission" node to obtain the production URL.
4. You can now use this URL to generate QR codes.
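The template doesn't name the QR generation API it calls; as an illustration, a free public endpoint such as goqr.me's api.qrserver.com behaves like this. The form field name is also an assumption.

```javascript
// Illustrative QR generation call. The template doesn't specify which API it
// uses; api.qrserver.com is one public example with this query format.
const targetUrl = $json.url; // assumed field name for the submitted form URL

const qrImageUrl =
  "https://api.qrserver.com/v1/create-qr-code/?" +
  new URLSearchParams({ data: targetUrl, size: "300x300" });

// A downstream HTTP Request node (response format: "File") can fetch this URL
// and hand the PNG back to the user as a downloadable image.
return [{ json: { qrImageUrl } }];
```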
by Yaron Been
🧨 VIP Radar: Instantly Spot & Summarize High-Value Shopify Orders with AI + Slack Alerts

Automatically detect when a new Shopify order exceeds $200, fetch the customer's purchase history, generate an AI-powered summary, and alert your team in Slack—so no VIP goes unnoticed.

## 🛠️ Workflow Overview

| Feature | Description |
|------------------------|-----------------------------------------------------------------------------|
| Trigger | Shopify "New Order" webhook |
| Conditional Check | Filters for orders > $200 |
| Data Enrichment | Pulls full order history for the customer from Shopify |
| AI Summary | Uses OpenAI to summarize buying behavior |
| Notification | Sends detailed alert to Slack with name, order total, and customer insights |
| Fallback | Ignores low-value orders and terminates flow |

## 📘 What This Workflow Does

This automation monitors your Shopify store and reacts to any high-value order (over $200). When triggered:

1. It fetches all past orders of that customer,
2. Summarizes the history using OpenAI,
3. Sends a full alert with context to your Slack channel.

No more guessing who's worth a closer look. Your team gets instant insights, and your VIPs get the attention they deserve.

## 🧩 Node-by-Node Breakdown

🔔 **1. Trigger: New Shopify Order**
- **Type**: Shopify Trigger
- **Event**: orders/create
- **Purpose**: Starts workflow on new order
- **Pulls**: Order total, customer ID, name, etc.

🔣 **2. Set: Convert Order Total to Number**
- Ensures the total_price is treated as a number for comparison.

❓ **3. If: Is Order > $200?**
- **Condition**: $json.total_price > 200
- **Yes** → Continue
- **No** → End workflow

🔗 **4. HTTP: Fetch Customer Order History**
- Uses the Shopify Admin API to retrieve all orders from this customer (see the request sketch below).
- Requires your Shopify access token.

🧾 **5. Set: Convert Orders Array to String**
- Formats the order data so it's prompt-friendly for OpenAI.

🧠 **6. LangChain Agent: Summarize Order History**
- **Prompt**: "Summarize the customer's order history for Slack. Here is their order data: {{ $json.orders }}"
- **Model**: GPT-4o Mini (customizable)

📨 **7. Slack: Send VIP Alert**
- Sends a rich message to a Slack channel. Includes:
  - Customer name
  - Order value
  - Summary of past behavior

🧱 **8. No-Op (Optional)**
- Used to safely end the workflow if the order is not high-value.

## 🔧 How to Customize

| What | How |
|--------------------------|----------------------------------------------------------------------|
| Order threshold | Change 200 in the If node |
| Slack channel | Update channelId in the Slack node |
| AI prompt style | Edit text in LangChain Agent node |
| Shopify auth token | Replace shpat_abc123xyz... with your actual private token |

## 🚀 Setup Instructions

1. Open the n8n editor.
2. Go to Workflows → Import → Paste JSON.
3. Paste this workflow JSON.
4. Replace your Shopify token and Slack credentials.
5. Save and activate.
6. Place a test order in Shopify to watch it work.

## 💡 Real-World Use Cases

- 🎯 Notify the sales team when a potential VIP buys
- 🛎️ Prep support reps with customer history
- 📈 Detect repeat buyers and upsell opportunities

## 🔗 Resources & Support

- 👨💻 Creator: Yaron Been
- 📺 YouTube: NoFluff with Yaron Been
- 🌐 Website: https://nofluff.online
- 📩 Contact: Yaron@nofluff.online

## 🏷️ Tags

#shopify, #openai, #slack, #vip-customers, #automation, #n8n, #workflow, #ecommerce, #customer-insights, #ai-summaries, #gpt4o
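For step 4, here is a sketch of the kind of Shopify Admin REST call involved. The customers/{id}/orders.json endpoint and the X-Shopify-Access-Token header are standard Admin API conventions; the store domain, API version, and token shown are placeholders.

```javascript
// Hedged sketch of fetching a customer's order history from the Shopify Admin
// API. Store domain, API version, and token are placeholders.
const shop = "your-store.myshopify.com";
const customerId = $json.customer.id; // set by the Shopify order trigger

const res = await fetch(
  `https://${shop}/admin/api/2024-01/customers/${customerId}/orders.json?status=any`,
  {
    headers: {
      "X-Shopify-Access-Token": "shpat_abc123xyz...", // replace with your token
    },
  }
);
const { orders } = await res.json();

// Downstream nodes stringify this array so it can be embedded in the AI prompt.
return [{ json: { orders } }];
```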
by Jonathan
You can still use an app in a workflow even if n8n doesn't have a dedicated node or operation for it. With the HTTP Request node, it is possible to call any API endpoint and use the incoming data in your workflow.

Main use cases:

- Connect with apps and services that n8n doesn't have an integration with
- Web scraping

How it works

This workflow can be divided into three branches, each serving a distinct purpose:

1. Splitting into Items (HTTP Request - Get Mock Albums): The workflow initiates with a manual trigger (On clicking 'execute'). It performs an HTTP request to retrieve mock albums data from https://jsonplaceholder.typicode.com/albums. The obtained data is split into items using the Item Lists node, facilitating easier management.

2. Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract): Another branch of the workflow fetches a random Wikipedia page using an HTTP request to https://en.wikipedia.org/wiki/Special:Random. The HTML Extract node extracts the article title from the fetched Wikipedia page.

3. Handling Pagination: The final branch deals with handling pagination for a GitHub API request. It sends an HTTP request to https://api.github.com/users/that-one-tom/starred, with parameters like the page number and items per page dynamically set by the Set node. The workflow uses conditions (If - Are we finished?) to check if there are more pages to retrieve and increments the page number accordingly (Set - Increment Page). This process repeats until all pages are fetched, allowing for comprehensive data retrieval. A sketch of the same loop in plain code follows below.
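The pagination branch is easier to see as a plain loop. This sketch uses the same GitHub endpoint named above; per_page=30 mirrors GitHub's default page size, and the empty-page check plays the role of the "Are we finished?" If node.

```javascript
// The pagination branch expressed as a plain loop. The endpoint is the one
// used in the workflow; the stop condition mirrors "If - Are we finished?".
const allStars = [];
let page = 1;
const perPage = 30; // the "items per page" value set by the Set node

while (true) {
  const res = await fetch(
    `https://api.github.com/users/that-one-tom/starred?page=${page}&per_page=${perPage}`,
    { headers: { "User-Agent": "n8n-pagination-demo" } } // GitHub requires a User-Agent
  );
  const items = await res.json();
  allStars.push(...items);

  // A short (or empty) page means there is nothing left to fetch.
  if (items.length < perPage) break;
  page += 1; // "Set - Increment Page"
}

return allStars.map((repo) => ({ json: repo }));
```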
by phil
This workflow automates voice reminders for upcoming appointments by generating a professional audio message and sending it to clients via email with the voice file attached. It integrates Google Calendar to track appointments, ElevenLabs to generate high-quality voice messages, and Gmail to deliver them efficiently.

## Who Needs Automated Voice Appointment Reminders?

This automated voice appointment reminder system is ideal for businesses that rely on scheduled appointments. It helps reduce no-shows, improve client engagement, and streamline communication.

- Medical Offices & Clinics – Ensure patients receive timely appointment reminders.
- Real Estate Agencies – Keep potential buyers and renters informed about property visits.
- Service-Based Businesses – Perfect for salons, consultants, therapists, and coaches.
- Legal & Financial Services – Help clients remember important meetings and consultations.

If your business depends on scheduled appointments, this workflow saves time and enhances client satisfaction.

## 🚀 Why Use This Workflow?

- Ensures clients receive timely reminders.
- Reduces appointment no-shows and scheduling issues.
- Automates the process with a personalized voice message.

## Step-by-Step: How This Workflow Automates Voice Reminders

1. **Trigger the Workflow** – The system runs manually or on a schedule to check upcoming appointments in Google Calendar.
2. **Retrieve Appointment Data** – It fetches event details (client name, time, and location) from Google Calendar. The workflow uses the summary, start.dateTime, location, and attendees[0].email fields from Google Calendar to personalize and send the voice reminders.
3. **Generate a Voice Reminder** – Using ElevenLabs, the workflow converts the appointment details into a natural-sounding voice message (see the sketch after this section).
4. **Send via Email** – The generated audio file is attached to an email and sent to the client as a reminder.

## Customization: Tailor the Workflow to Your Business Needs

- **Adjust Trigger Frequency** – Modify the scheduling to run daily, hourly, or at specific intervals.
- **Customize Voice Message Format** – Change the script structure and voice tone to match your business needs.
- **Change Notification Method** – Instead of email, integrate SMS or WhatsApp for delivery.

## 🔑 Prerequisites

- **Google Calendar Access** – Ensure you have access to the calendar with scheduled appointments.
- **ElevenLabs API Key** – Required for generating voice messages (you can start for free).
- **Gmail API Access** – Needed for sending reminder emails.
- **n8n Setup** – The workflow runs on an n8n instance (self-hosted or cloud).

## 🚀 Step-by-Step Installation & Setup

1. **Set Up Google Calendar API**
   - Go to the Google Cloud Console.
   - Create a new project and enable the Google Calendar API.
   - Generate OAuth 2.0 credentials and save them for n8n.
2. **Get an ElevenLabs API Key**
   - Sign up at ElevenLabs.
   - Retrieve your API key from the dashboard.
3. **Configure Gmail API**
   - Enable the Gmail API in Google Cloud Console.
   - Create OAuth credentials and authorize your email address for sending.
4. **Deploy n8n & Install the Workflow**
   - Install n8n (Installation Guide).
   - Add the required Google Calendar, ElevenLabs, and Gmail nodes.
   - Import or build the workflow with the correct credentials.
   - Test and fine-tune as needed.

⚠ Important: The LangChain Community node used in this workflow only works on self-hosted n8n instances. It is not compatible with n8n Cloud. Please ensure you are running a self-hosted instance before using this workflow.

## Summary

This workflow ensures a professional and seamless experience for your clients, keeping them informed and engaged. 🚀🔊

Phil | Inforeole
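To show what the voice-generation step involves, here is a hedged sketch of an ElevenLabs text-to-speech request. The /v1/text-to-speech/{voice_id} endpoint and xi-api-key header follow ElevenLabs' public API; the voice ID, model, and reminder wording are placeholders, and the calendar fields are the ones listed in step 2 above.

```javascript
// Hedged sketch of generating the voice reminder with ElevenLabs.
// Voice ID, model, and wording are placeholders; the endpoint and xi-api-key
// header follow ElevenLabs' public text-to-speech API.
const voiceId = "YOUR_VOICE_ID";
const reminderText =
  `Hello, this is a reminder for your appointment "${$json.summary}" ` +
  `at ${$json.location} on ${$json.start.dateTime}.`;

const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
  method: "POST",
  headers: {
    "xi-api-key": "YOUR_ELEVENLABS_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ text: reminderText, model_id: "eleven_multilingual_v2" }),
});

// The response body is the MP3 audio; the template attaches this file to the
// Gmail node. Returned here as base64 for a downstream node to convert.
const audio = Buffer.from(await res.arrayBuffer());
return [{ json: { audioBase64: audio.toString("base64") } }];
```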
by Aitor | 1Node
Elevate your Stripe workflows with an AI agent that intelligently, securely, and interactively handles essential Stripe data operations. Leveraging the Kimi K2 model via OpenRouter, this n8n template enables safe data retrieval, from fetching summarized financial insights to managing customer discounts, while strictly enforcing privacy, concise outputs, and operational boundaries.

## 🧾 Requirements

- Stripe: active Stripe account and an API key with read and write access
- n8n: deployed n8n instance (cloud or self-hosted)
- OpenRouter: active OpenRouter account with credit and an API key from OpenRouter

## 🔗 Useful Links

- Stripe
- n8n Stripe Credentials Setup
- OpenRouter

## 🚦 Workflow Breakdown

1. **Trigger: User Request** – The workflow initiates when an authenticated user sends a message in the chat trigger.
2. **AI Agent (Kimi K2 OpenRouter): Intent Analysis** – Determines whether the user wants to:
   - List customers, charges, or coupons
   - Retrieve the account's balance
   - Create a new coupon in Stripe
   It filters unsupported or unclear requests, explaining permissions or terminology as needed.
3. **Stripe Data Retrieval** – For data queries, the agent:
   - Only returns summarized, masked lists (e.g., last 10 transactions/customers)
   - Automatically masks or truncates sensitive details, such as card numbers
   - Never exposes or logs confidential information
4. **Coupon Creation** – When a coupon creation is requested, the agent:
   - Collects coupon parameters (discount, expiration, restrictions)
   - Clearly summarizes the action and requires explicit user confirmation before proceeding
   - Creates the coupon upon confirmation and replies with only the public-safe coupon details (see the sketch after this section)

## 🛡️ Privacy & Security

- **No data storage:** All responses are ephemeral; sensitive Stripe data is never retained.
- **Strict minimization:** Outputs are tightly scoped; only partial identifiers are shown, and only when necessary.
- **Retention rules enforced:** No logs, exports, or secondary storage of Stripe data.
- **Confirmation required:** Actions modifying Stripe (like coupon creation) always require the user to approve before execution.
- **Compliance-ready:** Aligned with Stripe and general data protection standards.

## ⏱️ Setup Steps

Setup time: 10–15 minutes

1. Add Stripe API credentials in n8n.
2. Add the OpenRouter API credentials in n8n and select your desired AI model to run the agent. In our template we selected Kimi K2 from Moonshot AI.

## ✅ Summary

This workflow template connects a privacy-prioritized AI agent (Kimi K2 via OpenRouter) with your Stripe account to enable:

- Fast, summarized access to customer, transaction, coupon, and balance data
- Secure, confirmed creation of discounts/coupons
- Complete adherence to authorization, privacy, and operational best practices

## 🙋♂️ Need Help?

Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
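As an illustration of the coupon-creation step, here is a sketch using Stripe's standard /v1/coupons endpoint (form-encoded, Bearer-authenticated). The discount values are examples; in the template, the agent gathers them from the chat and only proceeds after user confirmation.

```javascript
// Hedged sketch of the Stripe coupon creation the agent performs after user
// confirmation. Endpoint and parameters follow Stripe's REST API; the
// discount values here are examples only.
const stripeSecretKey = "YOUR_STRIPE_SECRET_KEY"; // from n8n credentials

const params = new URLSearchParams({
  percent_off: "20", // e.g., a 20% discount collected from the chat
  duration: "once",  // "once" | "repeating" | "forever"
  name: "VIP20",     // public-safe label returned to the user
});

const res = await fetch("https://api.stripe.com/v1/coupons", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${stripeSecretKey}`,
    "Content-Type": "application/x-www-form-urlencoded",
  },
  body: params,
});

const coupon = await res.json();
// Reply with public-safe details only — no account or customer identifiers.
return [{ json: { id: coupon.id, percent_off: coupon.percent_off, duration: coupon.duration } }];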
by Hendriekus
## Find OAuth URIs with AI Llama

### Overview

The AI agent identifies:

- Authorization URI
- Token URI
- Audience

### Methodology

Confidence scoring is utilized to assess the trustworthiness of extracted data:

- Score range: 0 < x ≤ 1
- Score granularity: 0.01 increments

### Model Details

Leverages the Wayfarer Large 70b Llama 3.3 model.

### How it works

This template is designed to assist users in obtaining OAuth2 settings using AI-powered insights. It is ideal for developers, IT professionals, or anyone working with APIs that require OAuth2 authentication. By leveraging the AI agent, users can simplify the process of extracting and validating key details such as the authorization_url, token_url, and audience.

### Set up instructions

1. **Configuration Nodes**
   - **Structured Output Node**: Parses the AI model's output using a predefined JSON schema. This ensures the data is structured for downstream processing.
   - **Code Node**: If the AI model's output does not match the required format, use the Code node to re-arrange and transform the data. An example snippet for this common scenario is sketched below.
2. **AI Model Prompt** – The prompt for the AI model includes:
   - A detailed structure and the objectives of the query.
   - Flexibility for the model to improvise when accurate results cannot be determined.
3. **Confidence Scoring**
   - The AI model assigns a confidence score (0 < x ≤ 1) to indicate the reliability of the extracted data.
   - Scores are provided in increments of 0.01 for granularity.

### Adaptability

Customize this template:

- Update the AI model prompt with details specific to your API or OAuth2 setup.
- Adjust the JSON schema in the Structured Output node to match the data format.
- Modify the Code node logic to suit the application's requirements.
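As promised in the setup instructions, here is one hedged sketch of the Code node's re-arranging step. It assumes the model wraps its answer in an output object with loosely named fields and that the downstream schema expects flat fields plus a confidence score; adjust the paths to your actual model response.

```javascript
// Hedged sketch of a Code node that reshapes the model output to match the
// Structured Output schema. Assumes the raw answer arrives under $json.output
// with loosely named fields — adjust the paths to your actual response.
const raw = $json.output ?? $json;

return [{
  json: {
    authorization_url: raw.authorization_uri ?? raw.authorization_url ?? null,
    token_url: raw.token_uri ?? raw.token_url ?? null,
    audience: raw.audience ?? null,
    // Clamp the confidence score into the documented (0, 1] range,
    // rounded to the 0.01 granularity the template specifies.
    confidence: Math.min(1, Math.max(0.01, Math.round((raw.confidence ?? 0.5) * 100) / 100)),
  },
}];
```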
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

## How It Works

The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events. It then analyzes this data to create timeline reports showing what happened to VMs during specified periods, defaulting to the last 90 days unless the user specifies otherwise. (A sketch of the kind of metrics query these tools wrap appears below.)

## Prerequisites

To use this template, you'll need:

- n8n instance (cloud or self-hosted)
- Azure subscription with virtual machines
- Microsoft Azure Monitor OAuth2 API credentials
- Google Gemini API credentials
- Proper Azure permissions to read VM data and activity logs

## Setup Instructions

1. Import the template into n8n.
2. Configure credentials:
   - Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs
   - Add Google Gemini API credentials
3. Update workflow parameters:
   - Open the "Set Common Variables" node
   - Replace <your azure subscription id here> with your actual Azure subscription ID
4. Configure triggers:
   - The chat trigger will automatically generate a webhook URL for receiving chat messages
   - No additional trigger configuration is needed
5. Test the setup to ensure it works.

## Security Considerations

Use the minimum required Azure permissions (Reader role on the subscription or resource groups). Store API credentials securely in the n8n credential store. The Azure Monitor API has rate limits, so avoid excessive concurrent requests. Chat sessions use session-based memory that persists during conversations but doesn't retain data between separate chat sessions.

## Extending the Template

You can add more Azure monitoring tools like disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
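For a sense of what one of the agent's tools does under the hood, here is a hedged sketch of an Azure Monitor metrics request for a VM's CPU. The resource path and metrics endpoint follow Azure's REST conventions; the resource group, VM name, and api-version are placeholders to verify against current Azure Monitor docs.

```javascript
// Hedged sketch of the Azure Monitor metrics call behind the "performance
// metrics" tool. Resource group and VM name are placeholders; verify the
// api-version against current Azure Monitor documentation.
const azureAccessToken = "YOUR_OAUTH2_ACCESS_TOKEN"; // from n8n credentials

const resourceUri =
  "/subscriptions/<your azure subscription id here>" +
  "/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm";

const params = new URLSearchParams({
  "api-version": "2018-01-01",
  metricnames: "Percentage CPU",
  timespan: "2024-01-01T00:00:00Z/2024-04-01T00:00:00Z", // e.g., the last 90 days
  interval: "PT1H",       // hourly data points
  aggregation: "Average",
});

const res = await fetch(
  `https://management.azure.com${resourceUri}/providers/Microsoft.Insights/metrics?${params}`,
  { headers: { Authorization: `Bearer ${azureAccessToken}` } }
);

return [{ json: await res.json() }];
```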