by Rex Lui
Easily generate QR codes from any URL! This workflow lets users submit a URL via a simple form and instantly receive a downloadable QR code image—perfect for quick sharing or promotions. Setup is fast and user-friendly, so you'll be up and running in minutes!

🚀 How it works
- The end user submits a URL through a simple online form.
- The workflow automatically sends the submitted URL to a QR code generation API (see the sketch below).
- The user receives a downloadable QR code image corresponding to their URL.

⚙️ Setup instructions
1. Import Workflow: Click "Import from JSON" in your n8n environment and paste the provided workflow JSON.
2. Click "Save" and activate the workflow.
3. Double-click the "On form submission" node to obtain the production URL.
4. You may now use this URL to generate QR codes.
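As a rough illustration of step 2, here is what the HTTP call behind the workflow can look like in plain TypeScript. The template does not name the QR service it uses, so this sketch assumes the free goqr.me endpoint (api.qrserver.com); substitute whichever API your imported workflow actually calls.

```typescript
// Sketch only: the QR service here (api.qrserver.com) is an assumption,
// since the template does not name the API it calls.
async function fetchQrCode(url: string): Promise<Uint8Array> {
  const endpoint =
    "https://api.qrserver.com/v1/create-qr-code/" +
    `?size=300x300&data=${encodeURIComponent(url)}`;
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`QR API returned ${res.status}`);
  // The API responds with the QR code as a PNG image body.
  return new Uint8Array(await res.arrayBuffer());
}
```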
by Jonathan
You can still use an app in a workflow even if n8n doesn't have a node or an existing operation for it. With the HTTP Request node, it is possible to call any API endpoint and use the incoming data in your workflow.

Main use cases:
- Connect with apps and services that n8n doesn't have an integration for
- Web scraping

How it works
This workflow can be divided into three branches, each serving a distinct purpose:
1. Splitting into Items (HTTP Request - Get Mock Albums): The workflow initiates with a manual trigger (On clicking 'execute'). It performs an HTTP request to retrieve mock albums data from "https://jsonplaceholder.typicode.com/albums". The obtained data is split into items using the Item Lists node, facilitating easier management.
2. Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract): Another branch fetches a random Wikipedia page using an HTTP request to "https://en.wikipedia.org/wiki/Special:Random". The HTML Extract node then extracts the article title from the fetched page.
3. Handling Pagination (GitHub API request): The final branch sends an HTTP request to "https://api.github.com/users/that-one-tom/starred", with parameters like the page number and items per page dynamically set by the Set node. The workflow uses conditions (If - Are we finished?) to check whether there are more pages to retrieve and increments the page number accordingly (Set - Increment Page). This repeats until all pages are fetched, allowing for comprehensive data retrieval; the same loop is sketched in plain code below.
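For reference, here is roughly what the third branch's pagination loop does, written as a standalone TypeScript sketch rather than as n8n nodes. The per-page size and request header are assumptions.

```typescript
// A minimal sketch of the pagination logic the third branch implements:
// keep requesting pages until a short page signals "Are we finished?".
async function fetchAllStarred(user: string): Promise<unknown[]> {
  const perPage = 30; // assumed page size; GitHub's default
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(
      `https://api.github.com/users/${user}/starred?page=${page}&per_page=${perPage}`,
      { headers: { Accept: "application/vnd.github+json" } }
    );
    const items = (await res.json()) as unknown[];
    all.push(...items);
    // A short page means there are no more pages to retrieve.
    if (items.length < perPage) break;
  }
  return all;
}
```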
by Aitor | 1Node
Elevate your Stripe workflows with an AI agent that intelligently, securely, and interactively handles essential Stripe data operations. Leveraging the Kimi K2 model via OpenRouter, this n8n template enables safe data retrieval, from fetching summarized financial insights to managing customer discounts, while strictly enforcing privacy, concise outputs, and operational boundaries.

🧾 Requirements
- Stripe: Active Stripe account; an API key with read and write access.
- n8n: Deployed n8n instance (cloud or self-hosted).
- OpenRouter: Active OpenRouter account with credit; an API key from OpenRouter.

🔗 Useful Links
- Stripe
- n8n Stripe Credentials Setup
- OpenRouter

🚦 Workflow Breakdown
1. Trigger: User Request. The workflow initiates when an authenticated user sends a message in the chat trigger.
2. AI Agent (Kimi K2 via OpenRouter): Intent Analysis. Determines whether the user wants to:
   - List customers, charges, or coupons
   - Retrieve the account's balance
   - Create a new coupon in Stripe
   It filters unsupported or unclear requests, explaining permissions or terminology as needed.
3. Stripe Data Retrieval. For data queries, the agent:
   - Only returns summarized, masked lists (e.g., the last 10 transactions/customers)
   - Automatically masks or truncates sensitive details, such as card numbers
   - Never exposes or logs confidential information
4. Coupon Creation. When a coupon creation is requested, the agent:
   - Collects coupon parameters (discount, expiration, restrictions)
   - Clearly summarizes the action and requires explicit user confirmation before proceeding
   - Creates the coupon upon confirmation (see the sketch below) and replies with only the public-safe coupon details

🛡️ Privacy & Security
- **No data storage:** All responses are ephemeral; sensitive Stripe data is never retained.
- **Strict minimization:** Outputs are tightly scoped; only partial identifiers are shown, and only when necessary.
- **Retention rules enforced:** No logs, exports, or secondary storage of Stripe data.
- **Confirmation required:** Actions that modify Stripe (like coupon creation) always require the user to approve before execution.
- **Compliance-ready:** Aligned with Stripe and general data protection standards.

⏱️ Setup Steps
Setup time: 10–15 minutes
1. Add Stripe API credentials in n8n.
2. Add the OpenRouter API credentials in n8n and select your desired AI model to run the agent. In our template we selected Kimi K2 from Moonshot AI.

✅ Summary
This workflow template connects a privacy-prioritized AI agent (Kimi K2 via OpenRouter) with your Stripe account to enable:
- Fast, summarized access to customer, transaction, coupon, and balance data
- Secure, confirmed creation of discounts/coupons
- Complete adherence to authorization, privacy, and operational best practices

🙋‍♂️ Need Help?
Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
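To make the coupon-creation step concrete, here is a hedged sketch of the underlying Stripe call. The parameter names follow Stripe's POST /v1/coupons API; the helper function, values, and coupon name are illustrative only.

```typescript
// A sketch of the coupon-creation call performed after user confirmation.
// Discount, duration, and name would come from the chat conversation.
async function createCoupon(apiKey: string): Promise<{ id: string }> {
  const body = new URLSearchParams({
    percent_off: "20",  // discount collected from the user (illustrative)
    duration: "once",   // or "repeating" / "forever"
    name: "SPRING20",   // public-safe label returned to the chat
  });
  const res = await fetch("https://api.stripe.com/v1/coupons", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body,
  });
  if (!res.ok) throw new Error(`Stripe returned ${res.status}`);
  return res.json();
}
```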
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

How It Works
The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events. It then analyzes this data to create timeline reports showing what happened to VMs during specified periods, defaulting to the last 90 days unless the user specifies otherwise. A sketch of the kind of metrics call the tools make appears at the end of this description.

Prerequisites
To use this template, you'll need:
- n8n instance (cloud or self-hosted)
- Azure subscription with virtual machines
- Microsoft Azure Monitor OAuth2 API credentials
- Google Gemini API credentials
- Proper Azure permissions to read VM data and activity logs

Setup Instructions
1. Import the template into n8n.
2. Configure credentials:
   - Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs
   - Add Google Gemini API credentials
3. Update workflow parameters:
   - Open the "Set Common Variables" node
   - Replace <your azure subscription id here> with your actual Azure subscription ID
4. Configure triggers:
   - The chat trigger will automatically generate a webhook URL for receiving chat messages
   - No additional trigger configuration is needed
5. Test the setup to ensure it works.

Security Considerations
Use the minimum required Azure permissions (Reader role on the subscription or resource groups). Store API credentials securely in the n8n credential store. The Azure Monitor API has rate limits, so avoid excessive concurrent requests. Chat sessions use session-based memory that persists during a conversation but doesn't retain data between separate chat sessions.

Extending the Template
You can add more Azure monitoring tools like disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
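For illustration, here is roughly the Azure Monitor REST call one of the agent's metric tools makes. The endpoint and api-version follow Azure's published Monitor metrics API; the token, resource ID, and parameter values are placeholders, so treat this as a sketch rather than the template's exact request.

```typescript
// A sketch of fetching CPU metrics for one VM via Azure Monitor.
// "Percentage CPU" is a standard Azure VM platform metric.
async function getVmCpuMetrics(token: string, vmResourceId: string) {
  const params = new URLSearchParams({
    "api-version": "2018-01-01",
    metricnames: "Percentage CPU",
    timespan: "P90D",    // the template defaults to the last 90 days
    interval: "P1D",     // one data point per day (illustrative)
    aggregation: "Average",
  });
  const url =
    `https://management.azure.com${vmResourceId}` +
    `/providers/microsoft.insights/metrics?${params}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```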
by Jimleuk
Note: This template only works for self-hosted n8n.

This n8n template demonstrates how to use the Langchain Code node to track token usage and cost for every LLM call. This is useful if your templates handle multiple clients or customers and you need a cheap and easy way to capture how much of your AI credits they are using.

How it works
- In our mock AI service, we're offering a data conversion API to convert resume PDFs into JSON documents.
- A form trigger is used to allow for PDF upload, and the file is parsed using the Extract from File node.
- An Edit Fields node is used to capture additional variables to send to our log.
- Next, we use the Information Extractor node to organise the resume data into the given JSON schema.
- The LLM subnode attached to the Information Extractor is a custom one we've built using the Langchain Code node.
- With our custom LLM subnode, we're able to capture the usage metadata using lifecycle hooks (sketched below).
- We've also attached a Google Sheets tool to our LLM subnode, allowing us to send our usage metadata to a Google Sheet.
- Finally, we demonstrate how you can aggregate from the Google Sheet to understand how many AI tokens, and how much cost, your clients are liable for.

Check out the example Client Usage Log: https://docs.google.com/spreadsheets/d/1AR5mrxz2S6PjAKVM0edNG-YVEc6zKL7aUxHxVcffnlw/edit?usp=sharing

How to use
- **SELF-HOSTED N8N ONLY** - the Langchain Code node is only available in the self-hosted version of n8n. It is not available in n8n cloud.
- The LLM subnode can only be attached to non-"AI Agent" nodes: the Basic LLM node, Information Extractor, Question & Answer Chain, Sentiment Analysis, Summarization Chain, and Text Classifier.

Requirements
- Self-hosted version of n8n
- OpenAI for the LLM
- Google Sheets to store usage metadata

Customising this template
- Bring the custom LLM subnode into your own templates! In many cases, it can be a drop-in replacement for the regular OpenAI subnode.
- Not using Google Sheets? Try other databases or an HTTP call to pipe usage into your CRM.
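Here is a minimal standalone sketch of the lifecycle-hook idea, using LangChain JS callbacks. Inside the template this logic lives in the Langchain Code node and writes to the Google Sheets tool; this sketch logs to the console instead, and the model name is an example.

```typescript
// A sketch of capturing token usage via the handleLLMEnd lifecycle hook.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini", // example model; use whatever your template uses
  callbacks: [
    {
      handleLLMEnd(output) {
        // OpenAI responses expose token counts on llmOutput.tokenUsage.
        const usage = output.llmOutput?.tokenUsage;
        if (usage) {
          console.log("prompt tokens:", usage.promptTokens);
          console.log("completion tokens:", usage.completionTokens);
          // In the template, these values become a usage-log row
          // (client, tokens, estimated cost) sent to Google Sheets.
        }
      },
    },
  ],
});
```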
by Ranjan Dailata
Who is this for?
This workflow is designed for HR professionals, employer branding teams, talent acquisition strategists, market researchers, and business intelligence analysts who want to monitor, understand, and act upon employee sentiment and company perception on Glassdoor. It's ideal for organizations that value real-time feedback, are tracking employer brand perception, or need summarized insights for leadership reporting without sifting through thousands of raw reviews.

What problem is this workflow solving?
Manually reviewing and analyzing Glassdoor reviews is tedious, subjective, and not scalable, especially for larger companies or those with many subsidiaries. This workflow:
- Automates review collection by making a Glassdoor company request via the Bright Data Web Scraper API.
- Uses Google Gemini to summarize the content.
- Sends an actionable summary to HR dashboards, leadership teams, or alert systems via a webhook notification.

What this workflow does
1. Makes an HTTP request to Glassdoor via the Bright Data Web Scraper API.
2. Polls Bright Data for completion of the request (sketched in code below).
3. Downloads the Glassdoor response when a new snapshot is ready.
4. Sends the prompt to Google Gemini for summarization.
5. Delivers the summarized insights (strengths, weaknesses, sentiment, patterns) to a configured webhook or dashboard endpoint.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. A Google Gemini API key (or access through Vertex AI or a proxy).
5. A webhook or endpoint to receive the summary (e.g., Slack, Notion, or a custom HR dashboard).

How to customize this workflow to your needs
- Change the summary focus by updating the summarization methods and prompts in the "Summarization of Glassdoor Response" node to extract specific insights: cultural feedback, leadership issues, compensation comments, exit motivation.
- Update the "HTTP Request to Glassdoor" node with the specific Glassdoor company information you are looking for.
- Format the output to produce a customized summary in Markdown or HTML for rich delivery.
- Integrate with HR systems (BambooHR, Workday, SAP SuccessFactors via API) or with Google Sheets or Airtable.
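As a rough sketch of the poll-then-download pattern in steps 2 and 3, the loop below waits for a snapshot to become ready before fetching it. The endpoint paths, status value, and polling interval are assumptions based on Bright Data's dataset API and should be verified against their current docs.

```typescript
// Sketch: poll for snapshot completion, then download the result.
// Paths and the "ready" status are assumptions; check Bright Data docs.
async function waitForSnapshot(token: string, snapshotId: string) {
  const headers = { Authorization: `Bearer ${token}` };
  for (;;) {
    const progress = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
      { headers }
    ).then((r) => r.json());
    if (progress.status === "ready") break;
    // Wait 30 seconds between polls, mirroring the workflow's wait node.
    await new Promise((resolve) => setTimeout(resolve, 30_000));
  }
  return fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`,
    { headers }
  ).then((r) => r.json());
}
```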
by ikbendion
Reddit Poster to Discord
This workflow checks Reddit every 15 minutes for new posts and sends selected posts to a Discord channel via webhook.

Flow Overview:
1. Schedule Trigger: Runs every 15 minutes.
2. Fetch Latest Posts: Retrieves up to 3 new posts from any subreddit.
3. Filter Posts: Skips moderator or announcement posts based on author ID.
4. Fetch Full Post Data: Gets full details for the remaining post.
5. Extract Image URL: Parses the post to extract a direct image link.
6. Send to Discord: Sends the post title, image, and link to a Discord webhook (sketched below).

Setup Notes:
- Create a Reddit app and connect credentials in n8n.
- Add your subreddit name to both Reddit nodes.
- Connect a Discord webhook for posting.
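For reference, here is a minimal sketch of the final "Send to Discord" step: posting an embed with the post title, image, and link to a Discord webhook. The helper name and field values are placeholders.

```typescript
// Sketch: deliver a Reddit post to Discord as a webhook embed.
async function postToDiscord(
  webhookUrl: string,
  post: { title: string; permalink: string; imageUrl: string }
) {
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      embeds: [
        {
          title: post.title,
          url: `https://reddit.com${post.permalink}`, // link back to the post
          image: { url: post.imageUrl },              // direct image link
        },
      ],
    }),
  });
}
```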
by Angel Menendez
Who is this for?
This workflow is perfect for HR teams, recruiters, and hiring platforms that need to automate the extraction of key candidate details—like name, email, skills, and education—from resume files submitted in various formats.

What problem does this solve?
Manually reviewing and extracting structured data from resumes is time-consuming and error-prone. This automation eliminates that bottleneck, standardizing candidate data for seamless integration into CRMs, applicant tracking systems, or Google Sheets.

What this workflow does
This n8n template listens for uploaded resume files, detects their format (PDF, DOC, TXT, CSV, etc.), and automatically extracts the raw text using n8n's built-in file extraction tools. The extracted text is then parsed by an OpenAI-powered agent that returns structured fields such as:
- Full Name
- Email Address
- Skill Keywords
- Education Details
A sketch of the resulting shape is shown below. Optionally, you can push the structured output to Google Sheets (node included, currently disabled).

Setup
1. Clone this workflow into your n8n instance.
2. Enable the When chat message received trigger if using n8n chat.
3. Provide your OpenAI credentials and enable the LangChain Agent node.
4. (Optional) Connect Google Sheets by authenticating with your Google account and filling in your target document and sheet.
Watch the setup and demo video here: 🎥 https://youtu.be/2SUPiNmLWdA

How to customize
- Modify the OpenAI system message to extract different fields (e.g., phone number, LinkedIn).
- Replace the Google Sheets node with a webhook to push results to your ATS.
- Add filters to limit accepted file types or maximum file size.

> ⚠️ This template is designed to be secure. It uses credentials stored in the n8n credential manager—no hardcoded secrets required.
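To make the output concrete, here is a sketch of the structured shape the agent can return. The field names mirror the list above, but the exact schema depends on the system message you configure, so treat this interface as illustrative.

```typescript
// Illustrative shape of the parsed resume the OpenAI agent returns.
interface ParsedResume {
  fullName: string;   // e.g., "Jane Doe"
  email: string;      // e.g., "jane.doe@example.com"
  skills: string[];   // e.g., ["Python", "SQL", "Project Management"]
  education: {
    institution: string;
    degree: string;
    year?: number;    // optional when the resume omits dates
  }[];
}
```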
by Agentick AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This n8n template automates candidate outreach, call transcription, and structured feedback capture for HR teams and recruiters. It triggers on a new candidate row added in a Google Sheet, initiates a call using Vapi.ai, processes the transcript using Google Gemini, extracts key information like CTC, experience, and notice period, and then updates the same Google Sheet with the parsed insights. It is ideal for recruiters or HR teams conducting high-volume candidate outreach who want to scale initial data collection using automated voice bots and AI transcription analysis.

How it works
1. Trigger: Listens for new rows added to a Google Sheet (e.g., a new candidate lead).
2. Call Initiation: Uses Vapi.ai to make a phone call to the candidate using an assistant bot (sketched below).
3. Transcript Retrieval: After the call, fetches the conversation transcript from the Vapi API.
4. AI Transcript Analysis: Google Gemini parses the transcript and extracts structured fields like:
   - Work experience
   - Current & expected CTC
   - Notice period & negotiability
   - Work preferences and location
5. Data Mapping: Extracted insights are mapped to structured JSON fields.
6. Google Sheet Update: The same row in the source sheet is updated with the collected information.

Use Cases
- Pre-screening calls for job applicants
- Collecting missing candidate information asynchronously
- Replacing manual HR data entry with AI-powered automation
- Smart CRM updates from voice interactions

Requirements
Before you run this workflow, ensure the following:
- ✅ Google account with access to the Google Sheets API
- ✅ Vapi.ai account with: an assistant ID, a phone number ID, and an active API key
- ✅ Google Gemini API (via PaLM) enabled
- ✅ n8n version 1.40.0 or later with the relevant credentials configured

How to use
1. Import the workflow into n8n.
2. Set up your credentials for: Google Sheets Trigger, Google Sheets, Vapi.ai (add a Bearer token), and Google Gemini.
3. Replace the placeholder values for the Assistant ID, Phone number ID, and the Google Sheet ID and tab.
4. Start the workflow and add a row to the Google Sheet.
5. Wait for the automated call and let the AI extract and populate the data.

Customising this workflow
- Replace Google Gemini with OpenAI or Claude if preferred.
- Add sentiment analysis on the transcript using an LLM.
- Modify the sheet column structure to add additional fields.
- Add a filter node to skip candidates with incomplete phone numbers.
- Use a Webhook trigger instead of Google Sheets to integrate with job portals or an ATS.
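As a rough sketch of the call-initiation step, the request below shows what an outbound Vapi call can look like. The endpoint path and body shape are assumptions based on Vapi's public API and should be verified against their current docs; all IDs and the phone number are placeholders.

```typescript
// Sketch: start an outbound call to a candidate via Vapi.
// Endpoint and body shape are assumptions; IDs are placeholders.
async function startCandidateCall(apiKey: string, candidatePhone: string) {
  const res = await fetch("https://api.vapi.ai/call", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      assistantId: "<your assistant id>",
      phoneNumberId: "<your phone number id>",
      customer: { number: candidatePhone },
    }),
  });
  // The response includes a call id, used later to fetch the transcript.
  return res.json();
}
```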
by Jimleuk
This template is for self-hosted n8n instances only.

This n8n template demonstrates how to build a simple SQLite MCP server to perform local database operations as well as use it for business intelligence. This MCP example is based off an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite

How it works
- An MCP server trigger is used and connected to 5 tools: 2 Code node and 3 Custom Workflow.
- The 2 Code node tools use the sqlite3 library and run simple read-only queries, so the Code node tool can be used directly.
- The 3 custom workflow tools are used for select, insert, and update queries, as these are operations which require a bit more discretion. While it may be easier to allow the agent to write raw SQL queries, it is a little safer to allow only the parameters instead. The custom workflow tool lets us define this restricted schema for tool input, which we use to construct the SQL statement ourselves (see the sketch below).
- All 3 custom workflow tools trigger the same "Execute workflow" trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, we use our Code nodes to handle the select, insert, and update operations. The responses are then sent back to the MCP client.

How to use
This SQLite MCP server allows any compatible MCP client to manage a SQLite database by supporting select, create, and update operations. You will need to have a SQLite database available before you can use this server.
- Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- Try the following queries in your MCP client:
  - "Please create a table to store business insights and add the following..."
  - "What business insights do we have on current retail trends?"
  - "Who has contributed the most business insights in the past week?"

Requirements
- SQLite for the database.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

Customising this workflow
If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific purpose for business operations, e.g., confine the querying and editing to HR-only tables before providing access to people in that department.
Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
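Here is a sketch of the parameterized-statement idea behind the insert handler: the workflow builds the SQL itself, so the agent's input stays strictly data and can never inject arbitrary SQL. The table and column names are illustrative.

```typescript
// Sketch: a restricted insert built from structured tool parameters,
// using the same sqlite3 library as the template's Code nodes.
import sqlite3 from "sqlite3";

function insertInsight(dbPath: string, insight: string, author: string) {
  const db = new sqlite3.Database(dbPath);
  // Placeholders (?) keep the agent-supplied values as data, never as SQL.
  db.run(
    "INSERT INTO business_insights (insight, author) VALUES (?, ?)",
    [insight, author],
    function (err) {
      if (err) console.error(err);
      else console.log(`inserted row ${this.lastID}`);
    }
  );
  db.close();
}
```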
by darrell_tw
How it works
1. Fetch all workflows from your n8n instance.
2. Filter workflows that contain nodes with a modelId setting (sketched below).
3. Extract the node names, model IDs, model names, workflow names, and workflow URLs.
4. Save the extracted information into a connected Google Sheet.

Set up steps
1. Connect your n8n API credentials.
2. Connect your Google Sheets account.
3. Replace "Your n8n domain" with your actual domain URL.
4. Use this Google Sheet template to create a new sheet for results.
Setup typically takes 5 minutes. Be cautious: if you have over 100 workflows, performance may be impacted.

Notes
- Sticky notes inside the workflow provide extra guidance.
- This workflow clears old sheet data before writing new results.
- Make sure your n8n instance allows API access.

Result Example
Update: Originally the workflow didn't detect AI models used inside tool nodes. Now it's fixed!
Update 20250429: Supports 1.91.0 with opening nodes directly! Optimized the URL with the node id.
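For reference, here is a sketch of the fetch-and-filter logic in plain TypeScript. The /api/v1/workflows endpoint and X-N8N-API-KEY header are n8n's public REST API; the nested modelId shape is simplified and may vary by node type, so treat the field access as an assumption.

```typescript
// Sketch: scan every workflow's nodes for a modelId parameter and
// collect one row per model-bearing node, as written to the sheet.
async function auditModels(baseUrl: string, apiKey: string) {
  const res = await fetch(`${baseUrl}/api/v1/workflows`, {
    headers: { "X-N8N-API-KEY": apiKey },
  });
  const { data } = await res.json();
  const rows: Array<{
    workflow: string;
    node: string;
    modelId: string;
    url: string;
  }> = [];
  for (const wf of data) {
    for (const node of wf.nodes ?? []) {
      const modelId = node.parameters?.modelId;
      if (modelId) {
        rows.push({
          workflow: wf.name,
          node: node.name,
          // modelId may be a plain string or a resource-locator object.
          modelId: modelId.value ?? modelId,
          url: `${baseUrl}/workflow/${wf.id}`,
        });
      }
    }
  }
  return rows;
}
```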
by LukaszB
Crypto Price Alert – n8n Workflow
A simple and effective crypto alert system for anyone who wants to stay up to date with coin price changes — without refreshing charts all day. This workflow checks the current price of your chosen cryptocurrency (via CoinGecko) and sends you an alert on Discord if it goes above or below your target range. It's lightweight, easy to set up, and runs on autopilot.

What the Workflow Does
- Checks the live price of a selected coin using the CoinGecko API (sketched below).
- Compares it to the max/min prices you define manually.
- Decides if the price is too high or too low.
- Sends an alert message to Discord depending on the result.

How It Works
1. The flow is triggered manually or on a schedule (your choice).
2. It pulls the current price of the coin you set.
3. Compares that price with your min and max values.
4. Sends a "high" or "low" message to your Discord webhook.

Setup Steps
1. Enter your coin ID and price thresholds in the "Set Low and High" node.
2. Paste your Discord webhook URLs in the "Message High" and "Message Low" nodes.
3. Optional: Adjust the schedule trigger to run every X minutes/hours.
4. Run once manually to test — it takes under a minute.
Full instructions and config tips are in sticky notes inside the workflow.
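As a sketch of the check-and-compare logic, the snippet below queries CoinGecko's public simple-price endpoint and decides whether an alert is due. The coin id and thresholds are examples standing in for your "Set Low and High" values.

```typescript
// Sketch: fetch the current price and compare it against your thresholds.
async function checkPrice(): Promise<string | null> {
  const coinId = "bitcoin"; // example coin id
  const min = 55_000;       // example "low" threshold in USD
  const max = 70_000;       // example "high" threshold in USD
  const res = await fetch(
    `https://api.coingecko.com/api/v3/simple/price?ids=${coinId}&vs_currencies=usd`
  );
  const data = await res.json();
  const price: number = data[coinId].usd;
  if (price > max) return `🔺 ${coinId} is high: $${price}`;
  if (price < min) return `🔻 ${coinId} is low: $${price}`;
  return null; // within range, no alert sent
}
```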