by Jimleuk
This n8n template demonstrates how to build a simple Google Drive MCP server to search and retrieve the contents of files from Google Drive. This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive

How it works
* An MCP server trigger is used and connected to 1x Google Drive tool and 1x Custom Workflow tool.
* The Google Drive tool is set to perform a search on files within our Google Drive folder.
* The Custom Workflow tool downloads target files found in our drive and converts the binaries to their text representation. E.g. PDFs have only their text contents extracted and returned to the MCP client.

How to use
* This Google Drive MCP server allows any compatible MCP client to manage a personal or shared Google Drive. Simply select a drive or, for better control, specify a folder within the drive to scope the operations to.
* Connect your MCP client by following the n8n guidelines here (a sample Claude Desktop config is sketched at the end of this section): https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
* Try the following queries in your MCP client:
  * "Please help me search for last month's expense reports."
  * "What does the company policy document say about cancellations and refunds?"

Requirements
* Google Drive for documents.
* OpenAI for image and audio understanding.
* An MCP client or agent, such as Claude Desktop: https://claude.ai/download

Customising this workflow
* Add additional capabilities such as renaming, moving, and/or deleting files.
* Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
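The n8n guide linked above covers the Claude Desktop wiring in full. Purely as an assumption for illustration, a claude_desktop_config.json entry bridging Claude Desktop to a remote MCP server trigger might look like the following; the server name and URL are placeholders, and mcp-remote is one commonly used bridge package:

```json
{
  "mcpServers": {
    "n8n-google-drive": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance/mcp/google-drive/sse"]
    }
  }
}
```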
by Airtop
Recursive Web Scraping

Use Case
Automating web scraping with recursive depth is ideal for collecting content across multiple linked pages, perfect for content aggregation, lead generation, or research projects.

What This Automation Does
This automation reads a list of URLs from a Google Sheet, scrapes each page, stores the content in a document, and adds newly discovered links back to the sheet. It continues this process for a specified number of iterations based on the defined scraping depth.

Input Parameters:
* Seed URL: The starting URL to begin the scraping process. Example: https://example.com/
* Links must contain: Restricts the links followed to those that contain this specified string. Example: https://example.com/
* Depth: The number of iterations (layers of links) to scrape beyond the initial set. Example: 3

How It Works
1. Starts by reading the Seed URL from the Google Sheet.
2. Scrapes each page and saves its content to the specified document.
3. Extracts new links from each page that match the "Links must contain" string and appends them to the Google Sheet (a sketch of this filtering step follows this description).
4. Repeats steps 2–3 the number of times specified by Depth - 1.

Setup Requirements
* Airtop API Key (free to generate).
* Credentials set up for Google Docs (requires creating a project in the Google Cloud Console). Read how to.
* Credentials set up for Google Sheets.

Next Steps
* **Add Filtering Rules**: Filter which links to follow based on domain, path, or content type.
* **Combine with Scheduler**: Run this automation on a schedule to continuously explore newly discovered pages.
* **Export Structured Data**: Extend the process to store extracted data in a CSV or database for analysis.

Read more about website scraping for LLMs.
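The link-extraction step in (3) boils down to keeping only matching, not-yet-seen URLs. A minimal sketch of that filter as an n8n Code node, assuming the sheet rows arrive as input items with a `url` column and `pageLinks` stands in for links scraped from the current page:

```javascript
// Hypothetical sketch of the "extract and append new links" step.
const mustContain = 'https://example.com/'; // the "Links must contain" parameter
const known = new Set($input.all().map(item => item.json.url)); // URLs already in the sheet

const pageLinks = [
  'https://example.com/pricing',
  'https://example.com/blog/post-1',
  'https://twitter.com/example', // filtered out: doesn't match mustContain
];

// Keep only matching links that aren't queued yet; each becomes a new sheet row.
return pageLinks
  .filter(url => url.includes(mustContain) && !known.has(url))
  .map(url => ({ json: { url } }));
```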
by Akash Kankariya
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Overview
This n8n workflow template automates the process of monitoring Instagram comments and sending predefined responses based on specific comment keywords. It integrates Instagram's Graph API with Google Sheets to manage comment responses and maintains an interaction log for customer relationship management (CRM) purposes.

Workflow Components
The workflow consists of 9 main nodes organized into two primary sections:

Section 1: Webhook Verification
* Get Verification (Webhook node)
* Respond to Verification Message (Respond to Webhook node)

Section 2: Auto Comment Response
* Insta Update (Webhook node)
* Check if update is of comment? (Switch node)
* Comment if of other user (If node)
* Comment List (Google Sheets node)
* Send Message for Comment (HTTP Request node)
* Add Interaction in Sheet (CRM) (Google Sheets node)

Prerequisites and Setup Requirements

1. Meta/Facebook Developer Setup

Create Facebook App
> Action Items:
> - [ ] Navigate to Facebook Developers
> - [ ] Click "Create App" and select "Business" type
> - [ ] Configure the following products:
>   - Instagram Graph API
>   - Facebook Login for Business
>   - Webhooks

Required Permissions
Configure the following permissions in your Meta app:

| Permission | Purpose |
|------------|---------|
| instagram_basic | Read Instagram account profile info and media |
| instagram_manage_comments | Create, delete, and manage comments |
| instagram_manage_messages | Send and receive Instagram messages |
| pages_show_list | Access connected Facebook pages |

Access Token Generation
> Important Setup:
> - [ ] Use Facebook's Graph API Explorer
> - [ ] Generate a User Access Token with the required permissions
> - [ ] Important: Tokens expire periodically and need refreshing

2. Webhook Configuration

Setup Webhook URL
> Configuration Checklist:
> - [ ] In the Meta App Dashboard, navigate to Products → Webhooks
> - [ ] Subscribe to the Instagram object
> - [ ] Configure the webhook URL: your-n8n-domain/webhook/instagram
> - [ ] Set a verification token (use "test" or create a secure token)
> - [ ] Select webhook fields:
>   - comments - For comment notifications
>   - messages - For DM notifications (if needed)

Webhook Verification Process
The workflow handles Meta's webhook verification automatically (see the sketch at the end of this section):
* Meta sends a GET request with a hub.challenge parameter
* The workflow responds with the challenge value to confirm the subscription

3. Google Sheets Setup
Example: https://docs.google.com/spreadsheets/d/1ONPKJZOpQTSxbasVcCB7oBjbZcCyAm9gZ-UNPoXM21A/edit?usp=sharing

Create Response Management Sheet
Set up a Google Sheets document with the following structure:

Sheet 1 - Comment Responses:
| Column | Description | Example |
|--------|-------------|---------|
| Comment | Trigger keywords | "auto", "info", "help" |
| Message | Corresponding response message | "Thanks for your comment! We'll get back to you soon." |

Sheet 2 - Interaction Log:
| Column | Description | Purpose |
|--------|-------------|---------|
| Time | Timestamp of interaction | Track when interactions occur |
| User Id | Instagram user ID | Identify unique users |
| Username | Instagram username | Human-readable identification |
| Note | Additional notes or error messages | Debugging and analytics |

Built by akash@codescale.tech
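Meta's verification handshake is just an echo of the challenge parameter. A hedged sketch of the logic behind the two verification nodes, written as an n8n Code node; parameter names follow Meta's webhook spec, and the token value is whatever you configured in the dashboard:

```javascript
// Sketch of Meta's webhook verification as handled by the
// "Get Verification" / "Respond to Verification Message" nodes.
const VERIFY_TOKEN = 'test'; // must match the token set in the Meta dashboard
const query = $input.first().json.query; // GET query params from the Webhook node

if (query['hub.mode'] === 'subscribe' && query['hub.verify_token'] === VERIFY_TOKEN) {
  // Echo hub.challenge back so Meta confirms the subscription.
  return [{ json: { response: query['hub.challenge'] } }];
}
return [{ json: { response: 'verification failed' } }];
```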
by David Olusola
n8n Set Node Tutorial - Complete Guide

How It Works
This tutorial workflow teaches you everything about n8n's Set node through hands-on examples. The Set node is one of the most powerful tools in n8n - it allows you to create, modify, and transform data as it flows through your workflow.

What makes this tutorial special:
* **Progressive Learning**: Starts simple, builds to complex concepts
* **Interactive Examples**: Real working nodes you can modify and test
* **Visual Guidance**: Sticky notes explain every concept
* **Branching Logic**: Shows how Set nodes work in different workflow paths
* **Real Data**: Uses practical examples you'll encounter in automation

The workflow demonstrates 6 core concepts:
1. Basic data types (strings, numbers, booleans)
2. Expression syntax with {{ }} and $json references
3. Complex data structures (objects and arrays)
4. "Keep Only Set" option for clean outputs
5. Conditional data setting with branching logic
6. Data transformation and aggregation techniques

Setup Steps

Step 1: Import the Workflow
1. Copy the JSON from the code artifact above
2. Open your n8n instance in your browser
3. Navigate to the Workflows section
4. Click "Import from JSON" or the import button (usually a "+" or import icon)
5. Paste the JSON into the import dialog
6. Click "Import" to load the workflow
7. Save the workflow (Ctrl+S or click the Save button)

Step 2: Choose Your Starting Point

Option A: Default Tutorial Mode (Recommended for beginners)
* The workflow is ready to run as-is
* Uses a simple "Welcome" message as starting data
* Click "Execute Workflow" to begin

Option B: Rich Test Data Mode (Recommended for experimentation)
* Locate the nodes: Find "Start (Manual Trigger)" and "0. Test Data Input"
* Disconnect the default: Click the connection line between "Start (Manual Trigger)" → "1. Set Basic Values" and delete it
* Connect the test data: Drag from the "0. Test Data Input" output to the "1. Set Basic Values" input
* Execute: Click "Execute Workflow" to run with rich test data

Step 3: Execute and Learn
* Run the workflow: Click the "Execute Workflow" button
* Check outputs: Click on each node to see its output data
* Read the notes: Each sticky note explains what's happening
* Follow the flow: Data flows from left to right, top to bottom

Step 4: Experiment and Modify

Try These Experiments:

Change Basic Values:
* Click on "1. Set Basic Values"
* Modify user_age (try 20 vs 35)
* Change user_name to see how it propagates
* Execute and see the changes flow through

Test Conditional Logic:
* Set user_age to 20 → triggers the "Student Discount" path
* Set user_age to 30 → triggers the "Premium Access" path
* Watch how the workflow branches differently

Modify Expressions (a plain-JavaScript equivalent of these expressions appears at the end of this guide):
In "2. Set with Expressions", try changing:
* ={{ $json.score * 2 }} to ={{ $json.score * 3 }}
* ={{ $json.user_name }} Smith to ={{ $json.user_name }} Johnson

Complex Data Structures:
* In "3. Set Complex Data", modify the JSON structure
* Add new properties to the user_profile object
* Try nested expressions

Learning Path

Beginner Level (Nodes 1-2)
* **Focus**: Understanding basic Set operations
* **Learn**: Data types, static values, simple expressions
* **Time**: 10-15 minutes

Intermediate Level (Nodes 3-4)
* **Focus**: Complex data and output control
* **Learn**: Objects, arrays, the "Keep Only Set" option
* **Time**: 15-20 minutes

Advanced Level (Nodes 5-6)
* **Focus**: Conditional logic and data aggregation
* **Learn**: Branching workflows, merging data, complex expressions
* **Time**: 20-25 minutes

What Each Node Teaches

| Node | Concept | Key Learning |
|------|---------|--------------|
| 1. Set Basic Values | Data Types | String, number, boolean basics |
| 2. Set with Expressions | Dynamic Data | {{ }} syntax, $json references, $now functions |
| 3. Set Complex Data | Advanced Structures | Objects, arrays, nested properties |
| 4. Set Clean Output | Data Management | "Keep Only Set" for clean final outputs |
| 5a/5b. Conditional Sets | Branching Logic | Different data based on conditions |
| 6. Tutorial Summary | Data Aggregation | Combining and summarizing workflow data |

Pro Tips

Quick Wins:
* Always check node outputs after execution
* Use sticky notes as your learning guide
* Experiment with small changes first
* Copy nodes to try variations

Advanced Techniques:
* Use "Keep Only Set" for API responses
* Combine static and dynamic data in complex objects
* Leverage conditional paths for different user types
* Reference nested object properties with dot notation

Troubleshooting:
* If expressions don't work, check the {{ }} syntax
* Ensure field names match exactly (case-sensitive)
* Use the expression editor for complex logic
* Check that data types match your expectations

Next Steps After Tutorial
1. Create your own Set nodes in a new workflow
2. Practice with real data from APIs or databases
3. Build data transformation workflows for your specific use cases
4. Combine Set nodes with other n8n nodes like HTTP Request, Webhook, etc.
5. Explore advanced expressions using JavaScript functions

Congratulations! You now have the foundation to use Set nodes effectively in any n8n workflow. The Set node is truly the "Swiss Army knife" of n8n automation!
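The expression examples above map one-to-one onto plain JavaScript. If it helps to see them side by side, here is a hedged Code-node equivalent of what "2. Set with Expressions" does; the field names (score, user_name) come from the tutorial's test data:

```javascript
// Code-node equivalent of the Set-node expressions in this tutorial.
for (const item of $input.all()) {
  item.json.double_score = item.json.score * 2;          // ={{ $json.score * 2 }}
  item.json.full_name = `${item.json.user_name} Smith`;  // ={{ $json.user_name }} Smith
  item.json.processed_at = new Date().toISOString();     // ={{ $now }}
  // Dot notation reaches into nested objects, just like in expressions:
  // ={{ $json.user_profile.address.city }}
}
return $input.all();
```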
by Taiki
Workflow Setup Guide

This workflow collects the most-viewed videos from specified YouTube channels and saves the data to a Google Sheet. Follow these steps to set it up:

1. Credentials Setup
* Google Sheets: You need a Google Sheets credential configured in your n8n instance. If you don't have one, go to the 'Credentials' section in n8n and add a new credential for Google Sheets.
* YouTube API Key: You need a YouTube Data API v3 key.
  1. Go to the Google Cloud Console.
  2. Create a new project or select an existing one.
  3. Go to 'APIs & Services' > 'Library' and enable the YouTube Data API v3.
  4. Go to 'APIs & Services' > 'Credentials', click 'Create Credentials', and choose 'API key'.
  5. Copy the generated API key.

2. Google Sheet Setup
You will need one Google Sheet with two separate sheets (tabs) inside it.

Input Sheet (Use Template)
This sheet provides the list of YouTube channels to process.
* Required columns:
  * ChannelID: The ID of the YouTube channel (e.g., T7M3PpjBZzw).
  * video_num_to_get: The number of top videos to retrieve for that channel (e.g., 5).

Output Sheet
This sheet is where the results will be saved.
* Required columns: The workflow will automatically append data to the following columns. You can create them beforehand or let the workflow do it.
  * channelName
  * title
  * videoId
  * videoLink

3. Node Configuration
* Read Channel Info from Sheet:
  * Select your Google Sheets credential.
  * Enter your Spreadsheet ID.
  * Enter the name of your Input Sheet.
* Fetch Most-Viewed Videos via YouTube API (see the request sketch below):
  * Replace YOUR_YOUTUBE_API_KEY with the API key you generated in Step 1.
* Append Video Details to Sheet:
  * Select your Google Sheets credential.
  * Enter your Spreadsheet ID (the same one as before).
  * Enter the name of your Output Sheet.
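The YouTube Data API v3 exposes a search endpoint that can sort a channel's videos by view count. A sketch of the kind of request the HTTP node makes; the exact node configuration in the template may differ, and the channel ID here is illustrative:

```javascript
// Sketch: fetch a channel's most-viewed videos with the YouTube Data API v3.
const params = new URLSearchParams({
  part: 'snippet',
  channelId: 'UC_x5XG1OV2P6uZZ5FSM9Ttw', // ChannelID from the input sheet
  order: 'viewCount',                     // sort by views, most-viewed first
  type: 'video',
  maxResults: '5',                        // video_num_to_get from the input sheet
  key: 'YOUR_YOUTUBE_API_KEY',
});

const res = await fetch(`https://www.googleapis.com/youtube/v3/search?${params}`);
const data = await res.json();
for (const item of data.items ?? []) {
  console.log(item.snippet.title, `https://www.youtube.com/watch?v=${item.id.videoId}`);
}
```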
by Rex Lui
Easily generate QR codes from any URL! This workflow lets users submit a URL via a simple form and instantly receive a downloadable QR code image, perfect for quick sharing or promotions. Setup is fast and user-friendly, so you'll be up and running in minutes!

How it works
1. The end user submits a URL through a simple online form.
2. The workflow automatically sends the submitted URL to a QR code generation API (sketched below).
3. The user receives a downloadable QR code image corresponding to their URL.

Setup instructions
1. Import the workflow: click "Import from JSON" in your n8n environment and paste the provided workflow JSON.
2. Click "Save" and activate the workflow.
3. Double-click the "On form submission" node to obtain the production URL. You can now use this URL to generate QR codes.
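The description doesn't name the QR service the workflow calls, but the pattern is a single GET request that returns a PNG. As an assumption for illustration, here is what that call looks like against the public api.qrserver.com endpoint; the template's HTTP node may point at a different service:

```javascript
// Sketch: turn a submitted URL into a QR code PNG via a public QR API.
import { writeFile } from 'node:fs/promises';

const submittedUrl = 'https://example.com'; // value from the form submission
const qr =
  `https://api.qrserver.com/v1/create-qr-code/?size=300x300` +
  `&data=${encodeURIComponent(submittedUrl)}`;

const res = await fetch(qr);
await writeFile('qr.png', Buffer.from(await res.arrayBuffer())); // downloadable image
```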
by Aitor | 1Node
Elevate your Stripe workflows with an AI agent that intelligently, securely, and interactively handles essential Stripe data operations. Leveraging the Kimi K2 model via OpenRouter, this n8n template enables safe data retrieval, from fetching summarized financial insights to managing customer discounts, while strictly enforcing privacy, concise outputs, and operational boundaries.

Requirements
* Stripe: an active Stripe account and an API key with read and write access.
* n8n: a deployed n8n instance (cloud or self-hosted).
* OpenRouter: an active OpenRouter account with credit and an API key from OpenRouter.

Useful Links
* Stripe
* n8n Stripe Credentials Setup
* OpenRouter

Workflow Breakdown
1. Trigger: User Request. The workflow initiates when an authenticated user sends a message in the chat trigger.
2. AI Agent (Kimi K2 via OpenRouter): Intent Analysis. Determines whether the user wants to:
   * List customers, charges, or coupons
   * Retrieve the account's balance
   * Create a new coupon in Stripe
   It filters unsupported or unclear requests, explaining permissions or terminology as needed.
3. Stripe Data Retrieval. For data queries, the agent:
   * Only returns summarized, masked lists (e.g., the last 10 transactions/customers)
   * Automatically masks or truncates sensitive details, such as card numbers
   * Never exposes or logs confidential information
4. Coupon Creation. When a coupon creation is requested (see the sketch at the end of this section):
   * The AI agent collects coupon parameters (discount, expiration, restrictions)
   * It clearly summarizes the action and requires explicit user confirmation before proceeding
   * It creates the coupon upon confirmation and replies with only the public-safe coupon details

Privacy & Security
* No data storage: all responses are ephemeral; sensitive Stripe data is never retained.
* Strict minimization: outputs are tightly scoped; only partial identifiers are shown, and only when necessary.
* Retention rules enforced: no logs, exports, or secondary storage of Stripe data.
* Confirmation required: actions modifying Stripe (like coupon creation) always require the user to approve before execution.
* Compliance-ready: aligned with Stripe and general data protection standards.

Setup Steps
Setup time: 10–15 minutes
1. Add Stripe API credentials in n8n.
2. Add the OpenRouter API credentials in n8n and select your desired AI model to run the agent. In our template we selected Kimi K2 from Moonshot AI.

Summary
This workflow template connects a privacy-prioritized AI agent (Kimi K2 via OpenRouter) with your Stripe account to enable:
* Fast, summarized access to customer, transaction, coupon, and balance data
* Secure, confirmed creation of discounts/coupons
* Complete adherence to authorization, privacy, and operational best practices

Need Help?
Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
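For context, the coupon-creation step maps onto a single Stripe API call, which the n8n Stripe node wraps. A hedged sketch of that request; the percent_off and duration values are illustrative, collected from the user by the agent before the confirmed call:

```javascript
// Sketch: create a Stripe coupon via POST /v1/coupons.
const res = await fetch('https://api.stripe.com/v1/coupons', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.STRIPE_API_KEY}`,
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: new URLSearchParams({
    percent_off: '25', // 25% discount
    duration: 'once',  // applies to a single invoice
  }),
});
const coupon = await res.json();
console.log(coupon.id); // only public-safe details go back to the chat
```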
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

How It Works
The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events (a sketch of one such metrics call appears at the end of this section). It then analyzes this data to create timeline reports showing what happened to VMs during specified periods, defaulting to the last 90 days unless the user specifies otherwise.

Prerequisites
To use this template, you'll need:
* An n8n instance (cloud or self-hosted)
* An Azure subscription with virtual machines
* Microsoft Azure Monitor OAuth2 API credentials
* Google Gemini API credentials
* Proper Azure permissions to read VM data and activity logs

Setup Instructions
1. Import the template into n8n.
2. Configure credentials:
   * Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs
   * Add Google Gemini API credentials
3. Update workflow parameters:
   * Open the "Set Common Variables" node
   * Replace <your azure subscription id here> with your actual Azure subscription ID
4. Configure triggers:
   * The chat trigger will automatically generate a webhook URL for receiving chat messages
   * No additional trigger configuration is needed
5. Test the setup to ensure it works.

Security Considerations
Use the minimum required Azure permissions (Reader role on the subscription or resource groups). Store API credentials securely in the n8n credential store. The Azure Monitor API has rate limits, so avoid excessive concurrent requests. Chat sessions use session-based memory that persists during a conversation but doesn't retain data between separate chat sessions.

Extending the Template
You can add more Azure monitoring tools such as disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
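As a point of reference, the metrics tools boil down to calls against the Azure Monitor REST API. A hedged sketch of one such request; the resource names are placeholders, and the exact tool wiring in the template will differ:

```javascript
// Sketch: pull CPU metrics for one VM from the Azure Monitor REST API.
// Auth comes from the Azure Monitor OAuth2 credential in n8n.
const resourceId =
  '/subscriptions/<sub-id>/resourceGroups/<rg>/providers' +
  '/Microsoft.Compute/virtualMachines/<vm-name>';

const url =
  `https://management.azure.com${resourceId}` +
  '/providers/microsoft.insights/metrics' +
  '?api-version=2018-01-01' +
  '&metricnames=Percentage%20CPU' +
  '&timespan=2024-01-01T00:00:00Z/2024-03-31T00:00:00Z' +
  '&interval=PT1H'; // hourly data points

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${process.env.AZURE_TOKEN}` },
});
const metrics = await res.json();
console.log(metrics.value?.[0]?.timeseries?.[0]?.data?.slice(0, 3));
```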
by Dvir Sharon
Extract Competitor SERP Rankings from Google Search to Sheets with Bright Data

This template requires a self-hosted n8n instance to run.

A comprehensive n8n automation that extracts competitor data from Google search results for specific keywords and target countries, automatically saving structured data to Google Sheets for competitive analysis and market research.

Overview
This workflow provides a professional competitor analysis solution that identifies ranking websites for specific search terms across different countries. Perfect for SEO research, competitive intelligence, market analysis, and content strategy planning. The system uses Bright Data's SERP API for accurate search result extraction and advanced HTML parsing for detailed competitor information.

Who is this for?
* SEO professionals conducting competitive analysis
* Digital marketers researching market landscapes
* Business analysts studying competitor positioning
* Content strategists analyzing competitor content approaches
* Market researchers tracking competitive intelligence across regions

What problem is this workflow solving?
* Extracting competitor data from Google search results
* Processing multiple keywords across different countries
* Organizing results in a structured, analyzable format
* Eliminating manual copy-paste work
* Ensuring consistent data collection methodology

What this workflow does
1. Manual Trigger: starts the workflow execution.
2. Get Keywords from Sheet: fetches keywords and target countries from Google Sheets.
3. URL Encode Keywords: converts keywords to URL-safe format.
4. Process Keywords in Batches: handles multiple keywords sequentially.
5. Fetch Google Search Results: uses the Bright Data SERP API to scrape the HTML.
6. Extract Competitor Data from HTML: parses the HTML to extract competitor details.
7. Save Competitor Results to Sheet: stores structured data in Google Sheets.
8. Wait to Avoid Rate Limits: implements 30-second delays between requests.

Output Data Points

| Field | Description | Example |
| :--- | :--- | :--- |
| Keyword | Original search term | digital marketing services |
| Target Country | Geographic target | US |
| websiteName | Domain/company name | hubspot |
| websiteUrl | Complete website URL | https://www.hubspot.com/marketing |
| websiteTitle | Page title from search results | Digital Marketing Software & Tools |
| websiteDescription | Meta description/snippet | Grow your business with HubSpot's digital marketing tools... |

Setup

Prerequisites
* n8n instance (self-hosted)
* Google account with Sheets access
* Bright Data account with SERP API access

Google Sheet Structure
This workflow utilizes two Google Sheets: one for input keywords and one for outputting competitor data.

Input Sheet: "Keywords"
This sheet should contain the keywords and target countries for your search queries.

| Column Header | Data Type | Description | Example |
| :--- | :--- | :--- | :--- |
| Keyword | Text | The search term you want to analyze. | digital marketing |
| Country | Text | The 2-letter ISO country code for the target region of the search (e.g., US, GB, DE). | US |
Output Sheet: "Competitor Results"
This sheet will be populated automatically by the workflow with the extracted competitor data.

| Column Header | Data Type | Description | Example |
| :--- | :--- | :--- | :--- |
| Keyword | Text | The original search term used for the query. | digital marketing services |
| Target Country | Text | The 2-letter ISO country code of the search results. | US |
| websiteName | Text | The name of the website or domain found in the search results. | hubspot |
| websiteUrl | URL | The full URL of the website or page found in the search results. | https://www.hubspot.com/marketing |
| websiteTitle | Text | The title of the page as displayed in the Google search results. | Digital Marketing Software & Tools |
| websiteDescription | Text | The meta description or snippet text displayed under the title in search results. | Grow your business with HubSpot's digital marketing tools... |

Step-by-Step Setup
1. Import the Workflow: Copy the JSON → n8n → Workflows → + Add → Import from JSON.
2. Configure Bright Data Credentials:
   * Credential Type: HTTP Header Auth
   * Header Name: Authorization
   * Header Value: Bearer YOUR_API_TOKEN
3. Configure Google Sheets:
   * Create two new Google Sheets as described above: one named "Keywords" (for input) and one named "Competitor Results" (for output).
   * Set up Google Sheets OAuth2 credentials within n8n.
4. Update Workflow Settings:
   * Replace the placeholders YOUR_GOOGLE_SHEET_ID (for both input and output sheets) and YOUR_BRIGHTDATA_CREDENTIAL_ID.
   * Ensure the correct sheet/tab names are selected in the Google Sheets nodes.
5. Test & Activate: Add test data to your "Keywords" sheet → Execute workflow → Verify output in your "Competitor Results" sheet.

How to Customize
* **Add More Data Points**: Modify the JavaScript code in the "Extract Competitor Data from HTML" node to parse and extract additional information from the HTML (a sketch of that parsing logic follows this section).
* **Custom Filtering**: Implement logic to exclude specific domains, filter results by title length, or apply other criteria.
* **Expand Geographic Coverage**: Add more 2-letter ISO country codes to the Bright Data SERP API call to broaden your competitive analysis.
* **Batch Processing**: Adjust the settings in the "Process Keywords in Batches" node to optimize for your Bright Data plan and desired execution speed.
* **Rate Limiting**: Modify the "Wait" node (default: 30 seconds) to increase or decrease the delay between requests based on API limits or performance needs.

Use Cases & Examples
* **SEO Competitive Analysis**: Identify top-ranking competitors for your target keywords and analyze their strategies.
* **Market Entry Research**: Understand the competitive landscape in new geographic regions before expanding.
* **Content Strategy Planning**: Analyze competitor page titles and meta descriptions for inspiration and to identify content gaps.
* **International Market Research**: Compare search engine results and competitor positioning across different countries.

Performance & Limits
* Single keyword: 30–60 seconds per keyword.
* Batch of 10 keywords: typically 5–10 minutes.
* Large lists (50+ keywords): expect execution times of 30–60 minutes or more, depending on batching and rate limits.
* Success rate: generally 95%+ for data extraction.
* Data accuracy: typically 98%+ for extracted fields.
* API calls: 1 Bright Data SERP API call per keyword, plus multiple Google Sheets writes per execution.
* Rate limit: a 30-second delay between requests is recommended to prevent exceeding API limits.
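For orientation, the "Extract Competitor Data from HTML" node is a Code node that pulls organic results out of the raw SERP HTML. Real Google SERP markup changes often, so the template's actual selectors will differ; a minimal regex-based sketch of the idea:

```javascript
// Hypothetical sketch of the SERP parsing step; selectors are illustrative.
const html = $input.first().json.data; // raw HTML from the Bright Data request

const results = [];
// Very rough pattern: an anchor wrapping an <h3> result title.
const re = /<a href="(https?:\/\/[^"]+)"[^>]*>.*?<h3[^>]*>(.*?)<\/h3>/gs;
let m;
while ((m = re.exec(html)) !== null && results.length < 10) {
  const url = m[1];
  results.push({
    json: {
      websiteUrl: url,
      websiteName: new URL(url).hostname.replace(/^www\./, '').split('.')[0],
      websiteTitle: m[2].replace(/<[^>]+>/g, ''), // strip nested tags
    },
  });
}
return results;
```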
Troubleshooting
* **Bright Data API error**: Double-check your API token, ensure you have sufficient credits, and confirm SERP API access is enabled on your Bright Data account.
* **No keywords found**: Verify the Google Sheet ID and ensure the column headers in your "Keywords" sheet precisely match the specifications (e.g., "Keyword", "Country").
* **Google Sheets permission denied**: Re-authenticate your Google Sheets credentials within n8n and check that the correct sharing settings are applied to your sheets.
* **No results extracted**: Review the JavaScript parsing logic in the "Extract Competitor Data from HTML" node. Also, verify the validity of your keywords and target countries.
* **Loop not processing all keywords**: Check the batch settings in the "Process Keywords in Batches" node and ensure all connections within the loop are correctly configured.

Support & Community
* **n8n Forum**: https://community.n8n.io
* **n8n Docs**: https://docs.n8n.io
* **Bright Data Support**: Access support directly via your Bright Data dashboard.
* **GitHub Issues**: Report any bugs or suggest new features on the n8n GitHub repository.

Final Notes
This workflow provides a comprehensive foundation for competitor research and market analysis. Customize it to fit your specific industry needs and competitive intelligence requirements.

Please note that this template uses Community Nodes. Ensure you understand the risks before using community nodes.
by Sherlockes
What this template is made for
I have a personal Telegram channel with a bot in it where I save interesting links that I want to keep or read later. The idea is that n8n takes care of reading the new links added to this channel and sends them, through the corresponding APIs, to the Hoarder and Readeck installations.

How it works
1. Since the server where n8n runs is not always on, a Schedule Trigger checks every so often whether there is any new content in the Telegram channel where I store the links. This request is made with an HTTP Request node and the Telegram API.
2. Next, a Code node filters out everything that is not a hyperlink (see the sketch at the end of this section).
3. At this point, the flow splits in two so that parallel, similar processes run for Hoarder and Readeck:
   * The corresponding API is accessed to get a list of all the links already saved in that service.
   * A Code node filters the list of hyperlinks previously obtained from Telegram so that only those not already saved in the service continue.
   * Finally, another HTTP Request node uses the service's API to save the link in the corresponding service.

Configuration instructions
The template makes use of environment variables that I have declared in the n8n docker-compose.yml file through an external .env file. These are the variables I use:

* TG_SHERLINK_BOT_TOKEN=XXXXXXXX:XXXXXXXXXXXXXXXX (Telegram bot token for Sherlink)
* TG_SHERLINK_ID=-XXXXXXXXXXXXX (ID of the Sherlink Telegram channel)
* READECK_SERVER=http://readeck.midomain.com
* READECK_API_KEY=xxxxxxxxxxxxx
* HOARDER_SERVER=http://hoarder.midomain.com
* HOARDER_API_KEY=xxxxxxxxxxxxxx

Created with n8n version 1.85.4.
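A hedged sketch of the "filter out everything that is not a hyperlink" Code node, assuming the previous HTTP Request node called the Telegram getUpdates endpoint (the template's actual field handling may differ):

```javascript
// Hypothetical approximation of the link-filtering Code node.
// Input: the JSON response from https://api.telegram.org/bot<token>/getUpdates
const links = [];
for (const item of $input.all()) {
  for (const update of item.json.result ?? []) {
    const msg = update.channel_post ?? update.message;
    if (!msg?.text) continue;
    // Telegram flags URLs in the message's entity metadata.
    for (const entity of msg.entities ?? []) {
      if (entity.type === 'url') {
        links.push(msg.text.substring(entity.offset, entity.offset + entity.length));
      }
    }
  }
}
// Emit one n8n item per unique link.
return [...new Set(links)].map(url => ({ json: { url } }));
```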
by Jimleuk
Note: This template only works for self-hosted n8n.

This n8n template demonstrates how to use the Langchain Code node to track token usage and cost for every LLM call. This is useful if your templates handle multiple clients or customers and you need a cheap and easy way to capture how much of your AI credits they are using.

How it works
* In our mock AI service, we're offering a data conversion API to convert resume PDFs into JSON documents.
* A form trigger is used to allow for PDF upload, and the file is parsed using the Extract from File node.
* An Edit Fields node is used to capture additional variables to send to our log.
* Next, we use the Information Extractor node to organise the resume data into the given JSON schema.
* The LLM subnode attached to the Information Extractor is a custom one we've built using the Langchain Code node.
* With our custom LLM subnode, we're able to capture the usage metadata using lifecycle hooks (see the sketch at the end of this section).
* We've also attached a Google Sheets tool to our LLM subnode, allowing us to send our usage metadata to a Google Sheet.
* Finally, we demonstrate how you can aggregate from the Google Sheet to understand how many AI tokens, and what costs, your clients are liable for.

Check out the example Client Usage Log: https://docs.google.com/spreadsheets/d/1AR5mrxz2S6PjAKVM0edNG-YVEc6zKL7aUxHxVcffnlw/edit?usp=sharing

How to use
* SELF-HOSTED N8N ONLY: the Langchain Code node is only available in the self-hosted version of n8n. It is not available in n8n cloud.
* The LLM subnode can only be attached to non-"AI Agent" nodes: Basic LLM Chain, Information Extractor, Question & Answer Chain, Sentiment Analysis, Summarization Chain, and Text Classifier.

Requirements
* Self-hosted version of n8n
* OpenAI for the LLM
* Google Sheets to store usage metadata

Customising this template
* Bring the custom LLM subnode into your own templates! In many cases, it can be a drop-in replacement for the regular OpenAI subnode.
* Not using Google Sheets? Try other databases, or an HTTP call to pipe the data into your CRM.
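The template's custom subnode is a full LLM wrapper; just to show the core mechanism, here is a hedged sketch of the LangChain JS lifecycle hook that surfaces token usage. Module availability inside the Langchain Code node depends on your self-hosted allow-list (NODE_FUNCTION_ALLOW_EXTERNAL), and the model choice is illustrative:

```javascript
// Sketch: expose per-call token usage via a LangChain callback.
const { ChatOpenAI } = require('@langchain/openai');

return new ChatOpenAI({
  model: 'gpt-4o-mini', // illustrative model choice
  callbacks: [
    {
      handleLLMEnd(output) {
        // For OpenAI models, usage is reported on llmOutput.tokenUsage.
        const u = output.llmOutput?.tokenUsage ?? {};
        // The template forwards these numbers to a Google Sheet from here.
        console.log(`prompt=${u.promptTokens} completion=${u.completionTokens} total=${u.totalTokens}`);
      },
    },
  ],
});
```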
by LukaszB
Crypto Price Alert - n8n Workflow

A simple and effective crypto alert system for anyone who wants to stay up to date with coin price changes without refreshing charts all day. This workflow checks the current price of your chosen cryptocurrency (via CoinGecko) and sends you an alert on Discord if it goes above or below your target range. It's lightweight, easy to set up, and runs on autopilot.

What the Workflow Does
* Checks the live price of a selected coin using the CoinGecko API.
* Compares it to the max/min prices you define manually.
* Decides if the price is too high or too low.
* Sends an alert message to Discord depending on the result.

How It Works
1. The flow is triggered manually or on a schedule (your choice).
2. It pulls the current price of the coin you set (a sketch of this check follows below).
3. It compares that price with your min and max values.
4. It sends a "high" or "low" message to your Discord webhook.

Setup Steps
1. Enter your coin ID and price thresholds in the "Set Low and High" node.
2. Paste your Discord webhook URLs into the "Message High" and "Message Low" nodes.
3. Optional: adjust the schedule trigger to run every X minutes/hours.
4. Run once manually to test; it takes under a minute.

Full instructions and config tips are in the sticky notes inside the workflow.
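For reference, the price check is a single unauthenticated CoinGecko call followed by a threshold comparison. A minimal sketch; the coin ID and thresholds stand in for the values you'd enter in the "Set Low and High" node:

```javascript
// Sketch: fetch a coin's live price from CoinGecko and compare thresholds.
const coinId = 'bitcoin'; // CoinGecko coin ID
const low = 50000;        // min threshold
const high = 80000;       // max threshold

const res = await fetch(
  `https://api.coingecko.com/api/v3/simple/price?ids=${coinId}&vs_currencies=usd`
);
const price = (await res.json())[coinId].usd;

if (price > high) {
  console.log(`ALERT: ${coinId} is above ${high} (now ${price})`); // -> "Message High"
} else if (price < low) {
  console.log(`ALERT: ${coinId} is below ${low} (now ${price})`);  // -> "Message Low"
}
```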