by FORK SOFTWARE TECHNOLOGIES INC.
## Overview

This n8n workflow monitors the USDT ERC-20 balance of a specific wallet. It periodically queries Etherscan's public API (a free API key is sufficient) to check and process transaction data. The workflow is ideal for anyone who needs an automated way to track ERC-20 wallet transactions.

## Features

- **Automatic Monitoring**: Executes every 5 minutes to capture new transactions.
- **Customizable Filters**: Tune tracking parameters such as transaction duration and wallet addresses.
- **Data Aggregation**: Compiles transaction data into a single, structured list.
- **Formatted Outputs**: Presents the processed data in an organized format.
- **Telegram Tracking**: Reports wallet balances via Telegram notifications using the bot.

## Requirements

- **n8n Setup**: A self-hosted or cloud-based n8n instance.
- **Basic Understanding**: Basic knowledge of n8n workflows and nodes.

## Installation and Configuration

1. **Import Workflow**: Load the provided JSON workflow into your n8n instance.
2. **Configure the User Data Node**:
   - Enter your ERC-20 wallet address in the "Your Wallet Address" field.
   - Enter your Etherscan API key in the "Your Etherscan API Key" field.
   - Enter the USDT ERC-20 contract address (0xdAC17F958D2ee523a2206206994597C13D831ec7) in the "Your ERC-20 USDT Contract Address" field. To monitor a different token, enter its contract address instead.
3. **Configure the Telegram Node**:
   - In Telegram, search for "BotFather" and send /newbot to create your bot; copy the API key BotFather provides.
   - Search for "Get My ChatID", start the conversation, and note your Chat ID.
   - Use these values to configure the Telegram node.
4. **Schedule Trigger Node**: By default the workflow runs every 5 minutes; adjust the interval to your needs.
5. **Test the Workflow**: Execute the workflow manually to confirm everything works as expected.

## How It Works

1. **Schedule Trigger**: Starts the workflow at the configured interval.
2. **Edit Fields**: Sets the wallet address, Etherscan API key, and USDT ERC-20 contract address.
3. **Telegram Settings**: Uses the bot created via BotFather, configured with its API key and your Telegram Chat ID.
4. **Etherscan Data Import**: Collects the wallet's transaction data from Etherscan's public API.
5. **Final Results**: Organizes and formats the transaction data for review.
6. **Telegram Bot Message**: If the balance changed, sends a formatted message describing the change; otherwise sends a "balance unchanged" notice. You can configure it to skip the message when there is no change.
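For reference, the Etherscan request behind the workflow's data-import step can be sketched outside n8n. A minimal Python sketch, assuming placeholder wallet and API-key values; the tokentx action lists ERC-20 transfer events for a wallet:

```python
import requests

# Placeholder values; replace with your own wallet, API key,
# and (optionally) another token's contract address.
API_KEY = "YourEtherscanApiKey"
WALLET = "0xYourWalletAddress"
USDT = "0xdAC17F958D2ee523a2206206994597C13D831ec7"

resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "account",
        "action": "tokentx",      # ERC-20 token transfer events
        "contractaddress": USDT,
        "address": WALLET,
        "sort": "desc",
        "apikey": API_KEY,
    },
    timeout=30,
)
for tx in resp.json().get("result", [])[:5]:
    # USDT uses 6 decimals, so divide the raw integer value by 10**6
    amount = int(tx["value"]) / 10**6
    print(tx["hash"], amount, "USDT")
```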
by Luke
Built this for a dedicated Slack outage-notifications channel — works well on both desktop and mobile.

## Who this is for

- IT administrators & small MSPs looking to streamline M365 alerts from one or multiple mailboxes into a single or specific Slack channels
- IT admins who prefer ChatOps over management-by-email

## What it does

- Scans for M365 outage alert emails (every 1 min)
- Checks whether the outage impacts a specific user region (only if the alert calls it out; countries have to be set manually)
- Summarizes the incident using OpenAI gpt-4o-mini (cheap model - or you can swap in local Ollama)
- Sends a Slack Block to your outage channel with the incident link (can be extended)
- Deletes the original alert email after successful delivery

## Credentials

- **Outlook**: Create an Outlook credential (OAuth2.0) pointing to the mailbox (regular or shared) where M365 service alerts will be received
- **Slack**: Create a Slack bot credential with access to the Slack channel you want updates posted to
- **OpenAI**: Create an OpenAI credential that has access to the gpt-4o-mini model. Recommend you use Projects in OpenAI so that you can set a per-project budget and not impact other projects. Review the OpenAI documentation for more info on managing Projects in the API portal. Expect this to consume no more than 1-2 cents per month on average.

## Setup

1. Download & import the workflow
2. Modify the first Outlook block (Check for 365 Service Alert) to use the Outlook credential
3. Modify the OpenAI block's system prompt to call out the countries your users reside in, ie. "- Assume the organization has users primarily in the U.S. and Australia. If those regions are affected, state: 'Your users may have been affected.' Otherwise, add: 'No impact expected for your user base.'" ← swap U.S. & Australia for desired countries
4. Modify the Slack block (Post outage to Slack) to specify the channel updates will be posted to

## Sample Slack Output

## Workflow Diagram
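For reference, the kind of Slack Block payload the "Post outage to Slack" node sends can be sketched in Python; the incident fields below are hypothetical stand-ins for the OpenAI summary output:

```python
import requests

# Hypothetical incident fields; in the workflow these come from the
# AI summary of the M365 alert email.
incident = {
    "id": "MO123456",
    "title": "Exchange Online - Users may be unable to send email",
    "summary": "Microsoft is investigating message delivery delays.",
    "link": "https://admin.microsoft.com/servicehealth",
}

blocks = [
    {"type": "header",
     "text": {"type": "plain_text", "text": f"M365 Outage: {incident['id']}"}},
    {"type": "section",
     "text": {"type": "mrkdwn", "text": f"*{incident['title']}*\n{incident['summary']}"}},
    {"type": "section",
     "text": {"type": "mrkdwn", "text": f"<{incident['link']}|View incident details>"}},
]

# Post via Slack's Web API using the bot token
requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": "Bearer xoxb-your-bot-token"},
    json={"channel": "#m365-outages", "blocks": blocks},
    timeout=30,
)
```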
by Aditya Gaur
## Who is this template for?

This template is designed for developers, DevOps engineers, and automation enthusiasts who want to streamline their GitLab merge request process using n8n, a low-code workflow automation tool. It eliminates manual intervention by automating the merging of GitLab branches through API calls.

## How it works

1. **Trigger the workflow**: The workflow can be triggered by a webhook, a scheduled event, or a GitLab event (e.g., a new merge request is created or approved).
2. **Fetch merge request details**: n8n makes an API call to GitLab to retrieve merge request details.
3. **Check merge conditions**: The workflow validates whether the merge request meets predefined conditions (e.g., approvals met, CI/CD pipelines passed).
4. **Perform the merge**: If all conditions are met, n8n sends a request to the GitLab API to merge the branch automatically.

## Setup Steps

### 1. Prerequisites

- An n8n instance (self-hosted or Cloud)
- A GitLab personal access token with API access
- A GitLab repository with merge requests enabled

### 2. Create the n8n Workflow

- **Set up a trigger**: Choose a trigger node (Webhook, Cron, or GitLab Trigger).
- **Fetch merge request details**: Add an HTTP Request node to call GET /merge_requests/:id from the GitLab API.
- **Validate conditions**: Check if the merge request has the necessary approvals and ensure CI/CD pipelines have passed.
- **Merge the request**: Use an HTTP Request node to call the PUT /merge_requests/:id/merge API.

### 3. Test the Workflow

- Create a test merge request.
- Check if the workflow triggers and merges automatically.
- Debug using n8n logs if needed.

### 4. Deploy and Monitor

- Deploy the workflow in production.
- Use n8n's monitoring features to track execution.

This template enables seamless GitLab merge automation, improving efficiency and reducing manual work!

Note: Never hard-code an API token or secret in your HTTP request.
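For reference, here is a minimal Python sketch of the two GitLab API calls the HTTP Request nodes make. The project ID, merge request IID, and the simplified mergeability check are illustrative assumptions; a production check would also inspect approvals and pipeline status:

```python
import os
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 12345   # hypothetical project ID
MR_IID = 42          # hypothetical merge request IID
# Read the token from the environment; never hard-code it in the request.
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# Fetch merge request details (GET /merge_requests/:id)
mr = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/merge_requests/{MR_IID}",
    headers=HEADERS, timeout=30,
).json()

# Check merge conditions (simplified; "detailed_merge_status" needs GitLab 15.6+)
if mr["state"] == "opened" and mr.get("detailed_merge_status") == "mergeable":
    # Perform the merge (PUT /merge_requests/:id/merge)
    resp = requests.put(
        f"{GITLAB}/projects/{PROJECT_ID}/merge_requests/{MR_IID}/merge",
        headers=HEADERS, timeout=30,
    )
    print("Merged:", resp.status_code == 200)
```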
by Ranjan Dailata
## Who is this for?

The Capture Website Screenshots with Bright Data Web Unlocker and Save to Disk workflow is built for automation professionals and developers who need reliable, high-quality screenshots from any website, even those protected by anti-bot technologies. It is ideal for:

- **Compliance Teams** - Capturing visual records of web content for legal or audit purposes.
- **Product Managers** - Tracking visual changes across competitor landing pages.
- **Digital Marketers** - Archiving campaign pages and offer variations.
- **Developers and QA Teams** - Validating UI deployments or rendering issues.
- **Growth Hackers and Scrapers** - Bypassing bot protection to capture visual snapshots of restricted content.

## What problem is this workflow solving?

Websites today are heavily protected with anti-bot tools like Cloudflare, bot detection scripts, and geo-restrictions. These protections often break traditional screenshot tools or prevent headless browsers from accessing content. This workflow solves the following problems:

- Bypasses anti-bot defenses using Bright Data Web Unlocker.
- Automatically captures screenshots without manual browser steps.
- Stores images locally for easy access or reporting.
- Operates headlessly and at scale, perfect for automations or scheduled jobs.

## What this workflow does

1. Sets the target URL, file name, and Bright Data zone name using the Set URL, Filename and Bright Data Zone node.
2. Sends an HTTP POST request to the Bright Data Web Unlocker API to capture a screenshot.
3. Saves the screenshot image (.png) to a specified disk location using the Write a file to disk node.

## Pre-conditions

You need a Bright Data account and the setup described in the "Setup" section below.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Ensure the URL, file name, and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
5. Set the desired local path in the Write a file to disk node to save the screenshot.

## How to customize this workflow to your needs

- **Change the target URL**: Modify the value in the Set URL, Filename and Bright Data Zone node to capture different websites.
- **Set dynamic filenames**: Use expressions in n8n to generate filenames based on date/time or URL.
- **Specify custom save paths**: Adjust the path in the Write a file to disk node to store screenshots in your preferred directory.
- **Enhance with notifications**: Add nodes to send alerts or log activity after each screenshot is taken.
- **Integrate with external systems**: Send screenshots to cloud storage (e.g., AWS S3, Google Drive) or link into monitoring/reporting tools.
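For reference, the screenshot request the workflow sends can be approximated outside n8n as below. The endpoint and payload shape follow Bright Data's Web Unlocker direct API as commonly documented; treat the exact parameter names as assumptions and confirm them against your zone's documentation:

```python
import requests

# Assumed endpoint and payload shape for Bright Data's Web Unlocker API;
# verify against your zone's docs, as parameter names may differ.
resp = requests.post(
    "https://api.brightdata.com/request",
    headers={"Authorization": "Bearer <WEB_UNLOCKER_TOKEN>"},
    json={
        "zone": "your_web_unlocker_zone",   # the zone name set in the workflow
        "url": "https://example.com",       # the target URL
        "format": "raw",
        "data_format": "screenshot",        # request a rendered screenshot
    },
    timeout=120,
)

# Mirrors the "Write a file to disk" step
with open("screenshot.png", "wb") as f:
    f.write(resp.content)
```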
by Keith Rumjahn
# WordPress Post Auto-Categorization Workflow

💡 Click here to read detailed case study
📺 Click here to watch youtube tutorial

## 🎯 Purpose

Automatically categorize WordPress blog posts using AI, saving hours of manual work. This workflow analyzes your post titles and assigns them to predefined categories using artificial intelligence.

## 🔄 What This Workflow Does

• Connects to your WordPress site
• Retrieves all uncategorized posts
• Uses AI to analyze post titles
• Automatically assigns appropriate category IDs
• Updates posts with new categories
• Processes dozens of posts in minutes

## ⚙️ Setup Requirements

1. WordPress site with admin access
2. Predefined categories in WordPress
3. OpenAI API credentials (or your preferred AI provider)
4. n8n with WordPress credentials

## 🛠️ Configuration Steps

1. Add your WordPress categories (manually in WordPress)
2. Note down category IDs
3. Update the AI prompt with your category IDs
4. Configure WordPress credentials in n8n
5. Set up AI API connection

## 🔧 Customization Options

• Modify AI prompts for different categorization criteria
• Adjust for multiple category assignments
• Add tag generation functionality
• Customize for different content types
• Add additional metadata updates

## ⚠️ Important Notes

• Backup your WordPress database before running
• Test with a few posts first
• Review AI categorization results initially
• Categories must be created manually first

## 🎁 Bonus Features

• Can be modified for tag generation
• Works with scheduled posts
• Handles bulk processing
• Maintains categorization consistency

Perfect for content managers, bloggers, and website administrators looking to organize their WordPress content efficiently.

#n8n #WordPress #ContentManagement #Automation #AI

Created by rumjahn
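Under the hood, the update step amounts to a WordPress REST API call. A minimal Python sketch, assuming an application password for auth and a hypothetical category ID returned by the AI (category 1 is WordPress's default "Uncategorized"):

```python
import requests

WP = "https://your-site.example/wp-json/wp/v2"   # your WordPress site
AUTH = ("admin", "application-password")          # WordPress application password

# Fetch posts still sitting in the default "Uncategorized" category (ID 1)
posts = requests.get(
    f"{WP}/posts", params={"categories": 1, "per_page": 10},
    auth=AUTH, timeout=30,
).json()

for post in posts:
    title = post["title"]["rendered"]
    # ...send `title` to your AI model and map its answer to a category ID...
    chosen_category = 7   # hypothetical ID chosen by the AI
    requests.post(
        f"{WP}/posts/{post['id']}",
        json={"categories": [chosen_category]},
        auth=AUTH, timeout=30,
    )
```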
by Batu Öztürk
# 🚀 Transform LinkedIn Post Reactions into Content Ideas with Airtable

## 📝 Description

This workflow helps you turn your LinkedIn activity into a powerful content ideation engine. It automatically captures your most recent post reactions on LinkedIn, filters them by recency, and structures the content into Airtable, ready for brainstorming, inspiration, or publication planning.

## ⚙️ What It Does

- **Fetches** the latest liked posts from LinkedIn via a public API (rapidapi.com / Real-Time Linkedin Scraper).
- **Filters** posts to include only those marked with your chosen reaction and posted in the last 7 days.
- **Extracts** the post text, author, links, and more.
- **Formats** the data into a database-friendly structure.
- **Saves** the output in Airtable for easy tracking, tagging, or team collaboration.

## 💡 Use Cases

- Build a content idea vault from posts you admire.
- Capture inspiration from thought leaders.
- Identify trends based on what you find insightful.
- Supercharge your personal brand or newsletter by turning likes into learning.

## 🛠 Prerequisites

Before using this template, make sure you have:

- ✅ A RapidAPI account and access to the linkedin-api8 endpoint.
- ✅ Your RapidAPI key and the target LinkedIn username.
- ✅ An Airtable account with a base/table set up.

## 🧰 Setup Instructions

1. Clone this template into your n8n instance.
2. Open the Fetch LinkedIn Likes node and enter:
   - Your LinkedIn username.
   - Your RapidAPI key in the headers.
3. Open the Save to Airtable node and:
   - Connect your Airtable account.
   - Link the correct base (Content Hub) and table (Ideas).
4. Set your desired schedule in the Trigger node.
5. Activate the workflow and you're done!

## 📋 Airtable Setup

Create a base called Content Hub and a table named Ideas with the following columns:

| Column Name | Type             | Required | Notes                      |
|-------------|------------------|----------|----------------------------|
| Title       | Single line text | ✅       | Generated from author info |
| Description | Long text        | ✅       | Contains post content      |
| Source      | URL              | ✅       | Link to the original post  |
| Type        | Single select    | ✅       | Value: Linkedin            |
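For orientation, the fetch-and-filter logic looks roughly like this in Python. The RapidAPI path and the response field names are assumptions (the linkedin-api8 endpoint defines its own contract), so adjust them to match the actual API response:

```python
import requests
from datetime import datetime, timedelta, timezone

# Path and response fields are assumptions; check the linkedin-api8
# documentation on RapidAPI for the real contract.
resp = requests.get(
    "https://linkedin-api8.p.rapidapi.com/get-profile-likes",
    headers={
        "X-RapidAPI-Key": "<YOUR_RAPIDAPI_KEY>",
        "X-RapidAPI-Host": "linkedin-api8.p.rapidapi.com",
    },
    params={"username": "your-linkedin-username"},
    timeout=30,
)

# Keep only reactions from the last 7 days, as the workflow's filter does
week_ago = datetime.now(timezone.utc) - timedelta(days=7)
for item in resp.json().get("items", []):
    posted = datetime.fromisoformat(item["postedAt"])   # assumed field name
    if posted >= week_ago:
        print(item["author"], item["url"])              # assumed field names
```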
by Juan Carlos Cavero Gracia
# Image Carousel Publisher for Instagram and TikTok

## Description

This automation template is designed for content creators, digital marketers, and social media managers looking to streamline their image carousel posting workflow. It automates the process of uploading multiple images as carousels to Instagram and slideshows to TikTok, making your visual content management more efficient across platforms.

## Who Is This For?

- **Content Creators & Influencers**: Simplify posting image collections and focus more on creating visual content.
- **Digital Marketers**: Ensure consistent carousel posts across multiple platforms with minimal manual effort.
- **Social Media Managers**: Automate repetitive image uploading tasks and maintain visual engagement.

## What Problem Does This Workflow Solve?

Manually uploading image carousels to different platforms can be time-consuming and inconsistent. This workflow addresses these challenges by:

- **Automating Multi-Image Uploads**: Processes multiple images and prepares them for platform-specific formats.
- **Supporting Cross-Platform Publishing**: Simultaneously posts your image carousels to Instagram and TikTok slideshows.
- **Maintaining Visual Consistency**: Ensures your visual stories remain consistent across platforms.
- **Streamlining Batch Processing**: Handles the technical complexity of multi-image uploads with a single workflow trigger.

## How It Works

1. **Image Selection**: Trigger the workflow with your selected images.
2. **Image Processing**: The workflow automatically processes and prepares your images for both platforms.
3. **Content Distribution**: Uploads the images as a carousel to Instagram and as a slideshow to TikTok.
4. **Platform Optimization**: Formats the uploads according to each platform's requirements.

## Setup

1. **API Token Generation**:
   - Visit upload-post.com and create an account
   - Navigate to the API settings section
   - Generate a new API token
   - Copy the token for use in the next steps
2. **Platform Configuration**:
   - In the "Upload to Instagram" node: paste your API token in the designated field, configure your Instagram account settings, and set your preferred posting parameters
   - In the "Upload to TikTok" node: add the same API token, set up your TikTok account credentials, and adjust posting preferences
3. **Content Parameters Setup**:
   - Rename the "HTTP Request" node to "Social Media Upload Request"
   - Configure your account information: username, account ID, content title format, and posting schedule (if applicable)
4. **Image Source Configuration**:
   - Set up your image source directory
   - Configure image format requirements
   - Test with sample images before going live

## About upload-post.com

Upload-post.com is a third-party service that acts as a bridge between your workflow and social media platforms. It provides:

- Secure API endpoints for multi-platform posting
- Image format validation and optimization
- Queue management for scheduled posts
- Analytics and posting status tracking
- Cross-platform compatibility handling

## Requirements

- **Accounts**: upload-post.com account with access to Instagram and TikTok publishing.
- **API Keys**: Upload-post.com API token.
- **Images**: Properly formatted images that meet Instagram and TikTok specifications:
  - Instagram: Up to 10 images per carousel, 1:1 to 4:5 aspect ratio
  - TikTok: Compatible with slideshow format, 9:16 aspect ratio recommended

Use this template to enhance your visual storytelling, maintain consistency across social platforms, and engage your audience with compelling image carousels and slideshows.
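As a rough illustration of what the upload nodes do, here is a hedged Python sketch of a multi-image upload via upload-post.com. The endpoint path, auth scheme, and field names are assumptions based on the service's API style; consult the upload-post.com API docs for the exact contract:

```python
import requests

# Hypothetical endpoint and field names; verify against upload-post.com docs.
files = [("photos[]", open(path, "rb")) for path in ["img1.jpg", "img2.jpg"]]

resp = requests.post(
    "https://api.upload-post.com/api/upload_photos",
    headers={"Authorization": "Apikey <YOUR_UPLOAD_POST_TOKEN>"},
    data={
        "user": "your-upload-post-username",
        "platform[]": ["instagram", "tiktok"],   # post to both platforms
        "title": "My carousel post",
    },
    files=files,
    timeout=120,
)
print(resp.json())
```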
by Aditya Sharma
## Description

This intelligent n8n automation streamlines the process of collecting, extracting, and scoring resumes sent to a Gmail inbox, making it an ideal solution for recruiters who regularly receive hundreds of applications. The workflow scans incoming emails with attachments, extracts relevant candidate information from resumes using AI, evaluates each candidate against customizable criteria, and logs their scores alongside contact details in a connected Google Sheet.

## Who Is This For?

- **Recruiters & Hiring Managers**: Automate the resume screening process and save hours of manual work.
- **HR Teams at Startups & SMBs**: Quickly evaluate talent without needing large HR ops infrastructure.
- **Agencies & Talent Acquisition Firms**: Screen large volumes of resumes efficiently and with consistent criteria.
- **Solo Founders Hiring for Roles**: Use AI to help score and shortlist top candidates from email applications.

## What Problem Does This Workflow Solve?

Manually reviewing resumes is time-consuming, error-prone, and inconsistent. This workflow solves these challenges by:

- Automatically detecting and extracting resumes from Gmail attachments.
- Using OpenAI to intelligently extract candidate info from unstructured PDFs.
- Scoring resumes using customizable evaluation criteria (e.g., relevant experience, skills, education).
- Logging all candidate data (Name, Email, LinkedIn, Score) in a centralized, filterable Google Sheet.
- Enabling faster, fairer, and more efficient candidate screening.

## How It Works

1. **Gmail Trigger**: Runs on a scheduled interval (e.g., every 6 or 24 hours) and scans a connected Gmail inbox (using OAuth credentials) for unread emails that contain PDF attachments.
2. **Extract Attachments**: Downloads the attached resumes from matching emails.
3. **Parse Resume Text**: Sends the PDF to OpenAI's API (via GPT-4 or GPT-3.5 with file support, or via base64 plus a PDF-to-text tool) and prompts GPT with a structured format to extract fields like Name, Email, LinkedIn, Skills, and Education.
4. **Score Resume**: Evaluates the resume against predefined scoring logic, using AI or logic inside the workflow (e.g., "has X skill = +10 points").
5. **Log to Google Sheets**: Appends a new row to a connected Google Sheet with the candidate name, email address, LinkedIn URL, and resume score.

## Setup

### Accounts & API Keys

You'll need accounts and credentials for:

- **n8n** (hosted or self-hosted)
- **Google Cloud Platform** (for Gmail, Drive, and Sheets APIs)
- **OpenAI** (for GPT model access)

### Google Sheet

Make a Google Sheet and connect it via the Google Sheets node in n8n. Columns should include: Name, Email, LinkedIn, Score.

### Configuration

1. **Google Cloud**: Enable the Gmail API and Google Sheets API, set up OAuth 2.0 credentials in the Google Console, and connect the n8n Gmail, Drive, and Sheets nodes to these credentials.
2. **OpenAI**: Generate an API key and use the HTTP Request node or the official OpenAI node to send prompt requests.
3. **n8n Workflow**: Add the Gmail Trigger, add extraction logic (e.g., filter PDFs), add the OpenAI prompt for resume parsing and scoring, and connect the structured output to a Google Sheets node.

## Requirements

- **Accounts**: n8n, Google (Gmail, Sheets, Drive, Cloud Console), OpenAI
- **API Keys & Credentials**: OpenAI API key, Google Cloud OAuth credentials, Gmail access scopes (for reading attachments), a configured Google Sheet
- **Costs**: OpenAI usage (after the free tier) and Google Cloud API usage (if exceeding the free quota)
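The parse-and-score step boils down to one structured prompt. A minimal sketch with the official OpenAI Python SDK; the model choice, role description, and scoring rubric are placeholders to adapt:

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

resume_text = "...plain text extracted from the PDF attachment..."

prompt = (
    "Extract the candidate's Name, Email, and LinkedIn URL from the resume "
    "below, then score it 0-100 for a senior Python role (relevant "
    "experience, skills, education). Reply as JSON with keys "
    "name, email, linkedin, score.\n\n" + resume_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # any capable chat model works
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},   # force parseable JSON output
)
print(response.choices[0].message.content)
```

The JSON keys map one-to-one onto the Google Sheet columns, so the structured output can feed the Sheets node directly.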
by Pavel Duchovny
Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration. Until now, these pieces had to be custom-wired. But with the new native n8n nodes for MongoDB Atlas, we reduce that overhead dramatically.

With just a few clicks you can:

- Store and recall long-term memory from MongoDB
- Query vector embeddings stored in Atlas Vector Search
- Use these results in your LLM chains and automation logic

This example presents an ingestion flow and an AI Agent flow centered on travel planning. The points of interest you want the agent to know about are ingested into the vector store, and the AI Agent uses the vector store tool to fetch relevant context about those points of interest when it needs to.

## Prerequisites

- A MongoDB Atlas project and cluster
- A valid OpenAI API key for embeddings (can be another provider)
- A Gemini API key for the LLM (can be another provider)

## How it works

There are two main flows.

**Ingestion flow**: Receives a document from a webhook and uses the MongoDB Atlas Vector Store node to embed the document title and description into the points_of_interest collection.

- Embeddings are stored in a field named embedding.
- The embeddings used are OpenAI's, but any supported embedder works.

**AI Agent flow**: An AI Agent node with chat memory stored in MongoDB Atlas and a Vector Search node as a tool.

- **Chat Message Trigger**: Chatting with the AI Agent triggers the conversation store in the MongoDB Chat Memory node.
- **Vector Search Tool**: When data is needed, such as a location search or details, the agent calls this tool, which uses an Atlas Vector Search index created on the points_of_interest collection:

```json
// index name: "vector_index"
// If you change the embedding provider, make sure numDimensions matches the model.
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```

## Additional Resources

- MongoDB Atlas Vector Search
- n8n Atlas Vector Search docs
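Outside n8n, the Vector Search tool's query corresponds to a $vectorSearch aggregation stage. A minimal Python sketch, assuming a database named travel and the ada-002 embedding model (1536 dimensions, matching the index above):

```python
from pymongo import MongoClient
from openai import OpenAI

mongo = MongoClient("<ATLAS_CONNECTION_STRING>")
coll = mongo["travel"]["points_of_interest"]   # database name is an assumption
ai = OpenAI()

query = "quiet beaches near Lisbon"
embedding = ai.embeddings.create(
    model="text-embedding-ada-002",   # 1536 dims, matches numDimensions
    input=query,
).data[0].embedding

results = coll.aggregate([{
    "$vectorSearch": {
        "index": "vector_index",      # the index defined above
        "path": "embedding",
        "queryVector": embedding,
        "numCandidates": 100,
        "limit": 5,
    }
}])
for doc in results:
    print(doc.get("title"))
```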
by Lucas Correia
## What Does This Flow Do?

This workflow demonstrates how to dynamically generate a line chart using the QuickChart node, based on data provided in a JSON object, and then upload the resulting chart image to Google Drive.

## Use Cases

- Presentations, or generating charts on demand from other software via HTTP requests.
- Automated report generation (e.g., daily sales charts).
- Visualizing data fetched from APIs or databases.
- Simple monitoring dashboards.
- Adding charts to internal tools or notifications.

## How it Works

1. **Trigger**: The workflow starts manually when you click 'Test workflow'.
2. **Set Sample Data**: A Set node (Edit Fields: Set JSON data to test) defines a sample JSON object named jsonData. This object contains:
   - reportTitle: A title (not used in the chart generation in this example, but useful for context).
   - labels: An array of strings for the chart's X-axis labels (e.g., ["Q1", "Q2", "Q3", "Q4"]).
   - salesData: An array of numbers for the chart's Y-axis data points (e.g., [1250, 1800, 1550, 2100]).
3. **Generate Chart**: The QuickChart node is configured to:
   - Create a line chart.
   - Dynamically read labels from the jsonData.labels array (Labels Mode: From Array).
   - Use the jsonData.salesData array as the input data. (Note: this configuration places data in the top-level 'Data' field. For more complex charts with multiple datasets or specific dataset options, configure datasets under 'Dataset Options' instead.)
   - Output the generated chart image as binary data in a field named data.
4. **Upload to Google Drive**: The Google Drive node (Google Drive: Upload File) takes the binary data (data) from the QuickChart node, uploads the image to your specified Google Drive folder, and dynamically names the file based on its extension (e.g., chart.png).

## Setup Steps

1. **Import**: Import this template into your n8n instance.
2. **Configure Google Drive Credentials**: Select the Google Drive: Upload File node. You MUST configure your own Google Drive credentials: click the 'Credentials' dropdown and either select existing credentials or create new ones by following the authentication prompts.
3. **(Optional) Customize Google Drive Folder**: In the Google Drive: Upload File node, you can change the Drive ID and Folder ID to specify exactly where the chart should be uploaded.
4. **Activate**: Activate the workflow if you want it to run automatically based on a different trigger.

## How to Use & Customize

- **Change Input Data**: Modify the labels and salesData arrays within the Edit Fields: Set JSON data to test node to use your own data. Ensure the number of labels matches the number of data points.
- **Use Real Data Sources**: Replace the Edit Fields: Set JSON data to test node with nodes that fetch data from real sources, such as HTTP Request (APIs), Postgres / MongoDB nodes (databases), or the Google Sheets node. Ensure the output data from your source node is formatted similarly (providing labels and salesData arrays); you might need another Set node to structure the data correctly before the QuickChart node.
- **Change Chart Type**: In the QuickChart node, modify the Chart Type parameter (e.g., change from line to bar, pie, doughnut, etc.).
- **Customize Chart Appearance**: Explore the Chart Options parameter within the QuickChart node to add titles, change colors, modify axes, etc., using QuickChart's standard JSON configuration options.
- **Use Datasets (Recommended for Complex Charts)**: For multiple lines/bars or more control, configure datasets explicitly in the QuickChart node: remove the expression from the top-level Data field, go to Dataset Options -> Add option -> Add dataset, and set the Data field within the dataset using an expression like {{ $json.jsonData.salesData }}. You can add multiple datasets this way.
- **Change Output Destination**: Replace the Google Drive: Upload File node with other nodes to handle the chart image differently:
  - Write Binary File: Save the chart to the local filesystem where n8n is running.
  - Slack / Discord / Telegram: Send the chart to messaging platforms.
  - Move Binary Data: Convert the image to Base64 to embed in HTML or return via a webhook response.

## Nodes Used

- Manual Trigger
- Set
- QuickChart
- Google Drive

Tags: QuickChart, Chart, Visualization, Line Chart, Google Drive, Reporting, Automation
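The QuickChart node ultimately renders a standard Chart.js configuration. A small Python sketch of the equivalent direct call to quickchart.io, using the same sample data as the Set node:

```python
import json
import requests

# Same shape as the sample data in the "Edit Fields" node
labels = ["Q1", "Q2", "Q3", "Q4"]
sales_data = [1250, 1800, 1550, 2100]

config = {
    "type": "line",   # swap for "bar", "pie", "doughnut", etc.
    "data": {
        "labels": labels,
        "datasets": [{"label": "Sales", "data": sales_data}],
    },
}

resp = requests.get(
    "https://quickchart.io/chart",
    params={"c": json.dumps(config)},
    timeout=30,
)
with open("chart.png", "wb") as f:
    f.write(resp.content)   # same binary image the n8n node outputs
```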
by Hueston
## Who is this for?

- Sales professionals looking to build lead lists from target company domains
- Business development teams conducting outreach campaigns
- Marketers building contact databases for account-based marketing
- Recruiters searching for potential candidates at specific companies
- Anyone needing to transform a list of company domains into actionable contact information

## What problem is this workflow solving?

Finding business email addresses for outreach is a time-consuming process. The Apollo API doesn't provide a direct way to extract email contacts from domains in a single call. This workflow bridges that gap by:

- Automating the two-step process required by Apollo's API
- Processing multiple domains in batches without manual intervention
- Extracting, enriching, and storing contact information in a structured format
- Eliminating hours of manual data entry and API interaction

## What this workflow does

This workflow creates an automated pipeline between Google Sheets and Apollo's API to:

1. Pull a list of target domains from a Google Sheet
2. Submit each domain to Apollo's search API to find associated people
3. Loop through each person found and enrich their profile data
4. Extract key information: name, title, email address, and LinkedIn URL
5. Write the enriched contact information back to a results sheet
6. Process the next domain automatically until all are complete

## Setup

### Prerequisites

- An n8n instance (cloud or self-hosted)
- Apollo.io account with API access
- Google account with access to Google Sheets

### Google Sheets Setup

Create a new Google Sheet with two tabs:

- Tab 1: "Target Domains" with a column named "Domain To Enrich"
- Tab 2: "Results" with columns: Company, First Name, Last Name, Title, Email, LinkedIn

### n8n Setup

1. Import the workflow JSON into your n8n instance
2. Set up Google Sheets credentials in n8n
3. Update the Google Sheets document ID in both Google Sheets nodes
4. Add your Apollo API key to both HTTP Request nodes
5. Review and adjust API rate limits if needed

### Testing

1. Add a few test domains to your "Target Domains" sheet
2. Run the workflow manually to verify it's working correctly
3. Check the "Results" sheet to confirm data is being properly populated

## How to customize this workflow to your needs

### Adding More Contact Fields

- Modify the "Clean Up" node to extract additional fields from the Apollo API response
- Add corresponding columns to your "Results" sheet
- Update the "Results To Results Sheet" node mapping to include the new fields

### Filtering Results

- Add a Filter node after "Clean Up" to include only contacts with specific roles
- Create conditions based on title, seniority, or other fields returned by Apollo

### Automating Workflow Execution

- Replace the manual trigger with a Schedule Trigger to run daily/weekly
- Add a Filter node to process only domains with "Not Processed" status
- Update the status field in Google Sheets after processing

## Additional Notes

- This workflow respects Apollo's API rate limits by processing one contact at a time
- The Apollo API may not return contact information for all domains or all employees
- Consider legal and privacy implications when collecting and storing contact information

Made with ❤️ by Hueston
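For reference, the two-step Apollo sequence the HTTP Request nodes perform looks roughly like this in Python. The endpoints shown are Apollo's commonly used v1 search and match calls, but the exact parameters and response fields should be confirmed against Apollo's current API docs:

```python
import requests

HEADERS = {"X-Api-Key": "<APOLLO_API_KEY>", "Content-Type": "application/json"}

# Step 1: find people associated with the target domain
search = requests.post(
    "https://api.apollo.io/v1/mixed_people/search",
    headers=HEADERS,
    json={"q_organization_domains": "example.com", "page": 1},
    timeout=30,
).json()

# Step 2: enrich each person found to reveal their contact details
for person in search.get("people", []):
    match = requests.post(
        "https://api.apollo.io/v1/people/match",
        headers=HEADERS,
        json={"id": person["id"]},
        timeout=30,
    ).json().get("person", {})
    print(match.get("name"), match.get("title"),
          match.get("email"), match.get("linkedin_url"))
```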
by Aitor | 1Node
Turn Gumroad buyers into loyal email subscribers and keep your CRM up to date.

When someone makes a purchase on your Gumroad store, this n8n workflow instantly adds that customer to the right MailerLite group (so your nurture sequence starts on time) and writes the sale details into your Google Sheets CRM. You'll never copy-and-paste orders again, and every buyer begins receiving your follow-up emails the moment they purchase.

## Requirements

- A Gumroad account with a product listed
- A MailerLite account, with a MailerLite subscriber group created
- Enabled APIs and credentials for Google Sheets, MailerLite, and Gumroad

## How it works

1. **Listen for a new sale on Gumroad**: The Gumroad trigger watches your account 24/7 and fires as soon as a sale is completed.
2. **Create (or update) the subscriber in MailerLite**: The buyer's name and email are added to MailerLite; if they already exist, the workflow simply updates their profile.
3. **Assign the subscriber to your Gumroad group**: Grouping lets your MailerLite automation send the right onboarding or upsell sequence without manual tagging.
4. **Log the purchase in Google Sheets**: The buyer's contact details, product, price, and date are appended as a new row in your CRM sheet.

## Set-up steps

1. Create an application in Gumroad and copy the access token; you'll paste it into the Gumroad trigger node.
2. Grab your MailerLite API key (MailerLite dashboard → Integrations → API) and paste it into the two MailerLite nodes.
3. Prepare a Google Sheets spreadsheet with column headers like Name, Email, Product, Price, Date.
4. Open the template in n8n Cloud or Desktop:
   - In the Gumroad node, paste your token.
   - In the MailerLite nodes, paste your API key and replace the group ID.
   - In the Google Sheets node, replace the credentials and pick your spreadsheet and worksheet.

## Get in touch with us

Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
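For reference, the create-or-update call the MailerLite nodes make maps onto MailerLite's subscribers endpoint. A minimal Python sketch; the group ID and sale fields are placeholders:

```python
import requests

# Placeholder sale data; in the workflow this comes from the Gumroad trigger
sale = {"email": "buyer@example.com", "full_name": "Jane Buyer"}

resp = requests.post(
    "https://connect.mailerlite.com/api/subscribers",
    headers={"Authorization": "Bearer <MAILERLITE_API_KEY>"},
    json={
        "email": sale["email"],
        "fields": {"name": sale["full_name"]},
        "groups": ["123456789"],   # your Gumroad buyers group ID
    },
    timeout=30,
)
# The endpoint upserts: it creates the subscriber or updates an existing one
print(resp.status_code)
```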