by Syamsul Bahri
This workflow integrates HetrixTools with WhatsApp via the GOWA API to automate notifications about server monitoring events. It distinguishes between Uptime Monitoring and Resource Usage Monitoring events, formats the message accordingly, and sends it to a WhatsApp number using the GOWA WhatsApp REST API. It's especially useful for DevOps engineers, sysadmins, or teams who need real-time server alerts delivered via WhatsApp.

## ⚙️ Setup Instructions

1. **Set up HetrixTools:**
   - Create a HetrixTools account at https://hetrixtools.com/register
   - Create your Uptime Monitors and/or enable Resource Usage Monitoring for your servers.
   - Go to your HetrixTools contact settings and add the n8n Webhook URL provided by the workflow. Make sure to select this contact in your monitor's alert settings.
2. **Configure the n8n Webhook:**
   - Set the Webhook node to HTTP method: POST.
   - Ensure it is accessible via a public URL (you can use n8n Cloud, a reverse proxy, or a tunnel like ngrok for testing).
3. **Customize the WhatsApp message:**
   - The workflow includes a conditional branch to check whether the event is a Resource Usage alert or an Uptime alert. A hedged sketch of this branching logic appears after the prerequisites below.
   - Each branch contains editable text nodes for customizing the WhatsApp message content.
4. **Set up the GOWA WhatsApp API:**
   - Make sure your GOWA instance is running and accessible.
   - Create the necessary credentials (API key, base URL, etc.).
   - In n8n, add the credentials and fill in the sendChatPresence and sendText nodes accordingly.
5. **Deploy the workflow:**
   - Save and activate the workflow.
   - Trigger a test alert from HetrixTools to verify that messages are received on WhatsApp.

## 🧱 Prerequisites

- An active HetrixTools account with Uptime Monitors or Resource Usage Monitoring enabled.
- A publicly accessible instance of n8n with the Webhook node enabled.
- Access to a running and configured GOWA (WhatsApp REST API) server.
- Required credentials configured in n8n for GOWA (API key, URL, phone number, etc.).
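To make the conditional branch concrete, here is a minimal sketch of the message-formatting step written as n8n Code-node JavaScript. The HetrixTools payload field names used here (monitor_name, monitor_status, resource_type, usage) are illustrative assumptions, not the documented schema — fire a test alert and inspect the real payload before adapting this. The workflow itself implements this step with its conditional branch and editable text nodes rather than code.

```javascript
// Hypothetical sketch of the Uptime vs. Resource Usage branch as a single
// n8n Code node. Field names below are assumptions for illustration only —
// inspect a real HetrixTools test alert for the actual payload schema.
const body = $json.body ?? $json; // the Webhook node nests the POST payload under `body`

let message;
if (body.resource_type !== undefined) {
  // Resource Usage Monitoring event (assumed marker field)
  message = `⚠️ Resource alert on ${body.monitor_name}: ` +
            `${body.resource_type} at ${body.usage}%`;
} else {
  // Uptime Monitoring event
  message = `🔔 Uptime alert: ${body.monitor_name} is ${body.monitor_status}`;
}

// Pass the formatted text on to the GOWA sendText node, which delivers it
// to the target WhatsApp number via your GOWA instance's REST API.
return [{ json: { message } }];
```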
by JustCallMeBlue
Greetings! If you're in need of a quick and dirty cache that doesn't need anything other than the current version of N8N, boy do I have a dodgy script for you to try! Presenting a simple template for a cache layer using the new N8N Tables beta! This flow can easily be adapted to meet your home-lab or enterprise needs by swapping the N8N Tables nodes out for an external database like Redis, or even a Google Sheet if one were so inclined.

## Why Would You Want This?

It's simple, really: ever had your flow crash because one of the APIs you're using has a rate limit lower than the 10,000 GET requests you're throwing at it per second (yawn)? Well, caching can help by adding a small buffer on your side when the data is not likely to change between requests. Simply add a call to this flow to check the cache; if there's nothing there, it will throw an error, which you can detect and respond to by grabbing fresh data from the API.

## How This Flow Works

This flow does three simple steps in order:

1. Check what you want to do to the cache table: read or write.
2. If reading, look up the data | If writing, write the data.
3. If reading, validate and return | If writing, just return the data written, for chaining.

This subflow returns the JSON.parse() representation of the string currently stored at the key in the cache table. This will be the same value written by the cache-write input to this node, provided it has not expired. If no value is found in the cache table for the input key, an error is thrown. Listen for this error by setting your error response mode to {On Error: "Continue (Using Error Output)"} in the node settings. This is your signal to "refresh" the cache by writing a new value to it.

## Inputs

Action: Read Cache
- "cacheKey": {any string}

Action: Write Cache
- "cacheKey": {any string}
- "trueToWrite": true
- "writeValue": {any value, including null. You are limited to the data size of the table's string field, so don't stuff 20MB of JSON here.}
- "writeTTLms": {optional; any number above 0, as milliseconds. Defaults to 10000}

## Setup

Okay, onto the good bit: how to set this up locally for yourself. This flow requires the beta version of N8N Tables to function, so update if you are running an older version. You will need a table called "cache" <- all lowercase. That table will need the following columns, again all lowercase:

- key: string
- ttl: datetime
- value: string

Once you create this table, download, import, or copy-paste the flow into your N8N. Go to every "table" node in the flow and update the settings of the node to point to your newly created table. (Be sure to press the "refresh" icon in the node configuration menu to ensure you're binding to the correct columns in the table. It should appear after you update the table; it's small, so keep an eye out.)

## How to Use

Call this flow via the Execute Sub-Flow node with the inputs specified above (illustrative sketches of the read and write paths follow below). (Optional) You can also "activate" this flow to enable hourly cleaning of the cache table, to help keep data sizes down. This is an example of a quick cache that cannot (and should not) hold onto a large amount of data; be sure to pay attention to the current 50MB limit of Tables, as writing lots of large data blocks will result in limits being hit.
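To make the "validate and return" and write steps concrete, here are two minimal Code-node-style sketches. They assume the table schema from the Setup section (key: string, ttl: datetime, value: string); the actual template implements these steps with Tables and If nodes, so treat these as illustrations of the logic rather than a copy of the flow.

```javascript
// Read path (illustrative): given the row looked up from the "cache" table,
// validate the TTL and return the parsed value, or throw to signal a miss.
const row = $json; // assumed shape: { key, ttl, value }

// No row, or the stored ttl datetime is in the past -> treat as a miss.
if (!row || !row.value || new Date(row.ttl) <= new Date()) {
  // The caller catches this via On Error: "Continue (Using Error Output)"
  // and refreshes the cache with a Write Cache call.
  throw new Error(`Cache miss for key: ${row?.key ?? 'unknown'}`);
}

// value is stored as a string, so return its JSON.parse() representation —
// the same value originally passed as writeValue.
return [{ json: { value: JSON.parse(row.value) } }];
```

And the write path, using the defaults from the Inputs section:

```javascript
// Write path (illustrative): build the row to upsert into the "cache" table.
const { cacheKey, writeValue, writeTTLms } = $json;

// writeTTLms is optional and defaults to 10000 ms.
const ttlMs = writeTTLms > 0 ? writeTTLms : 10000;

return [{
  json: {
    key: cacheKey,
    ttl: new Date(Date.now() + ttlMs).toISOString(), // expiry = now + TTL
    value: JSON.stringify(writeValue), // serialized; parsed again on read
  },
}];
```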
## Quick FAQ

**Why is this an example and not a proper cache?** Good question, glad you asked. This flow is "good enough" for a lot of simple use cases, and should be replaced with a more robust caching solution once performance or memory becomes a bottleneck. If you're just checking a weather API for yourself and a few friends to serve on your test app, this will be fine. If you're trying to create the next Twitter (now called X), where there are gigs of data moving every second and you need a large cache to make sure your database doesn't go down, perhaps consider not using an N8N flow for that purpose.

**Okay, so when should I use this then?** As mentioned above: when you have relatively small amounts of data to store and it's not likely to change very often, this flow should solve your problem. Quick examples of things to cache include:

- Checking the weather once daily and storing it.
- Scraping a webstore to check the daily price of a product you're looking to buy once it's on sale.
- Checking Google Trends to see if your company is trending right now.

**Those examples seem odd; why'd you choose those?** Because caching is one of those hard things in computer science that no one really gets right ("cache invalidation, naming things, and off-by-one errors"). Caching should really be done on a case-by-case basis. For example, the author of this workflow uses it behind some webhook flows for data that takes a long time to grab from the database but only changes 2-5 times a day.

**Do I have to do things the way you did?** No? This is an example of something I threw together in an afternoon to solve a real problem I had. N8N is very flexible, and this happened to work out for me. If you don't like the way the example names things or implements its cache invalidation, try things yourself and experiment; you may find that this example solves your problems by itself without being overly complex. Only you know what will and won't work to solve your problems.
by Mohammadreza azari
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Google Search Console – Discover New Keywords (Last 7 Days)

This n8n template demonstrates how to identify new queries in Google Search Console that had no impressions in the past but appeared for the first time in the last 7 days. It also segments them into two groups: queries with impressions but no clicks yet (Zero Click) and queries that already have clicks (Has Click).

Use cases include:
- Finding emerging SEO opportunities.
- Identifying keywords where you already get impressions but need to optimize for clicks.
- Tracking newly discovered queries week over week.

## Good to know
- This workflow requires a connected Google Search Console account.
- You can adjust the date ranges in the Compare search analytics node to suit your needs.
- It works best when scheduled weekly, but this template starts with manual execution for flexibility.

## How it works
1. Manual Start – Run the workflow manually to fetch fresh data.
2. Compare Search Analytics – Compares the last 7 days against a custom reference period in Google Search Console.
3. Filter (No Past Impressions) – Keeps only queries that had zero impressions in the reference period.
4. Zero Click – Filters queries with impressions but no clicks.
5. Has Click – Filters queries with impressions and clicks.

The final output is two clean data sets (see the sketch at the end of this section):
- **Zero Click queries**: impressions but no clicks → improve meta descriptions, titles, or content relevance.
- **Has Click queries**: new queries already generating clicks → consider creating supporting content and optimizing further.

## How to use
1. Start with the Manual Start node.
2. Add your Google Search Console credentials in the Compare search analytics node.
3. Optionally, replace the Manual Start with a Schedule Trigger (e.g., weekly) to automate monitoring.
4. Export results or connect them to Slack, Email, or Google Sheets for reporting.

## Requirements
- **Google Search Console account** (with property access).
- An n8n instance with the Google Search Console integration enabled.

## Customising this workflow
- Adjust the date ranges in the compare node (e.g., last 30 days vs. previous 30 days).
- Replace the output branches with integrations like Slack, Notion, or Google Sheets to automatically deliver keyword reports.
- Extend the workflow with additional filters, such as filtering by CTR, country, or device.
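For clarity, the sketch below restates the three filter steps as plain JavaScript. The field names (impressionsPrevious, impressionsCurrent, clicksCurrent) and the sample rows are assumptions for illustration only; the template itself uses Filter nodes on the Compare search analytics output, whose real field names may differ.

```javascript
// Illustrative sketch of the filtering logic. Field names and sample data
// are assumed — inspect the compare node's real output before reusing.
const rows = [
  { query: 'example keyword', impressionsPrevious: 0,  impressionsCurrent: 42, clicksCurrent: 0 },
  { query: 'another keyword', impressionsPrevious: 0,  impressionsCurrent: 17, clicksCurrent: 3 },
  { query: 'old keyword',     impressionsPrevious: 90, impressionsCurrent: 80, clicksCurrent: 5 },
];

// Step 1: keep only queries with zero impressions in the reference period
// that appeared in the last 7 days.
const newQueries = rows.filter(r => r.impressionsPrevious === 0 && r.impressionsCurrent > 0);

// Steps 2-3: split into the two output data sets.
const zeroClick = newQueries.filter(r => r.clicksCurrent === 0); // impressions, no clicks yet
const hasClick  = newQueries.filter(r => r.clicksCurrent > 0);   // already generating clicks

console.log({ zeroClick, hasClick });
```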