by Trung Tran
AWS IAM Access Key Rotation Reminder Automation Workflow

Watch the demo video below:

## Who's it for

- DevOps/SRE teams responsible for AWS account security.
- Security/compliance officers ensuring key rotation policies are followed.
- Any AWS account owner who wants automatic detection of stale access keys.

## How it works / What it does

1. **Weekly Scheduler** — triggers the workflow on a recurring basis.
2. **Get Many Users** — fetches all IAM users in the AWS account.
3. **Get User Access Key(s)** — retrieves the access keys associated with each user.
4. **Filter Out Inactive Keys** — removes keys that are not active (e.g., status `Inactive`).
5. **Access Key Older Than 365 Days** — checks the key creation date and flags keys older than one year.
6. **Send Slack Message** — notifies a Slack channel with details of the outdated key(s) for review and action.
7. **No Operation** — safely ends the workflow if no keys match the condition.

## How to set up

1. Configure the Weekly Scheduler to run at your desired cadence (e.g., every Monday).
2. Use Get Many Users to list all IAM users.
3. For each user, call `ListAccessKeys` (Get User Access Key(s)) to fetch their key metadata.
4. Apply a filter to keep only keys with status `Active`.
5. Add a condition to compare `CreateDate` against today minus 365 days.
6. Send results to Slack using the Slack Post Message node.

## Requirements

- n8n (latest version).
- AWS credential in n8n configured for `us-east-1` (IAM requires signing with this region).
- IAM permissions: `iam:ListUsers`, `iam:ListAccessKeys`.
- Slack bot credentials with permission to post messages in the desired channel.

## How to customize the workflow

- **Change threshold** — adjust the 365-day condition to 90, 180, or any other rotation policy.
- **Escalation** — mention @security or create a Jira ticket when old keys are found.
- **Logging** — push flagged results into a Google Sheet, database, or log management system for audit.
- **Automation** — instead of only notifying, add a step to automatically deactivate keys older than the threshold (after approval).
- **Multi-account support** — duplicate or loop across multiple AWS credentials if you manage several AWS accounts.
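The filter and age-check steps above could be sketched in an n8n Code node roughly like this. The `Status` and `CreateDate` field names match the IAM `ListAccessKeys` response; the key IDs below are made-up examples, and your node's input shape may differ:

```javascript
// Sketch: flag active access keys older than a rotation threshold.
// Field names (Status, CreateDate) follow IAM's ListAccessKeys response.
const THRESHOLD_DAYS = 365;

function findStaleKeys(keys, now = new Date()) {
  const cutoff = new Date(now.getTime() - THRESHOLD_DAYS * 24 * 60 * 60 * 1000);
  return keys
    .filter((k) => k.Status === 'Active')            // drop inactive keys first
    .filter((k) => new Date(k.CreateDate) < cutoff); // keep keys past the threshold
}

const stale = findStaleKeys([
  { AccessKeyId: 'AKIAOLDKEY', Status: 'Active', CreateDate: '2020-01-15T00:00:00Z' },
  { AccessKeyId: 'AKIANEWKEY', Status: 'Active', CreateDate: new Date().toISOString() },
  { AccessKeyId: 'AKIAOFFKEY', Status: 'Inactive', CreateDate: '2019-06-01T00:00:00Z' },
]);
console.log(stale.map((k) => k.AccessKeyId)); // → [ 'AKIAOLDKEY' ]
```

In the actual workflow the same comparison can live in an IF node expression instead of code; the logic is identical.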
by Łukasz
## What Is This?

This workflow monitors your Elastic Email subaccounts daily and sends a Slack alert whenever an account's email credit balance drops below a configurable threshold. It's a simple but essential guard against unexpected sending failures caused by depleted credits.

## Who Is It For?

Any team or agency managing multiple Elastic Email subaccounts — marketing departments, email service providers, or developers running automated email campaigns — who want proactive warnings before credits run out.

## How Does It Work?

The workflow runs once a day on a schedule. It calls the Elastic Email REST API to retrieve all subaccount data, then filters out any accounts with a credit balance below the minimum you define in the Config node. If any low-credit accounts are found, a Slack message listing each account's email address and current credit balance is sent. API errors are caught separately and also reported to Slack.

## How To Set It Up?

Prerequisites:

- An Elastic Email account with API access
- A Slack workspace with bot permissions

### Step 1: Set the credit threshold

In the Config node, set **Minimum amount of Email Credits** to the value below which you want to be notified (default: 100).

### Step 2: Configure Elastic Email credentials

In the Load EE Subaccounts HTTP Request node, create a new Custom Auth credential with the following JSON:

```json
{
  "headers": {
    "X-ElasticEmail-ApiKey": "<Your EE Auth Token>"
  }
}
```

You can generate an API token directly in your Elastic Email account settings.

### Step 3: Configure Slack credentials

Connect your Slack workspace using OAuth2 in both Slack nodes, and update the channel ID to your desired notification channel.

Need help? Reach out at developers@sailingbyte.com or visit sailingbyte.com. Happy hacking!
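The filtering step could look roughly like this in an n8n Code node. Note that the `email` and `emailcredits` field names are assumptions about the Elastic Email subaccount payload — check the actual response from your HTTP Request node and adjust:

```javascript
// Sketch: keep only subaccounts below a credit threshold and format
// a line per account for the Slack message. Field names are assumed.
const MIN_CREDITS = 100; // mirrors the Config node default

function lowCreditAccounts(subaccounts, minCredits = MIN_CREDITS) {
  return subaccounts
    .filter((a) => Number(a.emailcredits) < minCredits)
    .map((a) => `${a.email}: ${a.emailcredits} credits left`);
}

console.log(lowCreditAccounts([
  { email: 'newsletter@example.com', emailcredits: 42 },
  { email: 'billing@example.com', emailcredits: 5000 },
]));
// → [ 'newsletter@example.com: 42 credits left' ]
```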
by PollupAI
This workflow provides a powerful way to automatically document and maintain an inventory of all your n8n workflows in a Google Sheet. By running on a schedule or manually, it fetches details about every workflow on your instance, processes the key information, and then populates a spreadsheet. This creates a centralized, up-to-date dashboard for auditing, monitoring, and understanding your automation landscape.

## Who is this for?

This workflow is ideal for n8n administrators, developers, and teams who manage multiple workflows. If you need a clear and simple way to track all your automations, their components, and their statuses without manually checking each one, this template is for you. It's particularly useful for maintaining technical documentation, auditing node usage across your instance, and quickly finding specific workflows.

## What problem is this workflow solving?

As the number of workflows on an n8n instance grows, it becomes challenging to keep track of them all. Questions like "Which workflows use the HubSpot node?", "Which workflows are inactive?", or "When was this workflow last updated?" become difficult to answer. This workflow solves that problem by creating a single source of truth in a Google Sheet. It automates the process of cataloging your workflows, saving you time and ensuring your documentation is always current.

## What this workflow does

1. **Triggers Execution**: The workflow can be initiated either on a set schedule (via the Scheduled Start node) or manually (via the Manual Start node).
2. **Fetches All Workflows**: The Get All Workflows node connects to your n8n instance via the API to retrieve a complete list of your workflows and their associated data.
3. **Processes Workflows Individually**: The Loop Through Each Workflow node iterates through each retrieved workflow one by one so they can be processed individually.
4. **Extracts Key Information**: The Extract Workflow Details node uses custom code to process the data for each workflow, extracting essential details like its name, ID, tags, and a unique list of all node types it contains.
5. **Updates Google Sheet**: The Add/Update Row in Google Sheet node takes this information and appends or updates a row in your designated spreadsheet, using the workflow ID as a unique key to prevent duplicates.
6. **Waits and Repeats**: The Pause to Avoid Rate Limits node adds a short delay to prevent issues with API limits before the loop continues to the next workflow.

## Setup

1. **Configure Get All Workflows Node**: Select the Get All Workflows node. In the 'Credentials' section, provide your n8n API credentials to allow the workflow to access your instance's data.
2. **Prepare Your Google Sheet**: Create a new Google Sheet. Set up the following headers in the first row: id, title, link, tags, nodes, CreatedAt, UpdatedAt, Active, Archived.
3. **Configure Add/Update Row in Google Sheet Node**: Select the Add/Update Row in Google Sheet node. Authenticate your Google account in the 'Credentials' section. In the 'Document ID' field, enter the ID of your Google Sheet; you can find this in the sheet's URL (e.g., .../spreadsheets/d/THIS_IS_THE_ID/edit). Select your sheet from the 'Sheet Name' dropdown. Under 'Columns', ensure the id field is set as the 'Matching Columns' value — this is crucial for updating existing rows correctly.
4. **Activate the Workflow**: Choose your preferred trigger. You can enable the Schedule Trigger to run the sync automatically at regular intervals. Save and activate the workflow.

## How to customize this workflow to your needs

- **Track Different Data**: You can modify the Extract Workflow Details node to extract other pieces of information from the workflow JSON. For example, you could parse the settings object or count the total number of nodes. Remember to add a corresponding column in your Google Sheet and map it in the Google Sheets node.
- **Add Notifications**: Add a notification node (like Slack, Discord, or Email) after the Loop Through Each Workflow node (on the second output) to be alerted when the sync is complete or if an error occurs.
- **Filter Workflows**: You can add an IF node after the Loop Through Each Workflow node to filter which workflows get added to the sheet. For instance, you could choose to only log active workflows ({{ $('Loop Through Each Workflow').item.json.active }} is true) or workflows containing a specific tag.
- **Adjust Wait Time**: The Pause to Avoid Rate Limits node is set to pause between each entry. You can adjust this time or remove it entirely if you have a small number of workflows and are not concerned about hitting API rate limits.
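The Extract Workflow Details step could be sketched like this. The `nodes[].type`, `name`, `tags`, and `active` fields match n8n's workflow JSON; the shape of the returned row object is illustrative:

```javascript
// Sketch: derive sheet-row fields from a workflow's JSON, including a
// de-duplicated, sorted list of node types used by the workflow.
function extractWorkflowDetails(workflow) {
  const nodeTypes = [...new Set((workflow.nodes || []).map((n) => n.type))].sort();
  return {
    id: workflow.id,
    title: workflow.name,
    tags: (workflow.tags || []).map((t) => t.name).join(', '),
    nodes: nodeTypes.join(', '),
    active: Boolean(workflow.active),
  };
}

const row = extractWorkflowDetails({
  id: '42',
  name: 'Invoice Sync',
  active: true,
  tags: [{ name: 'finance' }],
  nodes: [
    { type: 'n8n-nodes-base.scheduleTrigger' },
    { type: 'n8n-nodes-base.googleSheets' },
    { type: 'n8n-nodes-base.googleSheets' }, // duplicates collapse via the Set
  ],
});
console.log(row.nodes); // → 'n8n-nodes-base.googleSheets, n8n-nodes-base.scheduleTrigger'
```

Because the node-type list is deterministic (sorted and de-duplicated), re-running the sync produces stable rows, which keeps the "update by id" matching clean.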
by isaWOW
Automatically track domain expiry dates from Google Sheets, fetch real-time DNS expiry data via WHOIS API, and update expiry details back to your sheet with zero manual effort.

## Automated Domain Expiry Date Tracker with Google Sheets & WHOIS API

Automate the entire process of monitoring domain expiry dates for all your websites directly from Google Sheets. This workflow reads domain names, fetches DNS SOA expiry information using the WHOIS API, converts timestamps into readable dates, and updates expiry details back into your tracking sheet — fully automated and rate-limit safe. Perfect for SEO teams, agencies, hosting managers, and businesses managing large domain portfolios.

## What this workflow does

This automation handles four key tasks:

1. **Reads domain data**: pulls all website domains directly from a Google Sheet.
2. **Fetches expiry details**: uses the WHOIS API to retrieve DNS SOA records for each domain.
3. **Processes expiry dates**: converts expiry timestamps into human-readable DD-MM-YYYY format and extracts the expiry month and year automatically.
4. **Updates tracking sheet**: writes the expiry date, month name, and year back to Google Sheets, processing domains one by one with a controlled delay to avoid API limits.

## How it works

The workflow starts manually and loads configuration values such as the Google Sheet ID and sheet name. It reads all domains listed in the Websites column and processes them in a loop. For each domain, the workflow calls the WHOIS API to fetch DNS SOA records. The expiry timestamp is extracted, converted into a readable date format, and enriched with expiry month and year values. Once processed, the workflow updates the same Google Sheet row with the new expiry information. A 30-second pause is applied before moving to the next domain to ensure API safety and stability.

## Setup requirements

Accounts needed:

- n8n instance (self-hosted or cloud)
- Google account with Google Sheets access
- RapidAPI account with WHOIS API access

Estimated setup time: 10 minutes

## Setup steps

1. **Import workflow**: Copy the workflow JSON, open n8n → Workflows → Import from JSON, paste and import, then verify all nodes are connected correctly.
2. **Configure Google Sheets**: Create a Google Sheet with a Websites column, add a Google Sheets OAuth2 credential in n8n, and paste your Sheet ID and sheet name inside the Set Sheet Configuration node.
3. **Configure WHOIS API**: Get your RapidAPI WHOIS API key, add it to the Fetch DNS Records via WHOIS API HTTP Request node, and test the API request.
4. **Verify data mapping**: Ensure expiry values map correctly to Domain Expiry, Expiry Month, and Expiry Year.
5. **Run and monitor**: Run the workflow manually, check execution logs, and verify that expiry data updates correctly in Google Sheets.

## What data gets updated

Domain data:

- Domain name
- Expiry date (DD-MM-YYYY)
- Expiry month (January–December)
- Expiry year

Sheet updates:

- Existing rows are matched using the Websites column
- No duplicate rows are created

## Use cases

- **SEO management**: prevent domain expiries that can hurt rankings
- **Agency operations**: track client domains in one central sheet
- **Hosting monitoring**: stay ahead of renewal deadlines
- **Portfolio management**: manage hundreds of domains automatically

## Important notes

- Replace the WHOIS API key before activating.
- Google Sheets column names must match exactly.
- The workflow runs sequentially to avoid rate limits; one domain is processed at a time.
- Expiry accuracy depends on DNS SOA availability.

## Support

Need help or custom development?
📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
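The timestamp-conversion step could look roughly like this in an n8n Code node. It assumes the WHOIS API returns the expiry as a Unix timestamp in seconds — if your API returns an ISO date string instead, skip the `* 1000` conversion:

```javascript
// Sketch: convert an expiry timestamp into DD-MM-YYYY plus the month
// name and year that the workflow writes back to the sheet.
const MONTHS = ['January', 'February', 'March', 'April', 'May', 'June',
  'July', 'August', 'September', 'October', 'November', 'December'];

function formatExpiry(unixSeconds) {
  const d = new Date(unixSeconds * 1000); // assumes seconds, not milliseconds
  const pad = (n) => String(n).padStart(2, '0');
  return {
    expiryDate: `${pad(d.getUTCDate())}-${pad(d.getUTCMonth() + 1)}-${d.getUTCFullYear()}`,
    expiryMonth: MONTHS[d.getUTCMonth()],
    expiryYear: d.getUTCFullYear(),
  };
}

console.log(formatExpiry(1767139200));
// → { expiryDate: '31-12-2025', expiryMonth: 'December', expiryYear: 2025 }
```

Using the UTC getters avoids the expiry date shifting by a day depending on the server's timezone.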
by Asfandyar Malik
**Short Description:** Automatically collect and analyze your competitor's YouTube performance. This workflow extracts video titles, views, likes, and descriptions from any YouTube channel and saves the data to Google Sheets — helping creators spot viral trends and plan content that performs.

## Who's it for

For content creators, YouTubers, and marketing teams who want to track what's working for their competitors — without manually checking their channels every day.

## How it works

This workflow automatically collects data from any YouTube channel you enter. You just write the channel name in the form; n8n fetches the channel ID, gets all recent video IDs, and extracts each video's title, views, likes, and description. Finally, all the information is saved neatly into a connected Google Sheet for analysis.

## How to set up

1. Create a Google Sheet with columns for Title, Views, Likes, Description, and URL.
2. Connect your Google account to n8n.
3. Add your YouTube Data API key inside the HTTP Request nodes (use n8n credentials, not hardcoded keys).
4. Update your form submission or trigger node to match your input method.
5. Execute the workflow once to test and verify that data is flowing into your sheet.

## Requirements

- YouTube Data API key
- Google Sheets account
- n8n cloud or self-hosted instance

## How to customize

You can modify the JavaScript code node to include more metrics (such as comments or publish date), filter by keywords, or change the output destination (e.g., Airtable or Notion).
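The mapping from API response to sheet rows could be sketched like this. The `snippet`/`statistics` shape matches the YouTube Data API v3 `videos.list` response with `part=snippet,statistics`; the sample video ID and helper name are illustrative:

```javascript
// Sketch: map a videos.list response to the sheet columns described above.
function toSheetRows(apiResponse) {
  return apiResponse.items.map((v) => ({
    Title: v.snippet.title,
    Views: Number(v.statistics.viewCount), // the API returns counts as strings
    Likes: Number(v.statistics.likeCount),
    Description: v.snippet.description,
    URL: `https://www.youtube.com/watch?v=${v.id}`,
  }));
}

const rows = toSheetRows({
  items: [{
    id: 'dQw4w9WgXcQ',
    snippet: { title: 'Sample video', description: 'Demo description' },
    statistics: { viewCount: '1500', likeCount: '120' },
  }],
});
console.log(rows[0].URL); // → 'https://www.youtube.com/watch?v=dQw4w9WgXcQ'
```

Casting `viewCount`/`likeCount` to numbers matters if you later want to sort or chart the sheet by views.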
by WeblineIndia
## Real-Time Uptime Alerts to Jira with Smart Slack On-Call Routing

This workflow automatically converts uptime monitoring alerts received via webhook into Jira incident tasks and intelligently notifies an available on-call team member on Slack based on their real-time presence status. It ensures critical service outages never go unnoticed by selecting an active responder and sending a detailed direct message immediately.

## ⚡ Quick Implementation Steps

1. Import the workflow JSON into n8n.
2. Configure your Webhook, Slack, and Jira credentials.
3. Update the IF node to filter for status = down (already configured).
4. Set the Jira project and issue type as required.
5. Connect your Slack on-call channel.
6. Activate the workflow and send a test alert using Postman or your monitoring tool.

## What It Does

This automation listens for incoming alerts from any uptime monitoring service. When a system or service goes down, the workflow instantly validates whether the alert is critical (status = down). Once validated, it automatically creates a detailed Jira Task containing all relevant service details such as timestamp, downtime duration, error code, customer impact, and priority.

After the Jira incident is created, the workflow retrieves a list of all members from a dedicated Slack on-call rotation channel. It checks each member's Slack presence (active, away, offline) and uses smart selection logic to choose the best person to notify. The selected team member then receives a richly formatted direct Slack message containing all incident details and a link to the Jira ticket. This ensures the alert is not only logged properly but also reaches the right responder at the right time.

## Who's It For

This workflow is perfect for:

- DevOps teams managing uptime & system reliability.
- Support teams responsible for incident response.
- SRE teams using Jira and Slack.
- Organizations with an on-call rotation setup.
- Teams wanting automated escalation for downtime alerts.
## Requirements to Use This Workflow

- **n8n installed** (self-hosted or cloud)
- **Slack API credentials** with permission to read user presence and send direct messages
- **Jira Software Cloud** credentials allowing issue creation
- **A monitoring system** capable of sending webhook alerts (e.g., UptimeRobot, Uptime Kuma, StatusCake, a custom system, etc.)
- Access to a Slack channel that includes your on-call rotation members

## How It Works & How to Set Up

### Step 1: Receive Alert from Uptime Monitoring Tool

The workflow starts with the Webhook node (Receive Uptime Alert). Your monitoring tool must send a POST request with a JSON payload including fields such as serviceName, status, timestamp, customerImpact, errorCode, and priority.

### Step 2: Filter for Critical Status

The IF node (Filter for Critical Status) checks the status field. Only when the service is down does the workflow continue to create a Jira incident.

### Step 3: Create Jira Incident Task

The Create New Jira Incident node generates a Jira Task with:

- Summary: serviceName + timestamp
- Description: dynamic fields based on the alert payload

Set your Jira Project and Issue Type as needed.

### Step 4: Fetch Slack On-Call Channel Members

The workflow calls the Slack API to retrieve all user IDs in a designated channel (e.g., #on-call-team).

### Step 5: Loop Through Each Member

A **Split In Batches node** loops over each Slack member individually, and each user's Slack presence is fetched.

### Step 6: Build Final Data for Each User

The Set node (Collect & Set Final Data) stores: presence, member ID, service details, Jira ticket ID, downtime info, and more.

### Step 7: Select the Best On-Call User

A custom Code node uses presence-based selection logic:

- If one or more users are active → randomly pick one active user.
- If only one user is active → pick that user.
- If no users are active → default to the first member from the channel.

This ensures you always get a responder.
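The selection logic in Step 7 could be sketched like this in the Code node. It assumes each member item carries an `id` and a `presence` value as returned by Slack's `users.getPresence` (`active` or `away`):

```javascript
// Sketch of the presence-based on-call selection from Step 7.
function pickOnCallResponder(members) {
  const active = members.filter((m) => m.presence === 'active');
  if (active.length > 0) {
    // One or more active users (including exactly one): pick among them at random.
    return active[Math.floor(Math.random() * active.length)];
  }
  // Nobody active: fall back to the first channel member so that an
  // alert is always delivered to someone.
  return members[0];
}

const chosen = pickOnCallResponder([
  { id: 'U111', presence: 'away' },
  { id: 'U222', presence: 'active' },
  { id: 'U333', presence: 'away' },
]);
console.log(chosen.id); // → 'U222' (the only active member)
```

Random selection among active members spreads load across the rotation; swap in a round-robin counter if you prefer deterministic ordering.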
### Step 8: Notify Selected User

The Slack Notify node sends a formatted direct message with:

- Service status
- Downtime duration
- Error code
- Customer impact
- Jira ticket link
- Priority

The selected on-call responder receives everything they need to act immediately.

## How to Customize Nodes

**Webhook Node**

- Change the path to something meaningful (e.g., /uptime-alerts).
- Customize the expected fields based on your monitoring tool's payload.

**IF Node**

- Modify the status condition to match "critical", "error", or multiple conditions.

**Jira Node** — you can customize:

- Issue type (Incident, Bug, Task)
- Priority field mapping
- Project ID
- Custom fields or labels

**Slack Retrieval Node**

- Change the channel to your team's actual on-call rotation channel.

**Slack Message Node**

- Modify message formatting, tone, or emojis, or add links.
- Add @mentions or tags.
- Include escalation instructions.

## Add-Ons (Optional Extensions)

Enhance the workflow by adding:

1. **Escalation Logic**: if the selected user doesn't respond within X minutes, notify the next user.
2. **PagerDuty / OpsGenie Integration**: trigger paging systems for SEV-1 incidents.
3. **Status Page Updates**: automatically update public status pages.
4. **Auto-Resolution**: when the service status returns to up, automatically update the Jira ticket, notify the team, and close the incident.
5. **Logging & Analytics**: store incidents in Google Sheets, Notion, or a database.

## Use Case Examples

This workflow can support multiple real-world scenarios:

- **Website Uptime Monitoring**: if your main website goes down, instantly create a Jira incident and notify your on-call engineer.
- **API Downtime Alerting**: when an API endpoint fails health checks, alert active developers only.
- **Microservices Monitoring**: each microservice alert triggers a consistent, automated incident creation and notification.
- **Infrastructure Failure Detection**: when servers, containers, or VMs become unreachable, escalate to your infrastructure team.
- **Database Performance Degradation**: if DB uptime drops or the error rate spikes, create a Jira ticket and ping the database admin.

And many more variations of outage, error, and performance monitoring events.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| Workflow not triggering | Webhook URL not updated in monitoring tool | Copy the n8n webhook URL and update it in the monitoring source |
| No Jira ticket created | Invalid Jira credentials or missing project permissions | Reauthorize Jira credentials and verify permissions |
| Slack users not found | Wrong channel ID or bot not added to channel | Ensure the bot is invited to the Slack channel |
| Slack presence not returning | Slack app lacks presence permission (users:read.presence) | Update Slack API scopes and reinstall |
| No user receives notification | Presence logic always returns an empty list | Test the Slack presence API and verify real-time presence |
| Wrong user selected | Intended selection logic differs | Update the JS logic in the Code node |
| Jira fields not populated | Alert payload fields missing | Verify the webhook payload structure and match the expected fields |

## Need Help?

If you need assistance setting up this workflow, customizing integrations, building escalations, or extending the logic with add-ons, WeblineIndia is here to help. We can assist with:

- Custom Slack/Jira/Monitoring automation
- On-call rotation logic enhancements
- Cloud deployment & workflow optimization
- Any custom n8n automation
- Production-grade monitoring workflows

👉 Contact WeblineIndia for professional support, implementation, and custom workflow development.
by Christine
## Who's it for

This workflow is designed for creators, researchers, and operators who need to transcribe large volumes of video content stored in Google Drive. It is especially useful for users working with TikTok archives, interview recordings, or social media datasets where manual transcription would be time-consuming and expensive.

## How it works / What it does

This workflow automatically processes videos from a designated Google Drive folder, transcribes each file using Google Gemini, and saves the results as individual text files. Each video is handled independently:

1. Videos are pulled from an "Incoming" folder.
2. The file is downloaded and sent to Gemini for transcription.
3. A .txt transcript file is created in a separate folder.
4. The original video is moved to a "Processed" folder after success.

This structure ensures progress is saved continuously, allowing the workflow to resume where it left off if interrupted.

## How to set up

1. Create three Google Drive folders: Incoming (source videos), Processed (completed videos), and Transcripts (output files).
2. Add your folder IDs to the Config node.
3. Connect your Google Drive and Gemini credentials.
4. Run the workflow using the Manual Trigger (or the Schedule Trigger for automation).

## Requirements

- n8n instance (cloud or self-hosted)
- Google Drive API credentials
- Google Gemini API access
- Video files stored in Google Drive

## How to customize the workflow

- Adjust the transcription prompt in the Config node for different output styles.
- Modify the Wait node to control processing speed and avoid rate limits.
- Change file naming conventions in the formatting node.
- Add logging or notifications for failed transcriptions.
- Extend the workflow to combine transcripts into a single master document.
by Miha
This n8n template turns raw call transcripts into clean HubSpot call logs and a single, actionable follow-up task — automatically. Paste a transcript and the contact's email; the workflow finds the contact, summarizes the conversation in 120–160 words, proposes the next best action, and (optionally) updates missing contact fields. Perfect for reps and founders who want accurate CRM hygiene without the manual busywork.

## How it works

1. A form trigger collects two inputs: the contact email and a plain-text call transcript.
2. The workflow looks up the HubSpot contact by email to pull known properties.
3. An AI agent reads the transcript (plus known fields) to:
   - Extract participants, role, problem/opportunity, requirements, blockers, timeline, and metrics.
   - Write a 120–160 word recap a teammate can skim.
   - Generate one concrete follow-up task (title + body).
   - Suggest updates for missing contact properties (city, country, job title, job function).
4. The recap is logged to HubSpot as a completed Call engagement.
5. The follow-up is created in HubSpot as a Task with subject and body.
6. (Optional) The contact record is updated using AI-suggested values if the transcript clearly mentions them.

## How to use

1. Connect HubSpot (OAuth2) on all HubSpot nodes.
2. Connect OpenAI on the AI nodes.
3. Open Form: Capture Transcript, then submit the email + transcript.
4. (Optional) In AI: Summarize Call & Draft Task, tweak the prompt rules (word count, date normalization).
5. (Optional) In Update Contact from Transcript, review the mapped fields before enabling in production.
6. Activate the workflow and paste transcripts after each call.

## Requirements

- **HubSpot** (OAuth2) for contact search, call logging, and tasks
- **OpenAI** for summarization and task drafting

## Notes & customization ideas

- Swap the form for a Google Drive or S3 watcher to ingest saved transcripts.
- Add a speech-to-text step if you store audio recordings.
- Extend Update Contact to include additional fields (timezone, department, seniority).
- Post the summary to Slack or email the AE for quick handoffs.
- Gate updates with a confidence check, or route low-confidence changes for manual approval.
by Adil Khan
This workflow integrates Google Analytics 4 (GA4) with Slack, enabling users to query their website data using natural language inside a dedicated Slack channel. An AI Agent interprets user queries, fetches relevant reports from GA4, and responds in Slack as a reply.

## How it works

1. When a user sends a message in a specified Slack channel, the workflow is triggered.
2. The message is filtered to remove @bot mentions and then passed to an AI Agent.
3. The AI Agent, powered by a Google Gemini Chat Model and using conversational memory (so it can hold a back-and-forth with the user on follow-up questions, with a limit of 10 messages), determines whether the user's query requires data from Google Analytics 4.
4. If so, it leverages a pre-configured GA4 tool to fetch the necessary report (e.g., page views, users, conversions for a specific date range).
5. Finally, the AI Agent's response, containing the requested data, is sent back to the original Slack channel as a reply.

## Setup Steps

1. **Slack Trigger**: Configure the Slack API credential and specify the channel n8n should monitor for new messages.
2. **Credentials**: Create and configure the following credentials in n8n:
   - Slack API: for sending and receiving messages.
   - Google Analytics 4: for accessing GA4 reports. Requires a Google Cloud project with the Analytics Data API enabled and a Service Account Key (JSON).
   - Google Gemini Chat Model: for the AI Agent's intelligence. Requires an API key from Google AI Studio.
3. **AI Agent System Prompt**: Craft a robust system prompt for the AI agent. This prompt should define the agent's role, its constraints (e.g., "do not estimate or fabricate data; if GA4 is unavailable, say so"), and guidance on mapping natural-language metrics/dimensions to GA4 equivalents (e.g., "when the user mentions 'leads', they mean 'conversions' in GA4").
4. **Slack Reply**: Ensure the final Slack "Send a message" node is configured to reply to the original channel, providing the data in a clear, concise format.
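The mention-stripping step could be as small as this in an n8n Code node. Slack encodes mentions as `<@USERID>` in message text, so a regex replace is enough (the sample user ID is made up):

```javascript
// Sketch: remove the @bot mention from an incoming Slack message
// before passing the clean query to the AI Agent.
function stripBotMention(text) {
  return text.replace(/<@[A-Z0-9]+>/g, '').trim();
}

console.log(stripBotMention('<@U0AB12CD3> how many users did we get last week?'));
// → 'how many users did we get last week?'
```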
by Rahul Joshi
## 📘 Description

This workflow acts as a real-time emergency alert system designed for personal safety scenarios. It receives distress signals via webhook, enriches the data with a live Google Maps link and timestamp, generates a clear AI-formatted alert message, instantly notifies a Telegram group, and logs the incident in Google Sheets for tracking and audit.

## ⚙️ Step-by-Step Flow

1. **Emergency Webhook (Trigger)**
   - Receives a POST request containing: name, phone number, latitude & longitude.
   - Acts as the entry point for emergency alerts.
2. **Generate Maps Link (Function Node)**
   - Converts latitude & longitude into a Google Maps URL.
   - Ensures responders can access the live location instantly.
3. **Create Emergency Message (Function Node)**
   - Adds a timestamp (IST).
   - Structures the raw alert data.
   - Prepares the base emergency message context.
4. **AI Agent (OpenAI GPT-4o-mini)**
   - Formats the alert into a clean, urgent, human-readable message.
   - Ensures clarity, visibility, and consistency.
   - Adds emojis and a structured layout for quick comprehension.
5. **Telegram Group Alert (Telegram Node)**
   - Sends a real-time emergency notification to a predefined group.
   - Ensures immediate visibility for responders.
6. **Log to Google Sheets (Google Sheets Node)**
   - Stores alert data for records; fields logged: name, phone, maps link, message, timestamp.
   - Creates an audit trail for safety tracking.

## 🧩 Prerequisites

- Telegram Bot credentials + chat ID
- OpenAI API (GPT-4o-mini)
- Google Sheets OAuth connection
- Webhook endpoint exposed publicly

## 💡 Key Benefits

✔ Instant emergency alert delivery
✔ Live location sharing via Google Maps
✔ AI-enhanced message clarity (no ambiguity)
✔ Real-time group notification for faster response
✔ Persistent logging for audit and follow-up

## 👥 Perfect For

- Women's safety applications
- SOS mobile apps or panic-button systems
- Security teams and emergency response workflows
- Community safety networks and alert systems
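The "Generate Maps Link" step could be sketched like this in a Function node. The `?q=lat,lng` URL format is a standard Google Maps query; the sample name, phone, and coordinates are illustrative:

```javascript
// Sketch: enrich an incoming alert with a Google Maps link and an
// IST-formatted timestamp. Coordinates are passed as strings so
// trailing zeros are preserved in the URL.
function enrichAlert(alert) {
  const mapsLink = `https://www.google.com/maps?q=${alert.latitude},${alert.longitude}`;
  const timestampIST = new Date().toLocaleString('en-IN', { timeZone: 'Asia/Kolkata' });
  return { ...alert, mapsLink, timestampIST };
}

const enriched = enrichAlert({
  name: 'Asha',
  phone: '+91-9000000000',
  latitude: '28.6139',
  longitude: '77.2090',
});
console.log(enriched.mapsLink); // → 'https://www.google.com/maps?q=28.6139,77.2090'
```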
by Rahul Joshi
## Description

Keep your internal knowledge base fresh and reliable with this automated FAQ freshness monitoring system. 🧠📅 This workflow tracks FAQ update dates in Notion, calculates SLA compliance, logs results in Google Sheets, and sends Slack alerts for outdated items. Perfect for documentation teams ensuring content accuracy and operational visibility across platforms. 🚀💬

## What This Template Does

1️⃣ Triggers every Monday at 10:00 AM to start freshness checks. ⏰
2️⃣ Fetches FAQ entries from your Notion database. 📚
3️⃣ Computes SLA status based on the last edited date (30-day threshold). 📆
4️⃣ Updates a Google Sheet with current FAQ details and freshness status. 📊
5️⃣ Filters out overdue FAQs that need review. 🔍
6️⃣ Aggregates all overdue items into one report. 🧾
7️⃣ Sends a consolidated Slack alert with direct Notion links and priority tags. 💬

## Key Benefits

✅ Maintains documentation freshness across systems.
✅ Reduces support friction from outdated FAQs.
✅ Centralizes visibility with Google Sheets reporting.
✅ Notifies your team in real time via Slack.
✅ Enables SLA-based documentation governance.

## Features

- Weekly automated schedule (every Monday at 10 AM).
- Notion database integration for FAQ retrieval.
- SLA computation and overdue filtering logic.
- Google Sheets sync for audit logging.
- Slack notification for overdue FAQ alerts.
- Fully configurable thresholds and alerting logic.

## Requirements

- Notion API credentials with database read access.
- Google Sheets OAuth2 credentials with edit access.
- Slack Bot Token with chat:write permission.
- Environment variables: NOTION_FAQ_DATABASE_ID, GOOGLE_SHEET_FAQ_ID, SLACK_FAQ_ALERT_CHANNEL_ID

## Target Audience

- Knowledge management and documentation teams 🧾
- SaaS product teams maintaining FAQ accuracy 💡
- Support operations and customer success teams 💬
- QA and compliance teams monitoring SLA adherence 📅

## Step-by-Step Setup Instructions

1️⃣ Connect Notion credentials and set your FAQ database ID.
2️⃣ Create a Google Sheet with the required headers (Title, lastEdited, slaStatus, etc.).
3️⃣ Add your Slack credentials and specify the alert channel ID.
4️⃣ Configure the cron schedule (0 10 * * 1) for Monday 10:00 AM checks.
5️⃣ Run once manually to verify credentials and mappings.
6️⃣ Activate for ongoing weekly freshness monitoring. ✅
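The SLA computation could be sketched like this. The `last_edited_time` input follows Notion's API field naming; the `overdue`/`fresh` labels are illustrative and map to the `slaStatus` sheet column:

```javascript
// Sketch: classify an FAQ as overdue or fresh against a 30-day SLA.
const SLA_DAYS = 30;

function slaStatus(lastEditedIso, now = new Date()) {
  const ageDays = (now - new Date(lastEditedIso)) / (1000 * 60 * 60 * 24);
  return ageDays > SLA_DAYS ? 'overdue' : 'fresh';
}

const checkDate = new Date('2024-06-01T00:00:00Z');
console.log(slaStatus('2024-04-01T00:00:00Z', checkDate)); // → 'overdue' (61 days old)
console.log(slaStatus('2024-05-20T00:00:00Z', checkDate)); // → 'fresh' (12 days old)
```

Passing `now` as a parameter keeps the function testable and lets you pin the check to the scheduled run time rather than each item's processing time.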
by Trung Tran
AWS IAM Inactive User Automation Alert Workflow

> Weekly job that finds IAM users with no activity for more than 90 days and notifies a Slack channel.
>
> ⚠️ Important: AWS SigV4 for IAM must be scoped to us-east-1. Create the AWS credential in n8n with region us-east-1 (even if your other services run elsewhere).

## Who's it for

- SRE/DevOps teams that want automated IAM hygiene checks.
- Security/compliance owners who need regular inactivity reports.
- MSPs managing multiple AWS accounts who need lightweight alerting.

## How it works / What it does

1. **Weekly scheduler** – kicks off the workflow (e.g., every Monday 09:00).
2. **Get many users** – lists IAM users.
3. **Get user** – enriches each user with details (password status, MFA, etc.).
4. **Filter bad data** – drops service-linked users or items without usable dates.
5. **IAM user inactive for more than 90 days?** – keeps users whose last activity is older than 90 days. Last activity is derived from any of:
   - PasswordLastUsed (console sign-in)
   - AccessKeyLastUsed.LastUsedDate (from GetAccessKeyLastUsed, if you add it)
   - Fallback to CreateDate if no usage data exists (optional)
6. **Send a message (Slack)** – posts an alert for each inactive user.
7. **No operation** – path for users that don't match (do nothing).

## How to set up

Credentials:

- AWS (Predefined → AWS)
  - Service: iam
  - Region: us-east-1 ← required for IAM
  - Access/Secret (or Assume Role) with read-only IAM permissions (see below).
- Slack OAuth (bot in your target channel).

## Requirements

- n8n (current version).
- **AWS IAM permissions** (minimum): iam:ListUsers, iam:GetUser
  - (Optional, for higher fidelity) iam:ListAccessKeys, iam:GetAccessKeyLastUsed
- Slack bot with permission to post in the target channel.
- Network egress to iam.amazonaws.com.

## How to customize the workflow

- **Change window**: set 60/120/180 days by adjusting minus(N, 'days').
- **Audit log**: append results to Google Sheets/DB with UserName, Arn, LastActivity, CheckedAt.
- **Escalation**: if a user remains inactive for another cycle, mention @security or open a ticket.
- **Auto-remediation (advanced)**: on a separate approval path, disable access keys or detach policies.
- **Multi-account / multi-region**: iterate over a list of AWS credentials (one per account; IAM stays us-east-1).
- **Exclude list**: add a static list or tag-based filter to skip known service users.

## Notes & gotchas

- Many users never sign in; if you don't pull GetAccessKeyLastUsed, they may look "inactive". Add that call for accuracy.
- PasswordLastUsed is null if a console login never happened.
- IAM returns timestamps in ISO or epoch format; use toDate/toDateTime before comparisons.
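The last-activity derivation could be sketched like this. `PasswordLastUsed` and `CreateDate` match IAM's GetUser response; the `AccessKeysLastUsed` array is an assumed field you would populate yourself from GetAccessKeyLastUsed calls:

```javascript
// Sketch: derive "last activity" from console sign-in, access-key usage,
// or creation date, then compare against the 90-day window.
const INACTIVE_DAYS = 90;

function lastActivity(user) {
  const candidates = [
    user.PasswordLastUsed,              // null if console login never happened
    ...(user.AccessKeysLastUsed || []), // assumed: dates collected via GetAccessKeyLastUsed
    user.CreateDate,                    // optional fallback when no usage data exists
  ].filter(Boolean).map((d) => new Date(d));
  return new Date(Math.max(...candidates));
}

function isInactive(user, now = new Date()) {
  const cutoff = new Date(now.getTime() - INACTIVE_DAYS * 24 * 60 * 60 * 1000);
  return lastActivity(user) < cutoff;
}

const checkDate = new Date('2024-06-01T00:00:00Z');
console.log(isInactive({
  UserName: 'ci-bot',
  PasswordLastUsed: null,
  AccessKeysLastUsed: ['2024-01-05T00:00:00Z'],
  CreateDate: '2022-03-01T00:00:00Z',
}, checkDate)); // → true (last activity well over 90 days before the check date)
```

Taking the maximum across all candidate dates is what prevents the false positives described in the gotchas: a user whose password was never used but whose access key is active stays off the report.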