by Trung Tran
AWS IAM Inactive User Automation Alert Workflow

> Weekly job that finds IAM users with no activity for more than 90 days and notifies a Slack channel.
> ⚠️ Important: AWS SigV4 for IAM must be scoped to us-east-1. Create the AWS credential in n8n with region us-east-1 (even if your other services run elsewhere).

Who's it for
- SRE/DevOps teams that want automated IAM hygiene checks.
- Security/compliance owners who need regular inactivity reports.
- MSPs managing multiple AWS accounts who need lightweight alerting.

How it works / What it does
1. Weekly scheduler – kicks off the workflow (e.g., every Monday 09:00).
2. Get many users – lists IAM users.
3. Get user – enriches each user with details (password status, MFA, etc.).
4. Filter bad data – drops service-linked users or items without usable dates.
5. IAM user inactive for more than 90 days? – keeps users whose last activity is older than 90 days. Last activity is derived from any of:
   - PasswordLastUsed (console sign-in)
   - AccessKeyLastUsed.LastUsedDate (from GetAccessKeyLastUsed, if you add it)
   - Fallback to CreateDate if no usage data exists (optional)
6. Send a message (Slack) – posts an alert for each inactive user.
7. No operation – path for users that don't match (do nothing).

How to set up
Credentials:
- AWS (Predefined → AWS)
  - Service: iam
  - Region: us-east-1 ← required for IAM
  - Access/Secret (or Assume Role) with read-only IAM permissions (see below).
- Slack OAuth (bot in your target channel).

Requirements
- n8n (current version).
- AWS IAM permissions (minimum): iam:ListUsers, iam:GetUser. Optional for higher fidelity: iam:ListAccessKeys, iam:GetAccessKeyLastUsed.
- Slack bot with permission to post in the target channel.
- Network egress to iam.amazonaws.com.

How to customize the workflow
- Change window: set 60/120/180 days by adjusting minus(N, 'days').
- Audit log: append results to Google Sheets/DB with UserName, Arn, LastActivity, CheckedAt.
- Escalation: if a user remains inactive for another cycle, mention @security or open a ticket.
- Auto-remediation (advanced): on a separate approval path, disable access keys or detach policies.
- Multi-account / multi-region: iterate a list of AWS credentials (one per account; IAM stays us-east-1).
- Exclude list: add a static list or tag-based filter to skip known service users.

Notes & gotchas
- Many users never sign in; if you don't pull GetAccessKeyLastUsed, they may look "inactive". Add that call for accuracy.
- PasswordLastUsed is null if console login never happened.
- IAM returns timestamps in ISO or epoch; use toDate/toDateTime before comparisons. A sketch of the inactivity check appears below.
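For reference, here is a minimal sketch of the 90-day inactivity filter as an n8n Code node. Field names follow IAM's GetUser/GetAccessKeyLastUsed response shapes; the threshold and the CreateDate fallback are adjustable assumptions, not the template's exact implementation.

```javascript
// n8n Code node (Run Once for All Items) – sketch of the 90-day
// inactivity filter. Field names follow IAM's GetUser /
// GetAccessKeyLastUsed responses; the CreateDate fallback is optional.
const THRESHOLD_DAYS = 90; // change to 60/120/180 as needed

const cutoff = Date.now() - THRESHOLD_DAYS * 24 * 60 * 60 * 1000;

return items.filter((item) => {
  const u = item.json;
  const candidates = [
    u.PasswordLastUsed,                // console sign-in
    u.AccessKeyLastUsed?.LastUsedDate, // only if you add GetAccessKeyLastUsed
  ].filter(Boolean);

  // Optional fallback for accounts with no usage data at all.
  if (candidates.length === 0 && u.CreateDate) candidates.push(u.CreateDate);
  if (candidates.length === 0) return false; // no usable dates -> drop

  const lastActivity = Math.max(...candidates.map((d) => new Date(d).getTime()));
  return lastActivity < cutoff; // keep only users inactive beyond the window
});
```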
by Rahul Joshi
Description

Transform your Jira project management workflow with this intelligent n8n automation template that continuously tracks, scores, and reports the health of Jira Epics. The automation runs every 6 hours, fetches all active Epics, analyzes linked issues for performance, quality, and stability metrics, and automatically flags at-risk Epics. It updates Jira fields, sends alerts to Slack, logs trends in Google Sheets, and syncs visibility with Monday.com, ensuring teams stay proactive, not reactive.

Ideal for agile teams, project managers, and product owners looking to monitor delivery health, detect risks early, and maintain transparent reporting across tools.

✅ What This Template Does (Step-by-Step)
- ⏱ Trigger Every 6 Hours: Automatically executes every six hours to keep health data updated in near real-time.
- 📥 Fetch All Epics from Jira: Retrieves all Epics, their keys, and fields via the Jira API to establish a full analysis scope.
- 🔀 Split Epics for Processing: Converts the batch of Epics into individual items, enabling sequential metric analysis.
- 🔗 Fetch Linked Issues: Collects all issues linked to each Epic, capturing their types, statuses, cycle times, and labels for deeper health analysis.
- 📈 Calculate Health Score: Computes a weighted score (0–1 scale) based on 40% average cycle time, 30% bug ratio, 20% churn (reopened issues), and 10% blocker ratio. Scores above 0.6 indicate at-risk Epics (a scoring sketch follows this section).
- ⚖️ Decision Gate: At-Risk or Healthy: If the health score exceeds 0.6, the workflow automatically initiates corrective actions.
- 🔧 Update Jira Epic: Updates Jira with the computed health score and adds an "At Risk" label for visibility in dashboards and filters.
- 🚨 Send Slack Alerts: Notifies the #project-alerts channel with Epic details, health score, and direct Jira links for immediate attention.
- 📋 Update Monday.com Pulse: Syncs health metrics and risk status back to your Monday board, maintaining cross-platform transparency.
- 📊 Log to Google Sheets: Appends health score logs with timestamps and Epic keys for trend analysis, audits, and dashboard creation.

🧠 Key Features
✔️ Automated Jira Epic health scoring (cycle time, churn, bugs, blockers)
✔️ Real-time risk flagging with Slack alerts
✔️ Integrated cross-tool visibility (Jira + Monday + Sheets)
✔️ Continuous trend tracking for performance improvement
✔️ Secure API-based automation

💼 Use Cases
💡 Track project delivery health and spot risks early
📈 Build executive dashboards showing team velocity and quality
🤝 Align product and engineering with shared visibility
🧾 Maintain a compliance audit trail of Epic health trends

📦 Required Integrations
• Jira Software Cloud API – for Epic and issue data
• Slack API – for real-time team alerts
• Monday.com API – for visual board updates
• Google Sheets API – for historical tracking and analytics

🎯 Why Use This Template?
✅ Prevents project delays by flagging risks early
✅ Provides automated, data-driven Epic health insights
✅ Connects your reporting ecosystem across platforms
✅ Perfect for Agile and DevOps teams driving continuous improvement
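As a reference for the Calculate Health Score step, here is a minimal sketch of the weighted scoring. It assumes each sub-metric has already been normalized to a 0–1 scale (1 = worst); the 30-day cycle-time cap shown is an illustrative normalization choice, not part of the template description.

```javascript
// n8n Code node (Run Once for Each Item) – sketch of the weighted
// Epic health score. Assumes a prior node supplied the raw metrics;
// the 30-day cycle-time cap below is an illustrative assumption.
const m = $json; // { avgCycleTimeDays, bugRatio, churnRatio, blockerRatio }

const cycleTimeScore = Math.min(m.avgCycleTimeDays / 30, 1);

const healthScore =
  0.4 * cycleTimeScore +  // 40% average cycle time
  0.3 * m.bugRatio +      // 30% bug ratio
  0.2 * m.churnRatio +    // 20% churn (reopened issues)
  0.1 * m.blockerRatio;   // 10% blocker ratio

return {
  json: {
    ...m,
    healthScore: Number(healthScore.toFixed(2)),
    atRisk: healthScore > 0.6, // threshold used by the Decision Gate
  },
};
```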
by Yaron Been
CDO Agent with Data Analytics Team

Description
Complete AI-powered data analytics department with a Chief Data Officer (CDO) agent orchestrating specialized data team members for comprehensive data science, business intelligence, and analytics operations.

Overview
This n8n workflow creates a comprehensive data analytics department using AI agents. The CDO agent analyzes data requests and delegates tasks to specialized agents for data science, business intelligence, data engineering, machine learning, data visualization, and data governance.

Features
- Strategic CDO agent using OpenAI O3 for complex data strategy and decision-making
- Six specialized data analytics agents powered by GPT-4.1-mini for efficient execution
- Complete data analytics lifecycle coverage from collection to insights
- Automated data pipeline management and ETL processes
- Advanced machine learning model development and deployment
- Interactive data visualization and business intelligence reporting
- Comprehensive data governance and compliance frameworks

Team Structure
- CDO Agent: Data strategy leadership and team delegation (O3 model)
- Data Scientist Agent: Statistical analysis, predictive modeling, machine learning algorithms
- Business Intelligence Analyst Agent: Business metrics, KPI tracking, performance dashboards
- Data Engineer Agent: Data pipelines, ETL processes, data warehousing, infrastructure
- Machine Learning Engineer Agent: ML model deployment, MLOps, model monitoring
- Data Visualization Specialist Agent: Interactive dashboards, data storytelling, visual analytics
- Data Governance Specialist Agent: Data quality, compliance, privacy, governance policies

How to Use
1. Import the workflow into your n8n instance
2. Configure OpenAI API credentials for all chat models
3. Deploy the webhook for chat interactions
4. Send data analytics requests via chat (e.g., "Analyze customer churn patterns and create predictive models")
5. The CDO will analyze and delegate to appropriate specialists
6. Receive comprehensive data insights and deliverables

Use Cases
- Predictive Analytics: Customer behavior analysis, sales forecasting, risk assessment
- Business Intelligence: KPI tracking, performance analysis, strategic business insights
- Data Engineering: Pipeline automation, data warehousing, real-time data processing
- Machine Learning: Model development, deployment, monitoring, and optimization
- Data Visualization: Interactive dashboards, executive reporting, data storytelling
- Data Governance: Quality assurance, compliance frameworks, data privacy protection

Requirements
- n8n instance with LangChain nodes
- OpenAI API access (O3 for CDO, GPT-4.1-mini for specialists)
- Webhook capability for chat interactions
- Optional: Integration with data platforms and analytics tools

Cost Optimization
- O3 model used only for strategic CDO decisions and complex data strategy
- GPT-4.1-mini provides a 90% cost reduction for specialist data tasks
- Parallel processing enables simultaneous agent execution
- Template libraries reduce redundant analytics development work

Integration Options
- Connect to data platforms (Snowflake, BigQuery, Redshift, Databricks)
- Integrate with BI tools (Tableau, Power BI, Looker, Grafana)
- Link to ML platforms (AWS SageMaker, Azure ML, Google AI Platform)
- Export to business applications and reporting systems

Disclaimer: This workflow is provided as a building block for your automation needs. Please review and customize the agents, prompts, and connections according to your specific data analytics requirements and organizational structure.
Contact & Resources
- Website: nofluff.online
- YouTube: @YaronBeen
- LinkedIn: Yaron Been

Tags
#DataAnalytics #DataScience #BusinessIntelligence #MachineLearning #DataEngineering #DataVisualization #DataGovernance #PredictiveAnalytics #BigData #DataDriven #DataStrategy #AnalyticsAutomation #DataPipelines #MLOps #DataQuality #BusinessMetrics #KPITracking #DataInsights #AdvancedAnalytics #n8n #OpenAI #MultiAgentSystem #DataTeam #AnalyticsWorkflow #DataOperations
by Rahul Joshi
Description
Keep your internal knowledge base fresh and reliable with this automated FAQ freshness monitoring system. 🧠📅 This workflow tracks FAQ update dates in Notion, calculates SLA compliance, logs results in Google Sheets, and sends Slack alerts for outdated items. Perfect for documentation teams ensuring content accuracy and operational visibility across platforms. 🚀💬

What This Template Does
1️⃣ Triggers every Monday at 10:00 AM to start freshness checks. ⏰
2️⃣ Fetches FAQ entries from your Notion database. 📚
3️⃣ Computes SLA status based on the last edited date (30-day threshold; see the sketch at the end of this section). 📆
4️⃣ Updates a Google Sheet with current FAQ details and freshness status. 📊
5️⃣ Filters out overdue FAQs that need review. 🔍
6️⃣ Aggregates all overdue items into one report. 🧾
7️⃣ Sends a consolidated Slack alert with direct Notion links and priority tags. 💬

Key Benefits
✅ Maintains documentation freshness across systems.
✅ Reduces support friction from outdated FAQs.
✅ Centralizes visibility with Google Sheets reporting.
✅ Notifies your team in real time via Slack.
✅ Enables SLA-based documentation governance.

Features
- Weekly automated schedule (every Monday at 10 AM).
- Notion database integration for FAQ retrieval.
- SLA computation and overdue filtering logic.
- Google Sheets sync for audit logging.
- Slack notification for overdue FAQ alerts.
- Fully configurable thresholds and alerting logic.

Requirements
- Notion API credentials with database read access.
- Google Sheets OAuth2 credentials with edit access.
- Slack Bot Token with chat:write permission.
- Environment variables: NOTION_FAQ_DATABASE_ID, GOOGLE_SHEET_FAQ_ID, SLACK_FAQ_ALERT_CHANNEL_ID

Target Audience
- Knowledge management and documentation teams 🧾
- SaaS product teams maintaining FAQ accuracy 💡
- Support operations and customer success teams 💬
- QA and compliance teams monitoring SLA adherence 📅

Step-by-Step Setup Instructions
1️⃣ Connect Notion credentials and set your FAQ database ID.
2️⃣ Create a Google Sheet with the required headers (Title, lastEdited, slaStatus, etc.).
3️⃣ Add your Slack credentials and specify the alert channel ID.
4️⃣ Configure the cron schedule (0 10 * * 1) for Monday 10:00 AM checks.
5️⃣ Run once manually to verify credentials and mappings.
6️⃣ Activate for ongoing weekly freshness monitoring. ✅
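A minimal sketch of the SLA computation step, assuming the Notion node outputs the standard last_edited_time field; the 30-day threshold matches the description above, while the status labels are illustrative.

```javascript
// n8n Code node (Run Once for All Items) – sketch of the SLA check.
// Assumes each item carries Notion's last_edited_time; status labels
// are illustrative, not the template's exact strings.
const SLA_DAYS = 30;

return items.map((item) => {
  const lastEdited = new Date(item.json.last_edited_time);
  const ageDays = (Date.now() - lastEdited.getTime()) / (1000 * 60 * 60 * 24);

  return {
    json: {
      ...item.json,
      ageDays: Math.floor(ageDays),
      slaStatus: ageDays > SLA_DAYS ? 'Overdue' : 'Fresh', // feeds the overdue filter
    },
  };
});
```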
by Rahul Joshi
Description
This workflow automates the process of retrieving Stripe invoices, validating API responses, generating payment receipts, sending them via email, storing PDFs in Google Drive, and appending details to a Google Sheet ledger. It also includes an error logging system to capture and record workflow issues, ensuring financial operations are both automated and reliable.

What This Template Does (Step-by-Step)
📋 Manual Trigger – Start the workflow manually by clicking Execute workflow.
🔗 Fetch Invoices – Authenticates with Stripe and retrieves the 5 most recent invoices (includes customer info, amounts, statuses, and invoice URLs).
✅ Check API Response – Ensures the Stripe API response contains a valid data[] array. If not, errors are logged.
📂 Expand List – Splits Stripe's bundled invoice list into individual invoice records for independent processing.
💳 IF (Paid?) – Routes invoices based on payment status; only paid invoices move forward.
📧 IF (Already Receipted?) – Skips invoices where a receipt has already been generated (receipt_sent = true). A combined sketch of these checks appears at the end of this section.
📑 Download File – Downloads the hosted invoice PDF from Stripe for use in emails and archiving.
✉️ Send Receipt Email – Emails the customer a payment receipt with the PDF attached, using invoice details (number, amount, customer name).
☁️ Upload Invoice PDF – Uploads the invoice PDF to a specific Google Drive folder, named by invoice number.
📊 Append to Ledger – Updates a Google Sheet with invoice metadata (date, invoice number, Drive file ID, link, size).
⚠️ Error Logging – Logs workflow issues (failed API calls, missing data, etc.) into a dedicated error tracking sheet.

Prerequisites
- Stripe API key (with invoice read permissions)
- Google Drive (destination folder for invoices)
- Google Sheets with a Receipts Ledger sheet and an Error Logging sheet
- Gmail OAuth2 account for sending receipts

Key Benefits
✅ Automates customer receipt delivery with attached PDFs
✅ Builds a permanent ledger in Google Sheets for finance
✅ Archives invoices in Google Drive for easy retrieval
✅ Prevents duplicates by checking receipt_sent metadata
✅ Includes error logging for smooth monitoring and debugging

Perfect For
- Finance/accounting teams needing automated receipt handling
- SaaS businesses managing recurring Stripe invoices
- Operations teams requiring error-proof automation
- Any business needing audit-ready receipts + logs
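For reference, the Check API Response and the two IF routes boil down to checks like the following sketch. The data[] array and status values match Stripe's invoice list response; receipt_sent is custom workflow metadata, so treat that key as an assumption.

```javascript
// n8n Code node (Run Once for All Items) – sketch of the validation
// and routing logic. Stripe's list endpoint returns { data: [...] };
// receipt_sent is custom metadata this workflow writes after sending.
const resp = $input.first().json;

// Check API Response: fail fast if data[] is missing or malformed.
if (!Array.isArray(resp.data)) {
  throw new Error('Stripe response missing data[] array'); // caught by error logging
}

// Expand List + IF (Paid?) + IF (Already Receipted?) in one pass:
return resp.data
  .filter((inv) => inv.status === 'paid' && inv.metadata?.receipt_sent !== 'true')
  .map((inv) => ({ json: inv }));
```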
by Evoort Solutions
🔗 Automated Semrush Backlink Checker with n8n and Google Sheets

📘 Description
This n8n workflow automates backlink data extraction using the Semrush Backlink Checker API available on RapidAPI. By submitting a website via a simple form, the workflow fetches both backlink overview metrics and detailed backlink entries, saving the results directly into a connected Google Sheet. This is an ideal solution for SEO professionals who want fast, automated insights without logging into multiple tools.

🧩 Node-by-Node Explanation
- On form submission – Starts the workflow when a user submits a website URL through a web form.
- HTTP Request – Sends the URL to the Semrush Backlink Checker API using a POST request with headers and form data.
- Reformat 1 – Extracts high-level backlink overview data like total backlinks and referring domains.
- Reformat 2 – Extracts individual backlink records such as source URLs, anchors, and metrics. (An illustrative sketch of both Reformat steps appears at the end of this section.)
- Backlink overview – Appends overview metrics into the "backlink overview" tab of a Google Sheet.
- Backlinks – Appends detailed backlink data into the main "backlinks" tab of the same Google Sheet.

✅ Benefits of This Workflow
- No-code integration: Built entirely within n8n; no scripting required.
- Time-saving automation: Eliminates the need to manually log in or export reports from Semrush.
- Centralized results: All backlink data is organized in Google Sheets for easy access and sharing.
- Powered by RapidAPI: Uses the Semrush Backlink Checker API hosted on RapidAPI for fast, reliable access.
- Easily extendable: Can be enhanced with notifications, dashboards, or additional data enrichment.

🛠️ Use Cases
- 📊 SEO Audit Automation – Auto-generate backlink insights for multiple websites via form submissions.
- 🧾 Client Reporting – Streamline backlink reporting for SEO agencies or consultants.
- 📥 Lead Capture Tool – Offer a free backlink analysis tool on your site to capture leads while showcasing value.
- 🔁 Scheduled Backlink Monitoring – Modify the trigger to run on a schedule for recurring reports.
- 📈 Campaign Tracking – Monitor backlinks earned during content marketing or digital PR campaigns.

🔐 How to Get Your API Key for the Semrush Backlink Checker API
1. Go to 👉 Semrush Backlink Checker API - RapidAPI
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API Key will be visible in the "x-rapidapi-key" header.
6. 🔑 Copy and paste this key into the httpRequest node in your workflow.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
Save time, stay consistent, and keep your backlink reporting effortless!
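The two Reformat steps might look like the sketch below. The exact RapidAPI response shape is not documented here, so every field path (overview, backlinks_num, source_url, etc.) is an assumption; inspect a real response and adjust before relying on it.

```javascript
// n8n Code node (Run Once for Each Item) – illustrative sketch of the
// two "Reformat" steps. All response field paths are assumptions;
// verify them against an actual API response first.
const resp = $json;

// Reformat 1: high-level metrics for the "backlink overview" tab.
const overview = {
  domain: resp.domain,
  totalBacklinks: resp.overview?.backlinks_num,
  referringDomains: resp.overview?.domains_num,
  checkedAt: new Date().toISOString(),
};

// Reformat 2: one row per backlink for the "backlinks" tab.
const backlinks = (resp.backlinks ?? []).map((b) => ({
  sourceUrl: b.source_url,
  targetUrl: b.target_url,
  anchor: b.anchor,
  nofollow: b.nofollow,
}));

return { json: { overview, backlinks } };
```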
by Marth
How It Works: The 5-Node Monitoring Flow
This concise workflow efficiently captures, filters, and delivers crucial cybersecurity-related mentions.

1. Monitor: Cybersecurity Keywords (X/Twitter Trigger)
This is the entry point of your workflow. It actively searches X (formerly Twitter) for tweets containing the specific keywords you define.
- Function: Continuously polls X for tweets that match your specified queries (e.g., your company name, "Log4j", "CVE-2024-XXXX", "ransomware").
- Process: As soon as a matching tweet is found, it triggers the workflow to begin processing that information.

2. Format Notification (Code Node)
This node prepares the raw tweet data, transforming it into a clean, actionable message for your alerts. (A sketch of this node appears at the end of this section.)
- Function: Extracts key details from the raw tweet and structures them into a clear, concise message.
- Process: It pulls out the tweet's text, the user's handle (@screen_name), and the direct URL to the tweet. These pieces are then combined into a user-friendly notificationMessage. You can also include basic filtering logic here if needed.

3. Valid Mention? (If Node)
This node acts as a quick filter to help reduce noise and prevent irrelevant alerts from reaching your team.
- Function: Serves as a simple conditional check to validate the mention's relevance.
- Process: It evaluates the notificationMessage against specific criteria (e.g., ensuring it doesn't contain common spam words like "bot"). If the mention passes this basic validation, the workflow continues. Otherwise, it quietly ends for that particular tweet.

4. Send Notification (Slack Node)
This is the delivery mechanism for your alerts, ensuring your team receives instant, visible notifications.
- Function: Delivers the formatted alert message directly to your designated communication channel.
- Process: The notificationMessage is sent straight to your specified Slack channel (e.g., #cyber-alerts or #security-ops).

5. End Workflow (No-Op Node)
This node simply marks the successful completion of the workflow's execution path.
- Function: Indicates the end of the workflow's process for a given trigger.

How to Set Up
Implementing this simple cybersecurity monitor in your n8n instance is quick and straightforward.

1. Prepare Your Credentials
Before building the workflow, ensure all necessary accounts are set up and their respective credentials are ready for n8n.
- X (Twitter) API: You'll need an X (Twitter) developer account to create an application and obtain your Consumer Key/Secret and Access Token/Secret. Use these to set up your Twitter credential in n8n.
- Slack API: Set up your Slack credential in n8n. You'll also need the Channel ID of the Slack channel where you want your security alerts to be posted (e.g., #security-alerts or #it-ops).

2. Import the Workflow JSON
Get the workflow structure into your n8n instance.
- Import: In your n8n instance, go to the "Workflows" section. Click the "New" or "+" icon, then select "Import from JSON." Paste the provided workflow JSON into the import dialog and import the workflow.

3. Configure the Nodes
Customize the imported workflow to fit your specific monitoring needs.
- Monitor: Cybersecurity Keywords (X/Twitter): Click on this node. Select your newly created Twitter Credential. CRITICAL: Modify the "Query" parameter to include your specific brand names, relevant CVEs, or general cybersecurity terms. For example: "YourCompany" OR "CVE-2024-1234" OR "phishing alert". Use OR to combine multiple terms.
- Send Notification (Slack): Click on this node. Select your Slack Credential. Replace "YOUR_SLACK_CHANNEL_ID" with the actual Channel ID you noted earlier for your security alerts. (Optional: You can adjust the "Valid Mention?" node's condition if you find specific patterns of false positives in your search results that you want to filter out.)

4. Test and Activate
Verify that your workflow is working correctly before setting it live.
- Manual Test: Click the "Test Workflow" button (usually in the top right corner of the n8n editor). This will execute the workflow once.
- Verify Output: Check your specified Slack channel to confirm that any detected mentions are sent as notifications in the correct format. If no matching tweets are found, you won't see a notification, which is expected.
- Activate: Once you're satisfied with the test results, toggle the "Active" switch (usually in the top right corner of the editor) to ON. Your workflow will now automatically monitor X (Twitter) at the specified polling interval.
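Here is a minimal sketch of the Format Notification code node described above. Field names follow the v1.1-style tweet payload that n8n's Twitter node typically returns (full_text, user.screen_name, id_str); if your trigger outputs v2 fields, adjust accordingly.

```javascript
// n8n Code node (Run Once for Each Item) – minimal sketch of
// "Format Notification". Field names assume a v1.1-style tweet
// payload; adjust for the X API v2 shape if needed.
const tweet = $json;

const text = tweet.full_text ?? tweet.text ?? '';
const user = tweet.user?.screen_name ?? 'unknown';
const url = `https://twitter.com/${user}/status/${tweet.id_str}`;

const notificationMessage =
  `🚨 Security mention detected\n` +
  `From: @${user}\n` +
  `Tweet: ${text}\n` +
  `Link: ${url}`;

return { json: { notificationMessage, user, url } };
```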
by Avkash Kakdiya
How it works
This workflow automates LinkedIn community engagement by monitoring post comments, filtering new ones, generating AI-powered replies, and posting responses directly on LinkedIn. It also logs all interactions into Google Sheets for tracking and analytics.

Step-by-step
1. Trigger & Fetch
- A Schedule Trigger runs the workflow every 10 minutes.
- The workflow fetches the latest comments on a specific LinkedIn post using LinkedIn's API with token-based authentication.
2. Filter for New Comments
- Retrieves the timestamp of the last processed comment from Google Sheets.
- Filters out previously handled comments, ensuring only fresh interactions are processed. (A sketch of this step appears at the end of this section.)
3. AI-Powered Reply Generation
- Sends the new comment to OpenAI GPT-3.5 Turbo with a structured prompt.
- The AI generates a professional, concise, and engaging LinkedIn-appropriate reply (max 2–3 sentences).
4. Post Back to LinkedIn
- Automatically posts the AI-generated reply under the original comment thread.
- Maintains consistent formatting and actor identity.
5. Data Logging
- Appends the original comment, AI response, and metadata into Google Sheets.
- Enables tracking, review, and future engagement analysis.

Benefits
- Saves time by automating LinkedIn comment replies.
- Ensures responses are timely, professional, and on-brand.
- Maintains authentic engagement without manual effort.
- Prevents duplicate replies by filtering with timestamps.
- Creates a structured log in Google Sheets for auditing and analytics.
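A minimal sketch of the timestamp filter, assuming a prior Google Sheets node (named "Get Last Timestamp" here purely for illustration) supplies lastProcessedAt, and that LinkedIn comment items carry a created.time epoch in milliseconds; both names are assumptions to adapt to your actual nodes and API response.

```javascript
// n8n Code node (Run Once for All Items) – sketch of the new-comment
// filter. "Get Last Timestamp" is a hypothetical node name; created.time
// as an epoch-ms field is an assumption about the LinkedIn response.
const lastProcessedAt = Number(
  $('Get Last Timestamp').first().json.lastProcessedAt ?? 0
);

// Keep only comments newer than the last one already replied to.
return items.filter(
  (item) => Number(item.json.created?.time ?? 0) > lastProcessedAt
);
```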
by M Sayed
Automate Egyptian gold and currency price monitoring with beautiful Telegram notifications! 🚀

This workflow scrapes live gold prices and official exchange rates from the Egyptian market every hour and sends professionally formatted updates to your Telegram channel/group.

✨ Features:
- 🕐 Smart Scheduling: Runs hourly between 10 AM and 10 PM (Cairo timezone); a scheduling sketch follows below.
- 🥇 Gold Prices: Tracks different gold types with buy/sell rates
- 💱 Currency Rates: Official exchange rates (USD, EUR, SAR, AED, GBP, etc.)
- 🎨 Beautiful Formatting: Emoji-rich messages with proper Arabic text formatting
- ⚡ Reliable: Built-in retry mechanisms and error handling
- 🇪🇬 Localized: Tailored specifically for the Egyptian market
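One way to express the Cairo-hours window is a single cron expression on the Schedule Trigger, assuming the workflow (or instance) timezone is set to Africa/Cairo; the Code-node guard below is an alternative sketch, not the template's exact implementation.

```javascript
// Schedule Trigger cron (workflow timezone = Africa/Cairo):
//   0 10-22 * * *   -> top of every hour, 10:00 through 22:00
// Alternatively, guard an hourly trigger with a Code node like this:
const hour = Number(
  new Intl.DateTimeFormat('en-GB', {
    hour: 'numeric',
    hour12: false,
    timeZone: 'Africa/Cairo',
  }).format(new Date())
);

return hour >= 10 && hour <= 22 ? items : []; // skip runs outside the window
```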
by Trung Tran
AI-Powered AWS S3 Manager with Audit Logging in n8n (Slack/ChatOps Workflow)

> This n8n workflow empowers users to manage AWS S3 buckets and files using natural language via Slack or chat platforms. Equipped with an OpenAI-powered Agent and integrated audit logging to Google Sheets, it supports operations like listing buckets, copying/deleting files, and managing folders, and automatically records every action for compliance and traceability.

👥 Who's it for
This workflow is built for:
- DevOps engineers who want to manage AWS S3 using natural chat commands.
- Technical support teams interacting with AWS via Slack, Telegram, etc.
- Automation engineers building ChatOps tools.
- Organizations that require audit logs for every cloud operation.
Users don't need AWS Console or CLI access; just send a message like "Copy file from dev to prod".

⚙️ How it works / What it does
This workflow turns natural chat input into automated AWS S3 actions using an OpenAI-powered AI Agent in n8n.

🔁 Workflow Overview:
1. Trigger: A user sends a message in Slack, Telegram, etc.
2. AI Agent:
   - Interprets the message
   - Calls one of 6 S3 tools: ListBuckets, ListObjects, CopyObject, DeleteObject, ListFolders, CreateFolder
3. S3 Action: Performs the requested AWS S3 operation.
4. Audit Log: Logs the tool call to Google Sheets using AddAuditLog, including timestamp, tool used, parameters, prompt, reasoning, and user info.

🛠️ How to set up
Step-by-step Setup:
1. Webhook Trigger
   - Slack, Telegram, or custom chat platform → connects to n8n.
2. OpenAI Agent
   - Model: gpt-4 or gpt-3.5-turbo
   - Memory: Simple Memory Node
   - Prompt: Instructs the agent to always follow tool calls with an AddAuditLog call.
3. AWS S3 Nodes
   - Configure each tool with AWS credentials.
   - Tools: getAll: bucket, getAll: file, copy: file, delete: file, getAll: folder, create: folder
4. Google Sheets Node
   - Sheet: AWS S3 Audit Logs
   - Operation: Append or Update Row
   - Columns (must match input keys): timestamp, tool, status, chat_prompt, parameters, user_name, tool_call_reasoning (see the example payload at the end of this section)
5. Agent Tool Definitions
   - Include AddAuditLog as a 7th tool.
   - The agent calls it immediately after every S3 action (except when logging itself).

✅ Requirements
- [ ] n8n instance with AI Agent feature
- [ ] OpenAI API Key
- [ ] AWS IAM user with S3 access
- [ ] Google Sheet with required columns
- [ ] Chat integration (Slack, Telegram, etc.)

🧩 How to customize the workflow

| Feature | Customization Tip |
|----------------------|--------------------------------------------------------------|
| 🌎 Multi-region S3 | Let users include region in the message or agent memory |
| 🛡️ Restricted actions | Use memory/user ID to limit delete/copy actions |
| 📁 Folder filtering | Extend ListObjects with prefix/suffix filters |
| 📤 Upload file | Add PutObject with pre-signed URL support |
| 🧾 Extra logging | Add IP, latency, error trace to audit logs |
| 📊 Reporting | Link Google Sheet to Looker Studio for audit dashboards |
| 🚨 Security alerts | Notify via Slack/Email when DeleteObject is triggered |
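For reference, an audit-log row passed to the AddAuditLog tool could look like the sketch below. The keys mirror the Google Sheet columns listed in the setup steps; the values are invented examples, not output from a real run.

```javascript
// Illustrative audit-log payload for the AddAuditLog tool. Keys mirror
// the Google Sheet columns above; the values are examples only.
const auditRow = {
  timestamp: new Date().toISOString(),
  tool: 'CopyObject',
  status: 'success',
  chat_prompt: 'Copy file report.csv from dev to prod',
  parameters: JSON.stringify({
    sourceBucket: 'dev-bucket',      // example bucket names
    destinationBucket: 'prod-bucket',
    key: 'report.csv',
  }),
  user_name: 'jane.doe',
  tool_call_reasoning: 'User asked to promote a file from dev to prod.',
};

return { json: auditRow };
```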
by Vigh Sandor
PKI Certificate & CRL Monitor - Auto Expiration Alert System

Overview
This n8n workflow provides automated monitoring of Public Key Infrastructure (PKI) components including CA certificates, Certificate Revocation Lists (CRLs), and associated web services. It extracts certificate information from the TSL (Trusted Service List) (the Hungarian list is the default example in the workflow), monitors expiration dates, and sends alerts via Telegram and SMS when critical thresholds are reached.

Features
- Automated extraction of certificate URLs from TSL XML
- CA certificate expiration monitoring
- CRL expiration tracking
- Website availability monitoring with retry mechanism
- Multi-channel alerting (Telegram and SMS)
- Scheduled execution every 12 hours
- 17-hour warning threshold for expirations

Setup Instructions

Prerequisites
- n8n Instance: Running n8n installation with Linux environment
- Telegram Bot: Created via @BotFather
- Textbelt API Key: For SMS notifications (optional)
- Network Access: To reach TSL source and certificate URLs
- Linux Tools: OpenSSL, curl, libxml2-utils, jq (auto-installed)

Configuration Steps

1. Telegram Setup
Create Telegram Bot:
- Open Telegram and search for @BotFather
- Send /newbot and follow the prompts
- Save the bot token (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz)
Create Alert Channel:
- Create a new Telegram channel for alerts
- Add your bot as administrator
- Get the channel ID:
  - Send a test message to the channel
  - Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
  - Find "chat":{"id":-100XXXXXXXXXX} - this is your channel ID

2. SMS Setup (Optional)
Textbelt Configuration:
- Register at https://textbelt.com
- Purchase credits and obtain an API key
- Note: the free tier allows 1 SMS/day for testing

3. Configure Alert Nodes
Update these nodes with your credentials:
CRL Alert Node:
- Open the CRL Alert --- Telegram & SMS node
- Replace YOUR-TELEGRAM-BOT-TOKEN with your bot token
- Replace YOUR-TELEGRAM-CHANNEL-ID with your channel ID
- Replace +36301234567 with target phone number(s)
- Replace YOUR-TEXTBELT-API-KEY with your Textbelt key
CA Alert Node:
- Open the CA Alert --- Telegram & SMS node
- Apply the same replacements as above
Website Down Alert Node:
- Open the Send Website Down - Telegram & SMS node
- Apply the same replacements as above

4. TSL Source Configuration
The workflow defaults to the Hungarian TSL:
- URL: http://www.nmhh.hu/tl/pub/HU_TL.xml
- To change it, edit the Collect Checking URL list node
Trust list references: https://ec.europa.eu/tools/lotl/eu-lotl.xml (to find more TSL lists to replace the default) and https://www.etsi.org/deliver/etsi_ts/119600_119699/119615/01.02.01_60/ts_119615v010201p.pdf (Technical Specification of the Trust Lists)

5. Threshold Configuration
- Default warning threshold: 17 hours before expiration
- To modify the CRL threshold: edit the nextUpdate - TimeFilter node
- To modify the CA threshold: edit the nextUpdate - TimeFilter1 node
- Change the value in the condition: if (diffHours < 17) (a sketch of this check appears at the end of this section)

Activation
1. Save all configuration changes
2. Test with the Execute With Manual Start trigger
3. Verify alerts are received
4. Toggle the workflow to Active status for scheduled operation

How to Use

Automatic Operation
Once activated, the workflow runs automatically:
- Frequency: Every 12 hours
- Process:
  1. Downloads the TSL XML
  2. Extracts all certificate URLs
  3. Checks each URL type (CRL, CA, or other)
  4. Validates expiration dates
  5. Sends alerts for critical items

Manual Execution
For immediate checks:
1. Open the workflow
2. Click the Execute With Manual Start node
3. Click "Execute Node"
4. Monitor execution progress

Understanding Alerts

CRL Expiration Alert
Message Format:
ALERT! with [Issuer CN] !!!CRL EXPIRATION!!! Will be under 17 hour ([Next Update Time])! Last updated: [Last Update Time]
Trigger Conditions:
- CRL expires in less than 17 hours
- CRL download successful but expiration imminent

CA Certificate Alert
Message Format:
ALERT!/EXPIRED! with [Subject CN] !!!CA EXPIRATION PROBLEM!!! The expiration time: ([Not After Date]) Last updated: ([Not Before Date])
Trigger Conditions:
- Certificate expires in less than 17 hours (ALERT!)
- Certificate already expired (EXPIRED!)

Website Down Alert
Message Format:
ALERT! The [URL] !!!NOT AVAILABLE!!! Service outage probable! Intervention required!
Trigger Conditions:
- Initial HTTP request fails
- Retry after wait period also fails
- HTTP status code not 200

Monitoring Dashboard

Execution History
- Navigate to the n8n Executions tab
- Filter by workflow name
- Review successful/failed runs

Alert History
Check the Telegram channel for:
- Alert timestamps
- Affected certificates/services
- Expiration details

Troubleshooting

No Alerts Received
Check Telegram Bot:
- Verify the bot is an admin in the channel
- Test with a manual message via the API
- Confirm the channel ID is correct
Check Workflow Execution:
- Review execution logs in n8n
- Look for error nodes (red indicators)
- Verify the TSL URL is accessible

False Positives
- Verify the system time is correct
- Check timezone settings
- Review threshold values

Missing Certificates
- Some certificates may not have URLs
- The TSL may be temporarily unavailable
- Check XML parsing in the logs

Performance Issues
Slow Execution:
- Large TSL files take time to parse
- Network latency affects URL checks
- Consider increasing timeout values
Memory Issues:
- The workflow processes many URLs sequentially
- Monitor n8n server resources
- Consider increasing batch intervals

Advanced Configuration

Modify Check Frequency
Edit the Execute With Scheduled Start node:
- Change the interval type (hours/days/weeks)
- Adjust the interval value
- Consider peak/off-peak scheduling

Add Custom TSL Sources
In the Collect Checking URL list node:
URL="https://your-tsl-source.com/tsl.xml"

Customize Alert Messages
Edit alert nodes to modify message templates:
- Add organization name
- Include escalation contacts
- Add remediation instructions

Filter Certificate Types
Modify URL detection patterns:
- Is this CRL? node: adjust CRL detection
- Is this CA? node: adjust CA detection
- Add new patterns as needed

Adjust Retry Logic
Wait B4 Retry node:
- Default: Immediate retry
- Can add delay (seconds/minutes)
- Useful for transient network issues

Maintenance

Regular Tasks
- Weekly: Review alert frequency
- Monthly: Validate phone numbers/channels
- Quarterly: Update TSL source URLs
- Annually: Review threshold values

Log Management
- Clear old execution logs periodically
- Archive alert history from Telegram
- Document false positives for tuning

Updates
- Keep n8n updated for security patches
- Monitor OpenSSL versions for compatibility
- Update notification service APIs as needed

Security Considerations
- Store API keys in the n8n credentials manager
- Use environment variables for sensitive data
- Restrict workflow edit access
- Monitor for unauthorized changes
- Regularly rotate API keys
- Use HTTPS for TSL sources when available

Compliance Notes
- Ensure monitoring aligns with PKI policies
- Document alert response procedures
- Maintain an audit trail of certificate issues
- Consider regulatory requirements for uptime

Integration Options
- Connect to ticketing systems for alert tracking
- Add database logging for compliance
- Integrate with monitoring dashboards
- Create escalation workflows for critical alerts

Best Practices
- Test alerts monthly to ensure delivery
- Maintain multiple notification channels
- Document response procedures for each alert type
- Set up redundant monitoring if critical
- Review and tune thresholds based on operational needs
- Keep contact lists updated
- Consider time zones for global operations
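A minimal sketch of the expiration check behind the nextUpdate - TimeFilter nodes, assuming the earlier OpenSSL step has already parsed notAfter (for certificates) or nextUpdate (for CRLs) into the item; field names are assumptions to match against your workflow's actual output.

```javascript
// n8n Code node (Run Once for Each Item) – sketch of the expiration
// check behind the "nextUpdate - TimeFilter" nodes. Assumes notAfter
// (or nextUpdate for CRLs) was parsed by the earlier OpenSSL step.
const WARN_HOURS = 17; // default threshold from the setup notes

const notAfter = new Date($json.notAfter ?? $json.nextUpdate);
const diffHours = (notAfter.getTime() - Date.now()) / (1000 * 60 * 60);

return {
  json: {
    ...$json,
    diffHours: Math.round(diffHours),
    expired: diffHours <= 0,       // EXPIRED! message path
    alert: diffHours < WARN_HOURS, // ALERT! Telegram/SMS path
  },
};
```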
by Kamran habib
N8N Workflow: AI Reddit Problem Detection & Auto-Solution Commenter 🤖

This n8n workflow automates Reddit community engagement by detecting posts that discuss problems and automatically replying with AI-generated solutions, powered by Google Gemini. It's designed for developers, automation creators, and brands who want to provide helpful, automated responses to Reddit users discussing issues in their niche.

How It Works
1. The workflow starts with a Manual Trigger (When clicking 'Execute workflow').
2. Search for a Post: It scans the r/n8n subreddit (or any subreddit you set) for recent posts containing the keyword "Why I stopped using".
3. Filter Posts (If Node): Filters posts that have 2 or more upvotes and non-empty text, ensuring only quality discussions are analyzed. (A sketch of this filter appears at the end of this section.)
4. Edit Fields: Extracts post details such as title, body text, upvotes, creation time, and subreddit ID for AI processing.
5. AI Agent + Google Gemini Chat Model: The first AI node analyzes the post and decides whether it's describing a problem or frustration related to AI automation. Gemini responds with "Yes" or "No."
6. Conditional Branch (If1 Node): If "Yes," the post is confirmed as discussing a problem. The workflow then triggers the second AI Agent.
7. AI Agent 2 + Gemini: The second AI node uses Gemini to generate a helpful and concise solution addressing the issue mentioned in the Reddit post (for example, offering a fix, suggestion, or new idea).
8. Merge & Log Data: The AI's findings (post details + solution) are merged and saved into a connected Google Sheet for tracking community insights.
9. Comment on Reddit: The workflow automatically posts the AI-generated solution as a comment reply on the original Reddit thread, engaging users directly.

How To Use
1. Import the provided JSON workflow into your n8n dashboard.
2. Set up the required credentials:
   - Reddit OAuth2 API – for searching and posting comments.
   - Google Gemini (PaLM) API – for AI text analysis and solution generation.
   - Google Sheets API – for logging post data and AI results.
3. Adjust the subreddit name, search keyword, or prompts to fit your niche.
4. Click Execute Workflow to run the automation.

Requirements
- Reddit Developer Account (OAuth2 credentials).
- Google Gemini (PaLM) API account for AI processing.
- Google Sheets account for saving analysis results.

How To Customize
- Change the search keyword (e.g., "help with automation," "issue with API," etc.).
- Modify the AI prompts to tailor the solution style (technical, friendly, educational, etc.).
- Edit the Google Sheet fields to capture more or fewer details.
- Enable/disable the comment node if you want to manually approve replies before posting.
- Adjust the Gemini model name (e.g., models/gemini-2.0-flash) or parameters for faster or more creative responses.
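A minimal sketch of the quality filter plus the yes/no classification prompt sent to Gemini. The ups and selftext fields match Reddit's post payload; the prompt wording is illustrative, not the template's exact text.

```javascript
// n8n Code node (Run Once for All Items) – sketch of the quality filter
// ("If" node) and the yes/no classification prompt for the first agent.
// ups/selftext follow Reddit's post fields; the prompt is illustrative.
return items
  .filter((item) => {
    const p = item.json;
    // Quality gate: 2+ upvotes and non-empty body text.
    return (p.ups ?? 0) >= 2 && (p.selftext ?? '').trim() !== '';
  })
  .map((item) => {
    const p = item.json;
    const classificationPrompt =
      `Does the following Reddit post describe a problem or frustration ` +
      `related to AI automation? Answer only "Yes" or "No".\n\n` +
      `Title: ${p.title}\nBody: ${p.selftext}`;
    return { json: { ...p, classificationPrompt } };
  });
```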