by Vigh Sandor
PKI Certificate & CRL Monitor - Auto Expiration Alert System

## Overview
This n8n workflow provides automated monitoring of Public Key Infrastructure (PKI) components, including CA certificates, Certificate Revocation Lists (CRLs), and associated web services. It extracts certificate information from a TSL (Trusted Service List); the Hungarian TSL is the default in this workflow. It monitors expiration dates and sends alerts via Telegram and SMS when critical thresholds are reached.

## Features
- Automated extraction of certificate URLs from TSL XML
- CA certificate expiration monitoring
- CRL expiration tracking
- Website availability monitoring with retry mechanism
- Multi-channel alerting (Telegram and SMS)
- Scheduled execution every 12 hours
- 17-hour warning threshold for expirations

## Setup Instructions

### Prerequisites
- **n8n Instance**: Running n8n installation with a Linux environment
- **Telegram Bot**: Created via @BotFather
- **Textbelt API Key**: For SMS notifications (optional)
- **Network Access**: To reach the TSL source and certificate URLs
- **Linux Tools**: OpenSSL, curl, libxml2-utils, jq (auto-installed)

### Configuration Steps

#### 1. Telegram Setup
Create the Telegram bot:
1. Open Telegram and search for @BotFather
2. Send /newbot and follow the prompts
3. Save the bot token (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz)

Create the alert channel:
1. Create a new Telegram channel for alerts
2. Add your bot as administrator
3. Get the channel ID:
   - Send a test message to the channel
   - Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
   - Find "chat":{"id":-100XXXXXXXXXX} - this is your channel ID

#### 2. SMS Setup (Optional)
Textbelt configuration:
1. Register at https://textbelt.com
2. Purchase credits and obtain an API key
3. Note: the free tier allows 1 SMS/day for testing

#### 3. Configure Alert Nodes
Update these nodes with your credentials:

**CRL Alert node**:
- Open the CRL Alert --- Telegram & SMS node
- Replace YOUR-TELEGRAM-BOT-TOKEN with your bot token
- Replace YOUR-TELEGRAM-CHANNEL-ID with your channel ID
- Replace +36301234567 with the target phone number(s)
- Replace YOUR-TEXTBELT-API-KEY with your Textbelt key

**CA Alert node**:
- Open the CA Alert --- Telegram & SMS node
- Apply the same replacements as above

**Website Down Alert node**:
- Open the Send Website Down - Telegram & SMS node
- Apply the same replacements as above

#### 4. TSL Source Configuration
The workflow defaults to the Hungarian TSL:
- URL: http://www.nmhh.hu/tl/pub/HU_TL.xml
- To change it, edit the Collect Checking URL list node

Trust list references:
- https://ec.europa.eu/tools/lotl/eu-lotl.xml (the EU List of Trusted Lists, to find other TSLs to replace the default)
- https://www.etsi.org/deliver/etsi_ts/119600_119699/119615/01.02.01_60/ts_119615v010201p.pdf (the Technical Specification of the Trust Lists)

#### 5. Threshold Configuration
The default warning threshold is 17 hours before expiration:
- To modify the CRL threshold: edit the nextUpdate - TimeFilter node
- To modify the CA threshold: edit the nextUpdate - TimeFilter1 node
- Change the value in the condition: if (diffHours < 17) (a sketch of this check appears at the end of this description)

### Activation
1. Save all configuration changes
2. Test with the Execute With Manual Start trigger
3. Verify alerts are received
4. Toggle the workflow to Active status for scheduled operation

## How to Use

### Automatic Operation
Once activated, the workflow runs automatically:
- **Frequency**: Every 12 hours
- **Process**:
  1. Downloads the TSL XML
  2. Extracts all certificate URLs
  3. Checks each URL type (CRL, CA, or other)
  4. Validates expiration dates
  5. Sends alerts for critical items

### Manual Execution
For immediate checks:
1. Open the workflow
2. Click the Execute With Manual Start node
3. Click "Execute Node"
4. Monitor execution progress

### Understanding Alerts

**CRL Expiration Alert**
Message format: ALERT! with [Issuer CN] !!!CRL EXPIRATION!!! Will be under 17 hour ([Next Update Time])! Last updated: [Last Update Time]
Trigger conditions:
- CRL expires in less than 17 hours
- CRL download successful but expiration imminent

**CA Certificate Alert**
Message format: ALERT!/EXPIRED! with [Subject CN] !!!CA EXPIRATION PROBLEM!!! The expiration time: ([Not After Date]) Last updated: ([Not Before Date])
Trigger conditions:
- Certificate expires in less than 17 hours (ALERT!)
- Certificate already expired (EXPIRED!)

**Website Down Alert**
Message format: ALERT! The [URL] !!!NOT AVAILABLE!!! Service outage probable! Intervention required!
Trigger conditions:
- Initial HTTP request fails
- Retry after the wait period also fails
- HTTP status code is not 200

## Monitoring Dashboard

### Execution History
1. Navigate to the n8n Executions tab
2. Filter by workflow name
3. Review successful/failed runs

### Alert History
Check the Telegram channel for:
- Alert timestamps
- Affected certificates/services
- Expiration details

## Troubleshooting

### No Alerts Received
Check the Telegram bot:
- Verify the bot is an admin in the channel
- Test with a manual message via the API
- Confirm the channel ID is correct

Check workflow execution:
- Review execution logs in n8n
- Look for error nodes (red indicators)
- Verify the TSL URL is accessible

### False Positives
- Verify the system time is correct
- Check timezone settings
- Review threshold values

### Missing Certificates
- Some certificates may not have URLs
- The TSL may be temporarily unavailable
- Check XML parsing in the logs

### Performance Issues
Slow execution:
- Large TSL files take time to parse
- Network latency affects URL checks
- Consider increasing timeout values

Memory issues:
- The workflow processes many URLs sequentially
- Monitor n8n server resources
- Consider increasing batch intervals

## Advanced Configuration

### Modify Check Frequency
Edit the Execute With Scheduled Start node:
- Change the interval type (hours/days/weeks)
- Adjust the interval value
- Consider peak/off-peak scheduling

### Add Custom TSL Sources
In the Collect Checking URL list node:
URL="https://your-tsl-source.com/tsl.xml"

### Customize Alert Messages
Edit the alert nodes to modify message templates:
- Add your organization name
- Include escalation contacts
- Add remediation instructions

### Filter Certificate Types
Modify the URL detection patterns:
- **Is this CRL?** node: adjust CRL detection
- **Is this CA?** node: adjust CA detection
- Add new patterns as needed

### Adjust Retry Logic
Wait B4 Retry node:
- Default: immediate retry
- A delay (seconds/minutes) can be added
- Useful for transient network issues

## Maintenance

### Regular Tasks
- **Weekly**: Review alert frequency
- **Monthly**: Validate phone numbers/channels
- **Quarterly**: Update TSL source URLs
- **Annually**: Review threshold values

### Log Management
- Clear old execution logs periodically
- Archive alert history from Telegram
- Document false positives for tuning

### Updates
- Keep n8n updated for security patches
- Monitor OpenSSL versions for compatibility
- Update notification service APIs as needed

## Security Considerations
- Store API keys in the n8n credentials manager
- Use environment variables for sensitive data
- Restrict workflow edit access
- Monitor for unauthorized changes
- Regularly rotate API keys
- Use HTTPS for TSL sources when available

## Compliance Notes
- Ensure monitoring aligns with your PKI policies
- Document alert response procedures
- Maintain an audit trail of certificate issues
- Consider regulatory requirements for uptime

## Integration Options
- Connect to ticketing systems for alert tracking
- Add database logging for compliance
- Integrate with monitoring dashboards
- Create escalation workflows for critical alerts

## Best Practices
- Test alerts monthly to ensure delivery
- Maintain multiple notification channels
- Document response procedures for each alert type
- Set up redundant monitoring if critical
- Review and tune thresholds based on operational needs
- Keep contact lists updated
- Consider time zones for global operations
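For reference, here is a minimal sketch of the check the TimeFilter nodes perform, written as n8n Code-node JavaScript. The `nextUpdate` field name is an assumption based on the node names above; map it to whatever your parsing step actually emits.

```javascript
// Hedged sketch of the 17-hour expiry filter, not the exact node contents.
// Assumes each item carries a parsed `nextUpdate` timestamp from the
// CRL/CA inspection step upstream.
const results = [];
for (const item of $input.all()) {
  const nextUpdate = new Date(item.json.nextUpdate);
  const diffHours = (nextUpdate.getTime() - Date.now()) / (1000 * 60 * 60);
  if (diffHours < 17) { // warning threshold; change this value to adjust it
    results.push({ json: { ...item.json, diffHours } });
  }
}
return results;
```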
by Roni Bandini
## How it works
This template waits for an external button press via webhook, then reads a Google Sheet of pending shipments. The sheet contains the columns idEnvio, fechaOrden, nombre, direccion, detalle, and enviado. It selects the next shipment using Google Gemini Flash 2.5, considering not only the order date but also the customer's comments (see the sketch after the setup steps). Once the next shipment is selected, the enviado column is marked with an X and the shipping information is forwarded to the Unihiker n8n Terminal.

## Setup
1. Create a new Google Sheet and name it "Shipping".
2. Add the following column headers in the first row: idEnvio, fechaOrden, nombre, direccion, detalle, and enviado.
3. Connect your Google Sheets and Google Gemini credentials.
4. In your n8n workflow, select the Shipping sheet in the Google Sheets node.
5. Copy the webhook URL and paste it into the .ino code for your Unihiker n8n Terminal. 🚀
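As a rough illustration of the selection step, a hypothetical Code-node snippet could pre-filter unshipped rows and build the Gemini prompt like this. Column names come from the sheet described above; everything else is illustrative, since the template delegates the actual choice to the model.

```javascript
// Hypothetical sketch only: shows the shape of the input Gemini receives.
const rows = $input.all().map(i => i.json);
const pending = rows.filter(r => !r.enviado || String(r.enviado).trim() === '');
const prompt =
  'Pick the single next shipment, weighing order date and customer comments:\n' +
  pending.map(r => `#${r.idEnvio} | ${r.fechaOrden} | ${r.nombre} | ${r.detalle}`).join('\n');
return [{ json: { prompt, pendingCount: pending.length } }];
```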
by Patrick Jennings
Sleeper NFL Draft Results to Telegram

Easily retrieve and send your Sleeper fantasy football draft picks to Telegram with this plug-and-play n8n workflow template.

## What This Workflow Does
This workflow allows you to:
- Accept a /team {username} command via Telegram
- Use the Sleeper API to:
  - Get the user's ID from their username
  - Find the most recent NFL draft associated with that user
  - Fetch all the picks made in that draft
- Filter only the picks made by that user
- Format the data into a readable message
- Send back a Telegram message with full pick results, including:
  - Round, draft slot, overall pick
  - Player name, position, and team

A sketch of the underlying Sleeper API calls appears at the end of this description.

## Requirements
- **Sleeper Fantasy Football account** with at least one completed draft
- **Telegram Bot** created via BotFather
- **n8n instance** with:
  - Telegram Trigger credentials set up
  - Access to external HTTP requests (Sleeper API)

## Setup Instructions
1. Import the template into your n8n instance.
2. Add Telegram credentials:
   - Go to Credentials > Telegram API
   - Add your bot token
   - Replace REPLACE_WITH_YOUR_TELEGRAMAPI_CREDENTIAL in the workflow
3. Customize:
   - Optional: modify the /team command trigger
   - Optional: adjust the formatting of the Telegram message

## Example Telegram Response
Your draft results from the 2024 Your City Here (dynastyppr) season! Here are your picks:
• Round 1, Pick 4: (4 overall) Christian McCaffrey (RB - SF)
• Round 2, Pick 21: (21 overall) Garrett Wilson (WR - NYJ)
• Round 3, Pick 28: (28 overall) Travis Etienne (RB - JAX)

## Notes
- This workflow defaults to the first Sleeper league/draft returned; you can enhance the logic to let users select from multiple leagues.
- The draft year is hardcoded to 2024. Update it for future seasons as needed.
- Does not require Airtable or Google Sheets.
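For readers who want to see what the HTTP Request nodes do under the hood, here is a sketch of the three Sleeper calls as a standalone Node.js ES module. Endpoints and field names follow Sleeper's public API; the username is just an example.

```javascript
// Sketch of the Sleeper lookups behind this workflow (run as an ES module).
const username = 'dynastyppr'; // example username from the response above
const user = await (await fetch(`https://api.sleeper.app/v1/user/${username}`)).json();
const drafts = await (await fetch(
  `https://api.sleeper.app/v1/user/${user.user_id}/drafts/nfl/2024`)).json();
const picks = await (await fetch(
  `https://api.sleeper.app/v1/draft/${drafts[0].draft_id}/picks`)).json();

// Keep only this user's picks and format one Telegram line per pick.
const lines = picks
  .filter(p => p.picked_by === user.user_id)
  .map(p => `• Round ${p.round}, Pick ${p.draft_slot}: (${p.pick_no} overall) ` +
            `${p.metadata.first_name} ${p.metadata.last_name} ` +
            `(${p.metadata.position} - ${p.metadata.team})`);
console.log(lines.join('\n'));
```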
by Kevin Armbruster
Automatically add Travel time blockers before Appointments

This bot automatically adds travel time blockers to your calendar, so you never arrive late to an appointment again.

## How it works
- **Trigger**: The workflow is initiated daily at 7 AM by a Schedule Trigger.
- **AI Agent**: An AI Agent node orchestrates the main logic.
- **Fetch events**: It uses the get_calendar_events tool to retrieve all events scheduled for the current day.
- **Identify events with location**: It then filters these events to identify those that have a specified location.
- **Check for existing travel time blockers**: For each event with a location, it checks whether a travel time blocker already exists. Events that do *not* have such a blocker are marked for processing.
- **Calculate travel time**: Using the Google Directions API, it determines how long it takes to get to the event's location (see the sketch after the setup steps). The starting location is your **Home Address** by default, unless there is a previous event within 2 hours before the event, in which case the location of that previous event is used.
- **Create travel time blocker**: Finally, it uses the create_calendar_event tool to create the travel time blocker, with a duration equal to the calculated travel time plus a 10-minute buffer.

## Set up steps
1. Set variables:
   - Home address
   - Blocker name
   - Mode of transportation
2. Connect your LLM provider
3. Connect your Google Calendar
4. Connect your Google Directions API
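The travel-time lookup itself boils down to a single Directions API request. Here is a minimal standalone sketch (ES module); the addresses and environment-variable name are placeholders, not values from the template.

```javascript
// Sketch of the Directions call the agent's tool makes; all inputs here
// are illustrative placeholders.
const event = { location: 'Alexanderplatz, Berlin' }; // hypothetical event
const params = new URLSearchParams({
  origin: '1 Example Street, Berlin', // home address, or the prior event's location
  destination: event.location,
  mode: 'driving',                    // your configured mode of transportation
  key: process.env.GOOGLE_DIRECTIONS_KEY,
});
const res = await fetch(`https://maps.googleapis.com/maps/api/directions/json?${params}`);
const data = await res.json();
const travelSeconds = data.routes[0].legs[0].duration.value; // duration in seconds
const blockerMinutes = Math.ceil(travelSeconds / 60) + 10;   // + 10-minute buffer
console.log(`Create a ${blockerMinutes}-minute travel blocker before the event`);
```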
by Denis
## What this workflow does
A complete Airtable database management system using MCP (Model Context Protocol) for AI agents. Create bases and tables with complex field types, manage records, and maintain state with Redis storage.

## Setup steps
1. Add your Airtable Personal Access Token to credentials
2. Configure the Redis connection for ID storage
3. Get your workspace ID from Airtable (starts with wsp...)
4. Connect to the MCP Server Trigger
5. Configure your AI agent with the provided instructions

## Key features
- Create new Airtable bases and custom tables
- Support for all field types (date, number, select, etc.)
- Full CRUD operations on records
- Rename tables and fields
- Store base/workspace IDs to avoid repeated requests (see the Redis sketch below)
- Generic operations work with ANY Airtable structure

## Included operations
- create_base, create_custom_table, add_field
- get_table_ids, get_existing_records
- update_record, rename_table, rename_fields
- delete_record
- get/set base_id and workspace_id (Redis storage)

## Notes
Check the sticky notes in the workflow for ID locations and field type requirements.
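The ID-caching idea is simple enough to sketch outside n8n. Assuming the node-redis client and an illustrative key name, the get/set pattern looks roughly like this:

```javascript
// Sketch of the Redis-backed ID cache (key names are illustrative).
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

let baseId = await redis.get('airtable:base_id');
if (!baseId) {
  // First run: resolve the base via the Airtable API once, then cache it
  // so later agent calls skip the lookup entirely.
  baseId = 'appXXXXXXXXXXXXXX'; // placeholder for the freshly fetched ID
  await redis.set('airtable:base_id', baseId);
}
console.log('Using base:', baseId);
await redis.quit();
```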
by Guillaume Duvernay
Stop duplicating your work! This template demonstrates a powerful design pattern to handle multiple triggers (e.g., Form, Webhook, Sub-workflow) within a single, unified workflow. By using a "normalize and consolidate" technique, your core logic becomes independent of the trigger that started it, making your automations cleaner, more scalable, and far easier to maintain.

## Who is this for?
- **n8n developers & architects**: Build robust, enterprise-grade workflows that are easy to maintain.
- **Automation specialists**: Integrate the same core process with multiple external systems without repeating yourself.
- **Anyone who values clean design**: Apply the DRY (Don't Repeat Yourself) principle to your automations.

## What problem does this solve?
- **Reduces duplication**: Avoids creating near-identical workflows for each trigger source.
- **Simplifies maintenance**: Update your core logic in one place, not across multiple workflows.
- **Improves scalability**: Easily add new triggers without altering the core processing logic.
- **Enhances readability**: A clear separation of data intake from core logic makes workflows easier to understand.

## How it works (The "Normalize & Consolidate" Pattern)
1. Trigger: The workflow starts from one of several possible entry points, each with a unique data structure.
2. Normalize: Each trigger path immediately flows into a dedicated Set node. This node acts as an adapter, reformatting the unique data into a standardized schema with consistent key names (e.g., mapping body.feedback to feedback).
3. Consolidate: All "normalize" nodes connect to a single Set node. This node uses the generic {{ $json.key_name }} expression to accept the standardized data from any branch. From here, the workflow is a single, unified path.

## Setup
This template is a blueprint. To adapt it:
1. Replace the triggers with your own.
2. Normalize your data: after each trigger, use a Set node to map its unique output to your common schema (illustrated below).
3. Connect to the consolidator: link all your "normalize" nodes to the Consolidate trigger data node.
4. Build your core logic after the consolidation point, referencing the unified data.

## Taking it further
- **Merge any branches**: Use this pattern to merge any parallel branches in a workflow, not just triggers.
- **Create robust error handling**: Unify "success" and "error" paths before a final notification step to report on the outcome.
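To make the pattern concrete, here is a plain-JavaScript illustration of the normalize step, using the body.feedback example from above. The form field names are invented for the demo.

```javascript
// Two triggers, two shapes: the Set nodes act as adapters so everything
// downstream sees the same schema.
const fromWebhook = { body: { feedback: 'Great product!', user_email: 'a@example.com' } };
const fromForm    = { 'Your feedback': 'Great product!', 'Email': 'a@example.com' }; // invented names

// What each branch's "normalize" Set node produces:
const normalizedWebhook = { feedback: fromWebhook.body.feedback, email: fromWebhook.body.user_email };
const normalizedForm    = { feedback: fromForm['Your feedback'], email: fromForm['Email'] };

// After the consolidation node, core logic only ever references
// {{ $json.feedback }} and {{ $json.email }}, regardless of the trigger.
console.log(normalizedWebhook, normalizedForm);
```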
by Avkash Kakdiya
## How it works
This workflow turns a single planning row in Google Sheets into a fully structured content engine. It generates weighted content pillars, builds a rule-based posting calendar, and then creates publish-ready social posts using AI. The workflow strictly controls format routing, CTA rules, and execution order. All outputs are written back to Google Sheets for easy review and execution.

## Step-by-step

**Step 1: Input capture & pillar generation**
- Google Sheets Trigger – Detects new or updated planning rows.
- Get row(s) in sheet – Fetches brand, platform, scheduling, and promotion inputs.
- Message a model – Calculates calendar metrics and generates platform-specific content pillars.
- Code in JavaScript – Validates the AI output and enforces a 100% weight distribution (see the sketch at the end of this description).
- Append row in sheet – Stores the finalized content pillars in the pillars sheet.

**Step 2: Calendar generation & routing**
- Message a model7 – Generates a full day-by-day content calendar from the pillars.
- Code in JavaScript7 – Normalizes calendar data into a sheet-compatible structure.
- Append row in sheet6 – Saves calendar entries with dates, formats, CTAs, and status.
- Switch By Format – Routes items based on Video vs Non-Video formats.

**Step 3: Post creation & final storage**
- Loop Over Items – Processes each calendar entry one at a time.
- Message a model6 – Creates complete hooks, captions, CTAs, and hashtags.
- Code in JavaScript6 – Formats the AI output for final storage.
- Append row in sheet7 – Stores publish-ready posts in the final sheet.
- Wait – Controls pacing to avoid API rate limits.

## Why use this?
- Eliminates manual content planning and ideation.
- Enforces a strategic content mix and CTA discipline.
- Produces platform-ready posts automatically.
- Keeps all planning, calendars, and content in Google Sheets.
- Scales content operations without extra overhead.
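As an illustration of the validation step, a Code node enforcing the 100% weight rule might look like this. The pillars field name and shape are assumptions; the template's actual code may differ.

```javascript
// Hedged sketch: force pillar weights to total exactly 100 by rescaling
// and pushing any rounding remainder onto the last pillar.
const pillars = $input.first().json.pillars; // assumed shape: [{ name, weight }, ...]
const total = pillars.reduce((sum, p) => sum + p.weight, 0);
if (total !== 100) {
  let running = 0;
  pillars.forEach((p, i) => {
    p.weight = i === pillars.length - 1
      ? 100 - running
      : Math.round((p.weight / total) * 100);
    running += p.weight;
  });
}
return [{ json: { pillars } }];
```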
by Guido X Jansen
AI Council: Multi-Model Consensus with Peer Review

Inspired by Andrej Karpathy's LLM Council, but rebuilt in n8n. This workflow creates a "council" of AI models that independently answer your question, then peer-review each other's responses before a final arbiter synthesizes the best answer.

## Who is this for?
- Anyone preparing for an upcoming meeting who wants to anticipate the different views of its participants
- Anyone looking for "blind spots" in their view on a certain subject
- Researchers wanting more robust AI-generated answers
- Developers exploring multi-model architectures
- Anyone seeking higher-quality responses through AI consensus, potentially with faster/cheaper models
- Teams evaluating different LLM capabilities side by side

## How it works
1. Ask a Question – Submit your query via the Chat Trigger
2. Individual Answers – Four different models (Gemini, Llama, Gemma, Mistral) independently generate responses
3. Peer Review – Each model reviews ALL answers, identifying pros, cons, and an overall assessment
4. Final Synthesis – DeepSeek R1 analyzes all peer reviews and produces a refined, consensus-based final answer

## Setup Instructions

### Prerequisites
- Access to an LLM provider (e.g., an OpenRouter account with API credits)

### Steps
1. Create OpenRouter credentials in n8n:
   - Go to Settings → Credentials → Add Credential
   - Select "OpenRouter" and paste your API key
2. Connect all model nodes to your OpenRouter credential. This example uses Gemini, Llama, Gemma, Mistral, and DeepSeek, but you can use whatever you want. You can also use the same models but change their parameters. Play around to find out what suits you best.
3. Activate the workflow and open the Chat interface to test

## Customization Ideas
You can add as many answer and review models as you want. Note that each AI node is executed in series, so each one adds to the total duration.
- Swap models via OpenRouter's model selector (e.g., use Claude, GPT-4, etc.)
- Adjust the peer review prompt to represent a certain persona or to use domain-specific evaluation criteria
- Add memory nodes for multi-turn conversations
- Connect to Slack/Discord instead of the Chat Trigger
by James Carter
This n8n template generates a dynamic weekly sales report from Airtable and sends it to Slack. It calculates key sales metrics such as total pipeline value, weighted pipeline (based on deal stage), top deal, closed revenue, and win rate, all formatted in a clean Slack message.

## How it works
A schedule trigger starts the workflow (e.g., every Monday). It fetches deal data from Airtable, splits open vs. closed deals, calculates all metrics with JavaScript (see the sketch below), and formats the output. The message is then sent to Slack using Markdown for readability.

## How to use
1. Update the Airtable credentials and select your base and table with fields: Deal Name, Value, Status, etc.
2. Set the Slack channel in the final node to your preferred sales or ops channel.

## Requirements
- Airtable base with relevant deal data (see field structure)
- Slack webhook or token for sending messages

## Customising this workflow
You can adapt the logic to other CRMs like Salesforce or HubSpot, add charts, or tweak stage weights. You can also change the schedule or add filters (e.g., by rep or region).
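Here is a sketch of what the metrics Code node computes, assuming the field names above plus illustrative Status values and stage weights; tweak both to match your base.

```javascript
// Hedged sketch of the weekly metrics; stage weights and Status values
// are assumptions, not the template's exact configuration.
const stageWeights = { Prospecting: 0.1, Qualified: 0.3, Proposal: 0.6, Negotiation: 0.8 };
const deals = $input.all().map(i => i.json);

const open   = deals.filter(d => d.Status === 'Open');
const closed = deals.filter(d => d.Status !== 'Open');
const won    = closed.filter(d => d.Status === 'Closed Won');

const totalPipeline    = open.reduce((s, d) => s + d.Value, 0);
const weightedPipeline = open.reduce((s, d) => s + d.Value * (stageWeights[d.Stage] ?? 0.5), 0);
const closedRevenue    = won.reduce((s, d) => s + d.Value, 0);
const winRate = closed.length ? Math.round((won.length / closed.length) * 100) : 0;
const topDeal = [...open].sort((a, b) => b.Value - a.Value)[0];

return [{ json: { totalPipeline, weightedPipeline, closedRevenue, winRate, topDeal } }];
```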
by higashiyama
AI Team Morale Monitor

## Who's it for
For team leads, HR, and managers who want to monitor the emotional tone and morale of their teams based on message sentiment.

## How it works
1. Trigger: Runs every Monday at 9 AM.
2. Config: Defines your Teams and Slack channels.
3. Fetch: Gathers messages for the week.
4. AI Analysis: Evaluates tone and stress levels.
5. Aggregate: Computes team sentiment averages (sketched at the end of this description).
6. Report: Creates a readable morale summary.
7. Slack Post: Sends the report to your workspace.

## How to set up
1. Connect Microsoft Teams and Slack credentials.
2. Enter your Team and Channel IDs in the Workflow Configuration node.
3. Adjust the schedule if desired.

## Requirements
- Microsoft Teams and Slack access.
- Gemini (or OpenAI) API credentials set in the AI nodes.

## How to customize
- Modify the AI prompts for different insight depth.
- Replace Gemini with other LLMs if preferred.
- Change the posting platform or format.

Note: This workflow uses only linguistic data — no personal identifiers or private metadata.
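The aggregation step could be as small as the sketch below. The team and sentiment fields are assumed outputs of the AI-analysis node, not guaranteed names from this template.

```javascript
// Hedged sketch: average per-message sentiment scores per team.
const scores = {};
for (const item of $input.all()) {
  const { team, sentiment } = item.json; // assumed fields from the AI step
  (scores[team] ??= []).push(sentiment);
}
return Object.entries(scores).map(([team, vals]) => ({
  json: {
    team,
    avgSentiment: +(vals.reduce((s, v) => s + v, 0) / vals.length).toFixed(2),
    messageCount: vals.length,
  },
}));
```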
by LeeWei
⚙️ Sales Assistant Build: Automate Prospect Research and Personalized Outreach for Sales Calls

🚀 Steps to Connect:

**Google Sheets Setup**
- Connect your Google account via OAuth2 in the "Review Calls", "Product List", "Testimonials Tool", "Update Sheet", and "Update Sheets 2" nodes.
- Duplicate the mock Google Sheet (ID: 1u3WMJwYGwZewW1IztY8dfbEf5yBQxVh8oH7LQp4rAk4) to your drive and update the documentId in all Google Sheets nodes to match your copy's ID.
- Ensure the sheet has tabs for "Meeting Data", "Products", and "Success Stories" populated with your data.
- Setup time: ~5 minutes.

**OpenAI API Key**
- Go to OpenAI and generate your API key.
- Paste this key into the credentials for both the "OpenAI Chat Model" and "OpenAI Chat Model1" nodes.
- Setup time: ~2 minutes.

**Tavily API Key**
- Sign up at Tavily and get your API key.
- In the "Tavily" node, replace the placeholder api_key in the JSON body with your key (e.g., "api_key": "your-tavily-key-here"); see the request sketch at the end of this description.
- Setup time: ~3 minutes.

**How it Works**
• Triggers on a new sales call booking (manual for testing).
• Pulls prospect details from Google Sheets and researches their company, tech stack, and recent updates using Tavily.
• Matches relevant products/solutions from your product list and updates the sheet.
• Generates a personalized email confirmation (subject + body) and SMS, using testimonials for relevance.
• Updates the sheet with the outreach content for easy follow-up.

Setup takes ~10-15 minutes total. All nodes are pre-configured—edit only the fields above. Detailed notes (e.g., prompt tweaks) are in sticky notes within the workflow.
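For orientation, the Tavily request the HTTP node sends looks roughly like this as a standalone ES-module sketch. The api_key-in-body shape matches the node configuration described above; the query string is illustrative.

```javascript
// Sketch of the Tavily search request; replace the key and query.
const res = await fetch('https://api.tavily.com/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    api_key: 'your-tavily-key-here',                // your Tavily API key
    query: 'Acme Corp tech stack and recent news',  // built from the prospect row
    max_results: 5,
  }),
});
const { results } = await res.json();
console.log(results);
```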
by David Olusola
🎬 YouTube New Video → Auto-Post Link to Slack

This workflow automatically checks your YouTube channel's RSS feed every 30 minutes and posts a message to Slack when a new video is published. It includes the title, a description snippet, the publish date, and a direct "Watch Now" button.

⚙️ How It Works
1. Check Every 30 Minutes
   - A Cron node runs on a 30-minute interval.
   - Keeps monitoring the channel RSS feed for updates.
2. Fetch YouTube RSS
   - The HTTP Request node retrieves the channel's RSS feed.
   - Uses the format: https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID
3. Parse RSS & Check for New Video
   - A Code node extracts video info: title, link, description, published date.
   - Sorts by most recent publish date.
   - Ensures only new videos within the last 2 hours are processed (avoids duplicate posts). A sketch of this step appears at the end of this description.
4. Format Slack Message
   - Builds a rich Slack message with: video title, description preview, published date, and a "🎥 Watch Now" button.
5. Post to Slack
   - Sends the formatted message to your chosen Slack channel (default: #general).
   - Includes a custom username/icon for branding.

🛠️ Setup Steps
1. Get the YouTube Channel RSS
   - Go to your channel page → View Page Source.
   - Find: channel/UCxxxxxxxxxx (your channel ID).
   - Construct the RSS feed: https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID
   - Replace YOUR_CHANNEL_ID_HERE in the HTTP Request node.
2. Connect Slack
   - Create a Slack app at api.slack.com.
   - Add OAuth scopes: chat:write, channels:read.
   - Install it to your workspace.
   - In n8n, connect your Slack OAuth credentials.
3. Adjust Timing (Optional)
   - Default = runs every 30 minutes.
   - Modify the Cron node if you want faster or slower checks.

📺 Example Slack Output
🎬 New Video Published!
How to Automate Your Business with n8n
📅 Published: Aug 29, 2025
Learn how to connect your apps and automate repetitive tasks using n8n…
With a clickable 🎥 Watch Now button linking directly to the video.

⚡ With this workflow, your Slack team is always up to date on new YouTube uploads — no manual link sharing needed.
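For reference, the parse-and-filter step could be sketched as the Code node below. It assumes the raw feed XML arrives on $json.data and uses simple regexes rather than a full XML parser; the template's actual code may differ.

```javascript
// Hedged sketch: pull title/link/published from the Atom feed and keep
// only entries published within the last 2 hours, newest first.
const xml = $input.first().json.data; // assumed: raw RSS body from the HTTP node
const entries = [...xml.matchAll(/<entry>([\s\S]*?)<\/entry>/g)].map(([, e]) => ({
  title:     e.match(/<title>([\s\S]*?)<\/title>/)?.[1],
  link:      e.match(/<link rel="alternate" href="([^"]+)"/)?.[1],
  published: e.match(/<published>([\s\S]*?)<\/published>/)?.[1],
}));
const twoHoursAgo = Date.now() - 2 * 60 * 60 * 1000;
const fresh = entries
  .filter(e => new Date(e.published).getTime() > twoHoursAgo)
  .sort((a, b) => new Date(b.published) - new Date(a.published));
return fresh.map(e => ({ json: e }));
```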