by Satyam Tripathi
Try It Out!

This n8n template demonstrates how to build an autonomous AI news agent using Decodo MCP that automatically finds, scrapes, and delivers fresh industry news to your team via Slack. Use cases are many – automated news monitoring for your industry, competitive intelligence gathering, startup monitoring, regulatory updates, research automation, or daily briefings for your organization.

How it works
- Define your news topics using the Set node – AI, MCP, web scraping, whatever matters to your business.
- The AI Agent processes those topics using the Gemini Chat Model, determining which tools to use and when.
- Here's where it gets interesting: Decodo MCP gives your AI agent the tools to search Google, scrape websites, and parse content automatically – all while bypassing geo-restrictions and anti-bot measures.
- The agent hunts for fresh articles from the last 48 hours, extracts clean data, and returns structured JSON results.
- Format Results cleans up the AI's messy output and removes duplicates (see the sketch below).
- Your polished news digest gets delivered to Slack with clickable links and summaries.

How to use
- The Schedule Trigger runs daily at 9 AM – adjust the timing or swap in a webhook trigger as needed.
- Customize topics in the Set node to match your industry.
- Scales effortlessly: add more topics, tweak search criteria, done.

Requirements
- Decodo MCP credentials (free trial available) – grab the Smithery connection URL with keys and paste it straight into your n8n MCP node. Done.
- Gemini API key for the AI processing – drop it into the Google Gemini Chat Model node and pick whichever Gemini model fits your needs.
- Slack workspace for delivery – n8n's Slack integration docs have you covered.

What the final output looks like
Here's what your team receives in Slack every morning:

Need help? Join the Discord or email support@decodo.com for questions. Happy Automating!
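The Format Results step is a natural fit for an n8n Code node. Here is a minimal sketch of what the deduplication and Slack formatting might look like, assuming the agent returns an array of articles with `title`, `url`, and `summary` fields (those field names are illustrative, not taken from the template):

```javascript
// Hypothetical Format Results Code node: dedupe articles by URL and
// build a Slack-friendly digest. Field names are assumptions.
const articles = $input.all().flatMap(item => item.json.articles ?? []);

const seen = new Set();
const unique = articles.filter(a => {
  if (!a.url || seen.has(a.url)) return false;
  seen.add(a.url);
  return true;
});

// Slack mrkdwn: <url|title> renders as a clickable link.
const digest = unique
  .map(a => `• <${a.url}|${a.title}>\n${a.summary ?? ''}`.trim())
  .join('\n\n');

return [{ json: { digest, count: unique.length } }];
```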
by Angel Menendez
Analyze Emails for Security Insights

Who is this for?
This workflow is ideal for IT professionals, security analysts, and organizations looking to enhance their email security practices. It is particularly useful for those who need to analyze Gmail email headers for IP tracking, spoofing detection, and sender reputation assessment.

What problem is this workflow solving?
Email spoofing and phishing attacks are significant cybersecurity threats. By analyzing email headers, this workflow provides detailed insight into an email's origin, authentication status, and the reputation of the sending IP address. It helps detect potential spoofing attempts and assess the trustworthiness of incoming emails.

What this workflow does
This n8n workflow automates the analysis of email headers received in Gmail. It performs the following key functions:
- Triggering and Email Header Extraction: Monitors Gmail inboxes for new emails and extracts their headers for analysis.
- Authentication Analysis: Validates SPF, DKIM, and DMARC authentication results to ensure the email adheres to industry-standard security protocols (see the sketch below).
- IP Analysis: Extracts the originating IP address and evaluates its reputation and geographic details using external APIs.
- Reputation Scoring: Integrates with IP Quality Score to detect spam activity and assess the sender's reputation.
- Consolidation and Webhook Response: Aggregates all results into a single JSON response, making it easy to integrate with third-party platforms or tools for further automation.

Setup
- Authenticate Gmail: Configure the Gmail Trigger node with your Gmail account credentials.
- API Keys (optional): Obtain an API key for IP Quality Score (https://ipqualityscore.com) and ensure the IP-API endpoint is accessible. This step is optional, as ipqualityscore.com provides a limited number of free lookups each month. See more details here.
- Activate the Workflow: Ensure the workflow is active to process incoming emails in real time.

How to customize this workflow to your needs
- **Add Alerts:** Use the Gmail - Respond to Webhook node to trigger notifications in Slack, email, or any other communication channel.
- **Integrate with SIEM:** Forward the workflow output to SIEM tools like Splunk or ELK Stack for further analysis.
- **Modify Validation Rules:** Update the SPF, DKIM, or DMARC logic in the Set nodes to align with your organization's security policies.
- **Expand IP Analysis:** Add more APIs or services, such as VirusTotal or AbuseIPDB, to enrich IP reputation data.

This workflow provides a robust foundation for email security monitoring and can be tailored to fit your organization's unique requirements. With its modular design and integration options, it's a versatile tool to enhance your cybersecurity operations.
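For reference, the SPF/DKIM/DMARC check largely comes down to reading the email's `Authentication-Results` header. A minimal Code-node sketch of that parsing follows; the header location (`json.headers['authentication-results']`) is an assumption and depends on how your Gmail Trigger exposes raw headers:

```javascript
// Hypothetical parsing of the Authentication-Results header.
// The property path is an assumption; adjust to your trigger output.
const header = $json.headers?.['authentication-results'] ?? '';

// Typical fragment: "spf=pass ...; dkim=pass ...; dmarc=fail ..."
const result = (mechanism) => {
  const match = header.match(new RegExp(`${mechanism}=(\\w+)`, 'i'));
  return match ? match[1].toLowerCase() : 'none';
};

return [{
  json: {
    spf: result('spf'),
    dkim: result('dkim'),
    dmarc: result('dmarc'),
    authenticated: ['spf', 'dkim', 'dmarc'].every(m => result(m) === 'pass'),
  },
}];
```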
by Muhammad Asadullah
Document Chat Bot with Automated RAG System

This workflow creates a conversational assistant that can answer questions based on your Google Drive documents. It automatically processes various file types and uses Retrieval-Augmented Generation (RAG) to provide accurate answers based on your document content.

How It Works
- Monitors Google Drive for New Documents: Automatically detects when files are created or updated in designated folders.
- Processes Multiple File Types: Handles PDFs, Excel spreadsheets, and Google Docs.
- Builds a Knowledge Base: Converts documents into searchable vector embeddings stored in Supabase.
- Provides a Chat Interface: Users can ask questions about their documents through a web interface.
- Retrieves Relevant Information: Uses advanced RAG techniques to find and present the most relevant information.

Setup Steps (Estimated time: 25-30 minutes)
- API Credentials: Connect your OpenAI API key for text processing and embeddings.
- Google Drive Integration: Set up Google Drive triggers to monitor specific folders.
- Supabase Configuration: Configure the Supabase vector database for document storage.
- Chat Interface Setup: Deploy the web-based chat interface using the provided webhook.

The workflow automatically chunks documents into manageable segments (a chunking sketch follows below), generates embeddings, and stores them in a vector database for efficient retrieval. When users ask questions, the system finds the most relevant document sections and uses them to generate accurate, contextual responses.
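To illustrate the chunking step, here is a minimal sketch of fixed-size chunking with overlap in a Code node. The chunk size and overlap values are assumptions, not the template's actual settings:

```javascript
// Hypothetical document chunker: fixed-size character windows with
// overlap, so context isn't lost at chunk boundaries.
const CHUNK_SIZE = 1000;   // characters per chunk (assumed value)
const OVERLAP = 200;       // characters shared between neighbors (assumed)

const text = $json.text ?? '';
const chunks = [];

for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

// One item per chunk, ready to feed an embeddings node.
return chunks.map((chunk, index) => ({ json: { chunk, index } }));
```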
by PUQcloud
Setting up n8n workflow

Overview
The Docker MinIO WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps

Install the Required Workflow on n8n
You have two options:
- Option 1: Use the Latest Version from the n8n Marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual Installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server. The main section allows you to add custom parameters to the server block of the proxy server configuration file. The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here you can define custom headers and other parameters specific to the root location.

Bash Scripts
Docker containers and all related procedures on the server are managed by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string (a normalization sketch follows below). All scripts are located in elements directly connected to the SSH element. You have full control over every script and can modify or execute it as needed.
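Since the scripts return either JSON or a plain string, the n8n side needs to handle both shapes. A minimal sketch of a Code node that normalizes the SSH element's output, assuming the SSH node exposes it as `json.stdout` (the property name may differ in your n8n version):

```javascript
// Hypothetical normalization of an SSH script's output: the scripts
// return either a JSON document or a plain string.
const stdout = ($json.stdout ?? '').trim();

let result;
try {
  result = JSON.parse(stdout);      // structured JSON response
} catch {
  result = { message: stdout };     // plain-string response, wrapped
}

return [{ json: result }];
```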
by Adnan Tariq
🛡 CyberScan – AI-Powered Vulnerability Scanner with Nessus, OpenAI, and Google Sheets

👤 Who's it for
Security teams, DevOps engineers, vulnerability analysts, and automation builders who want to eliminate repetitive Nessus scan parsing and manual reporting and add AI-based risk triage. Designed for orgs following NIST CSF or CISA KEV compliance guidelines.

⚙️ How it works / What it does
- Runs scheduled or manual scans via the Nessus API.
- Processes scan results and extracts asset + vulnerability data.
- Uses a custom AI-based risk metric (LEV) to triage findings (see the triage sketch below) into:
  - 🚨 Expert review
  - ✅ Self-healing
  - 🕵️ Monitoring
- Automatically sends email alerts for critical CVEs.
- Exports daily summaries to Google Sheets (or your own BI system).
- Maps to NIST CSF (Identify, Protect, Detect, Respond, Recover).

🧰 How to set up
- Nessus: Add your Nessus API credentials and instance URL.
- Google Sheets: Authenticate your Google account.
- OpenAI / LLM: Use your API key if adding LLM triage or rewrite prompts.
- Email: Update SMTP credentials and the alert recipient address.
- Set your targets: Adjust asset ranges or scan UUIDs as needed.

⚠️ All setup steps are explained in sticky notes inside the workflow.

📋 Requirements
- Nessus Essentials (free) or Nessus Pro with API access.
- SMTP service (e.g. Gmail, Mailgun, SendGrid).
- Google Sheets OAuth2 credentials.
- Optional: OpenAI or another LLM provider for LEV scoring and CVE insights.

🛠 How to customize the workflow
- Swap Google Sheets for Airtable, Supabase, or PostgreSQL.
- Change the scan logic or asset list to fit your internal network scope.
- Adjust the AI scoring logic to match internal CVSS thresholds or KEV tags.
- Expand the alerting logic to include Slack, Discord, or webhook triggers.

🔒 No sensitive data included. All credentials and sheet links are placeholders.
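A minimal sketch of what the LEV-style triage routing might look like in a Code node. The score thresholds and field names here are illustrative assumptions; the template's actual LEV metric and cutoffs may differ:

```javascript
// Hypothetical triage: route each finding into one of three buckets
// based on an AI-assigned risk score. Thresholds are assumptions.
const EXPERT_REVIEW = 0.8;   // 🚨 score at or above: escalate to a human
const SELF_HEALING = 0.4;    // ✅ below this: safe to auto-remediate

return $input.all().map(item => {
  const { cve, levScore = 0 } = item.json;
  let bucket;
  if (levScore >= EXPERT_REVIEW) bucket = 'expert_review';
  else if (levScore < SELF_HEALING) bucket = 'self_healing';
  else bucket = 'monitoring';   // 🕵️ everything in between
  return { json: { cve, levScore, bucket } };
});
```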
by Mario
Purpose
This workflow synchronizes three entities from Notion to Clockify, allowing tracked time to be linked to client-related projects or tasks.

Demo & Explanation

How it works
- On every run, active Clients, Projects, and Tasks are retrieved from both Notion and Clockify, then compared by the Clockify ID, which is stored back in Notion for reference.
- Any differences found are then applied to Clockify (see the sketch below).
- If an item has been archived or closed in Notion, it is also marked as archived in Clockify.
- All entities are processed sequentially, since they are hierarchically related to each other.
- By default, this workflow runs once per day or when called via webhook (e.g. embedded in a Notion button).

Prerequisites
- A set of Notion databases with a specific structure is required to use this workflow.
- You can either start with this Notion Template or adapt your system based on the requirements described in the big yellow sticky note of this workflow template.

Setup
- Clone the workflow and select the corresponding credentials.
- Follow the instructions given in the yellow sticky notes.
- Activate the workflow.

Related workflows:
- Backup Clockify to Github based on monthly reports
- Prevent simultaneous workflow executions with Redis
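The core comparison step is a diff keyed on the Clockify ID. A minimal sketch follows; the node names (`Get Notion Items`, `Get Clockify Items`) and the `clockifyId`/`name` fields are illustrative assumptions, not the template's actual names:

```javascript
// Hypothetical Notion-to-Clockify diff: decide which items to create
// or update in Clockify. Node and field names are assumptions.
const notionItems = $('Get Notion Items').all().map(i => i.json);
const clockifyItems = $('Get Clockify Items').all().map(i => i.json);

const clockifyById = new Map(clockifyItems.map(c => [c.id, c]));

// No stored Clockify ID yet: the item must be created in Clockify.
const toCreate = notionItems.filter(n => !n.clockifyId);

// Stored ID exists: update when the name or archive state diverges.
const toUpdate = notionItems.filter(n => {
  const c = clockifyById.get(n.clockifyId);
  return c && (c.name !== n.name || (n.archived && !c.archived));
});

return [{ json: { toCreate, toUpdate } }];
```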
by Didac Fernandez
Nova AI Content Marketing Agent - LinkedIn & Facebook Automation

This n8n template demonstrates how to create a complete AI-powered social media content creation and scheduling system that generates platform-optimized posts for LinkedIn and Facebook with custom images and human approval workflows.

Possible use cases:
- Generate a full week of social media content from a single brand brief
- Create platform-specific content that maintains brand voice consistency
- Automate image generation with AI while maintaining quality control
- Schedule approved content across multiple social platforms
- Track and organize all content in centralized spreadsheets

How it works
- The automation starts with a form submission collecting 10 brand variables (name, industry, demographics, etc.).
- The Nova AI Agent analyzes the brand information and generates 6 distinct social media posts (3 LinkedIn professional, 3 Facebook community-focused).
- Content is split by platform and routed to separate image generation workflows (see the sketch below).
- Google Imagen 4 Ultra creates custom visuals for each post with platform-specific aspect ratios.
- Each generated image is sent to Slack for human approval via interactive forms.
- If feedback is provided, NanoBanana AI edits the image based on natural-language instructions.
- Approved images are uploaded to Google Drive with organized naming conventions.
- All content data is logged to Google Sheets with image URLs and scheduling information.
- Final posts are scheduled via the Late API to their respective social platforms.
- The workflow loops through each post individually for quality control.

Requirements
- OpenRouter API credentials for GPT-5 Mini access
- Replicate API key for Google Imagen 4 Ultra and NanoBanana
- Slack OAuth2 credentials with bot permissions
- Google Drive OAuth2 credentials
- Google Sheets API access
- GetLate API key connected to LinkedIn and Facebook accounts
- Perplexity API for research enhancement (optional)

HOW TO USE

STEP 1 - Setup Form and Brand Variables
- Configure the Form Trigger webhook URL for brand data collection.
- Update the 10 form fields with your specific industry placeholders.
- Test the form submission to ensure data flows correctly.

STEP 2 - Configure AI Services
- Add your OpenRouter API credentials to both Chat Model nodes.
- Add your Replicate API key to the HTTP Header Auth credential.
- Configure Perplexity API credentials for research functionality.
- Set up custom session keys for memory management.

STEP 3 - Setup Approval Workflow
- Add Slack OAuth2 credentials to both "Send message and wait" nodes.
- Update the Slack channel ID to your preferred approval channel.
- Configure the custom form fields for approval/feedback collection.

STEP 4 - Configure Storage and Scheduling
- Add Google Drive OAuth2 credentials and update the target folder ID.
- Add Google Sheets credentials and update the spreadsheet ID.
- Get your Late API key from getlate.dev and add it to the HTTP Header Auth credential.
- Update the Late accountId in both Schedule Post nodes with your platform IDs.

STEP 5 - Customize Content Strategy
- Modify the Nova system prompt to match your brand voice requirements.
- Adjust the visual style requirements in the AI Agent configuration.
- Update posting-date logic and timezone settings as needed.
- Test the complete workflow with sample brand data.
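The split-by-platform step maps naturally to a Switch node or a small Code node. A minimal Code-node sketch, assuming each post object carries a `platform` field set by the agent; the field name and the aspect-ratio values are assumptions, not the template's actual configuration:

```javascript
// Hypothetical platform router: fan posts out with the aspect ratio
// each image-generation branch expects. All values are assumptions.
const posts = $input.all().map(item => item.json);

return posts.map(post => ({
  json: {
    ...post,
    // LinkedIn favors landscape; Facebook feed posts work well square.
    aspectRatio: post.platform === 'linkedin' ? '1.91:1' : '1:1',
  },
}));
```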
by David Ashby
🛠️ Google Cloud Natural Language Tool MCP Server

Complete MCP server exposing all Google Cloud Natural Language Tool operations to AI agents. Zero configuration needed – all 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator – all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Google Cloud Natural Language Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (example below)
• Native Integration: Uses the official n8n Google Cloud Natural Language Tool node with full error handling

📋 Available Operations (1 total)
Every possible Google Cloud Natural Language Tool operation is included:

🔧 Document (1 operation)
• Analyze sentiment

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Google Cloud Natural Language Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Google Cloud Natural Language Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
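To illustrate the `$fromAI()` mechanism: inside a tool node, a parameter can be filled in by the calling agent at run time instead of being hard-coded. A sketch of what such an expression might look like (the parameter name and description here are illustrative, not taken from this workflow):

```javascript
// n8n expression in a tool node parameter: the connected AI agent
// supplies the value at call time. Name/description are examples.
{{ $fromAI('text', 'The document text to analyze for sentiment', 'string') }}
```

When an agent calls the tool, n8n substitutes the value the model provided for that named parameter.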
by Edisson Garcia
🚀 Message-Batching Buffer Workflow (n8n)

This workflow implements a lightweight message-batching buffer using Redis for temporary storage and a JavaScript consolidation function to merge messages. It collects incoming user messages per session, waits for a configurable inactivity window or batch-size threshold, consolidates the buffered messages via custom code, then clears the buffer and returns the combined response – all without external LLM calls.

🔑 Key Features
- **Redis-backed buffer** queues incoming messages per context_id.
- **Centralized Config Parameters** node to adjust thresholds and timeouts in one place.
- **Dynamic wait time** based on message length (configurable minWords, waitLong, waitShort).
- **Batch trigger** fires on inactivity timeout or when buffer_count ≥ batchThreshold.
- **Zero-cost consolidation** via a built-in JavaScript Function (consolidate buffer) – no GPT-4 or external API required.

⚙️ Setup Instructions

**1. Extract Session & Message**
Trigger: When chat message received (webhook) or When clicking 'Test workflow' (manual). Map inputs: set the variables context_id and message in a Set node named Mock input data (for testing) or a proper mapping node in production.

**2. Config Parameters**
Add a Set node Config Parameters with:

```
minWords: 3        # Word threshold
waitLong: 10       # Timeout (s) for long messages
waitShort: 20      # Timeout (s) for short messages
batchThreshold: 3  # Messages to trigger batch early
```

All downstream nodes reference these JSON values dynamically.

**3. Determine Wait Time**
Node: get wait seconds (Code):

```javascript
const msg = $json.message || '';
const wordCount = msg.split(/\s+/).filter(w => w).length;
// Thresholds come from the Config Parameters node.
const { minWords, waitLong, waitShort } = items[0].json;
// Short messages get the longer window, since more input may follow.
const waitSeconds = wordCount < minWords ? waitShort : waitLong;
return [{ json: { context_id: $json.context_id, message: msg, waitSeconds } }];
```

**4. Buffer Message in Redis**
- Buffer messages: LPUSH buffer_in:{{$json.context_id}} with payload {text, timestamp}.
- Set buffer_count increment: INCR buffer_count:{{$json.context_id}} with TTL {{$json.waitSeconds + 60}}.
- Set last_seen: record the last_seen:{{$json.context_id}} timestamp with the same TTL.

**5. Check & Set Waiting Flag**
Get waiting_reply: if null, set waiting_reply to true with TTL {{$json.waitSeconds}}; otherwise exit.

**6. Wait for Inactivity**
WaitSeconds (webhook): pauses for {{$json.waitSeconds}} seconds before batch evaluation.

**7. Check Batch Trigger**
Get last_seen and get buffer_count. IF (now - last_seen) ≥ waitSeconds * 1000 OR buffer_count ≥ batchThreshold, proceed; otherwise use a Wait node to retry.

**8. Consolidate Buffer**
consolidate buffer (Code):

```javascript
const j = items[0].json;
const raw = Array.isArray(j.buffer) ? j.buffer : [];
// Buffer entries may arrive as JSON strings; parse and drop bad ones.
const buffer = raw.map(x => {
  try { return typeof x === 'string' ? JSON.parse(x) : x; } catch { return null; }
}).filter(Boolean);
// Restore chronological order, deduplicate, and merge into one message.
buffer.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
const texts = buffer.map(e => e.text?.trim()).filter(Boolean);
const unique = [...new Set(texts)];
const message = unique.join(' ');
return [{ json: { context_id: j.context_id, message } }];
```

**9. Cleanup & Respond**
- Delete the Redis keys buffer_in, buffer_count, waiting_reply, and last_seen for the context_id.
- Return the consolidated message to the user via your chat integration.

🛠 Customization Guidance
- **Adjust thresholds** by editing the Config Parameters node.
- **Change concatenation** (e.g. line breaks) by modifying the join separator in the consolidation code.
- **Add filters** (e.g. ignore empty or system messages) inside the consolidation Function.
- **Monitor performance**: for very high volume, consider sharding Redis keys by date or user segments.

© 2025 Innovatex • Automation & AI Solutions • innovatexiot.carrd.co • LinkedIn
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

How It Works: Facebook Lead Ads to KlickTipp Integration
This workflow automatically transfers lead information submitted via Facebook Lead Ads into KlickTipp. It is ideal for automating course registrations or similar campaigns, enabling targeted email sequences based on user input.

Data Handling: Lead data from Facebook is received via webhook, matched to KlickTipp's custom fields, and the contact is tagged for segmentation and automation.

Key Features

Webhook Trigger for Facebook Lead Ads
Captures new lead form submissions from Facebook, including:
- Name
- Email address
- Chosen course
- Preferred payment method
- Optional comments

Data Mapping & Validation
- Maps Facebook field values to pre-defined custom fields in KlickTipp (see the sketch below).

Subscriber Management in KlickTipp
- Adds or updates leads as subscribers in KlickTipp.
- Includes mapping to custom fields such as:
  - Facebook_Leads_Ads_Kursauswahl
  - Facebook_Leads_Ads_Zahlungsweise
  - Facebook_Leads_Ads_Kommentar
- Assigns relevant tags for automated campaign triggers.

Setup Instructions

1. Prepare KlickTipp Custom Fields
Before using the workflow, create the following custom fields in KlickTipp under Contacts → Custom fields:

| Name | Data type |
| - | - |
| Facebook_Leads_Ads_Kommentar | Zeile (text line) |
| Facebook_Leads_Ads_Kursauswahl | Zeile (text line) |
| Facebook_Leads_Ads_Zahlungsweise | Zeile (text line) |

2. Facebook Lead Ads Setup
- Create a lead form in Facebook Ads Manager.
- Include custom fields for course interest, payment preference, and comments.

3. Set Up the Facebook Webhook in n8n
- Use the Facebook Lead Ads node to create a webhook.
- Authenticate your Facebook account.
- Choose the Page and the corresponding lead form.
- Save and activate the webhook.

4. Map Data to KlickTipp Fields
- Open the KlickTipp node and authenticate with your credentials (username & password).
- Map the fields from the Facebook webhook to the corresponding custom fields in KlickTipp.

Testing & Deployment
- Run a test: Use Meta's testing tool to generate a test lead, then run the n8n workflow once manually.
- Note: Facebook's test email (e.g. test@fb.com) is invalid, so expect an error in KlickTipp during testing. You can pin the node's output and change the address to a valid test address.

Workflow Logic
- Webhook Trigger from Facebook: Initiates the workflow upon a new lead form submission.
- Add or Update Contact in KlickTipp: Submits the mapped data to your KlickTipp account.

Benefits
- Automated lead management: No manual data transfers needed – new Facebook leads are instantly pushed to KlickTipp.
- Personalized campaigns: Segment leads based on the selected course or payment method for targeted follow-up emails.

Notes
- Customization: Adjust the field mappings in the KlickTipp node based on your lead form structure. Ensure all required fields (email, opt-in, etc.) are mapped correctly.
- Resources:
  - Use the Meta Lead Ads Testing Tool to simulate lead submissions during setup.
  - See our knowledge base article "Send Facebook Leads to KlickTipp with Make or n8n" to learn more.
  - Use the KlickTipp Community Node in n8n
  - Automate Workflows: KlickTipp Integration in n8n
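As an illustration of the mapping step, here is a minimal Code-node sketch translating a Facebook lead payload into the KlickTipp custom-field shape. The incoming `field_data` structure follows the usual Facebook Lead Ads format, and the form field names on the right are hypothetical; verify both against your actual webhook output:

```javascript
// Hypothetical mapping from a Facebook Lead Ads payload to KlickTipp
// custom fields. Verify all names against your real webhook data.
const fields = Object.fromEntries(
  ($json.field_data ?? []).map(f => [f.name, f.values?.[0]])
);

return [{
  json: {
    email: fields.email,
    Facebook_Leads_Ads_Kursauswahl: fields.course_choice,     // assumed form field name
    Facebook_Leads_Ads_Zahlungsweise: fields.payment_method,  // assumed form field name
    Facebook_Leads_Ads_Kommentar: fields.comments,            // assumed form field name
  },
}];
```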
by PUQcloud
Setting up n8n workflow

Overview
The Docker n8n WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps

Install the Required Workflow on n8n
You have two options:
- Option 1: Use the Latest Version from the n8n Marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual Installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server. The main section allows you to add custom parameters to the server block of the proxy server configuration file. The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here you can define custom headers and other parameters specific to the root location.

Bash Scripts
Docker containers and all related procedures on the server are managed by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string (a normalization sketch is shown under the Docker MinIO template above). All scripts are located in elements directly connected to the SSH element. You have full control over every script and can modify or execute it as needed.
by David Ashby
🛠️ Google Translate Tool MCP Server

Complete MCP server exposing all Google Translate Tool operations to AI agents. Zero configuration needed – all 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator – all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Google Translate Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example under the Google Cloud Natural Language template above)
• Native Integration: Uses the official n8n Google Translate Tool node with full error handling

📋 Available Operations (1 total)
Every possible Google Translate Tool operation is included:

🔧 Language (1 operation)
• Translate a language

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Google Translate Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Google Translate Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.