by James Carter
This n8n workflow automatically fetches trending news articles based on your chosen country, category, and keyword — then enriches the data with AI-powered business insights before posting a concise summary to Slack. Ideal for sales teams, executives, marketers, or anyone who wants fast, actionable news briefings directly in their Slack workspace.

Who it's for
Executives, analysts, sales teams, or marketing professionals who want curated, AI-enhanced news summaries tailored to business opportunities, risks, and trends — delivered automatically to Slack.

How it works / What it does
- A Schedule Trigger runs on a daily, weekly, or custom frequency.
- It queries the NewsAPI to retrieve top headlines by country, category, or keyword.
- Headlines are formatted and enriched with your configured query context.
- The AI model (GPT-4) analyzes the articles and summarizes key insights, categorizing them as Opportunities, Risks, or Trends.
- Finally, the summarized insights are posted directly into a Slack channel of your choice.

How to set up
1. Set your schedule frequency in the Schedule Trigger node.
2. Configure your preferred country, category, and keyword in the Inject Config node.
3. Add your NewsAPI key inside the Fetch News Articles node.
4. Connect your Slack credentials in the Post to Slack node.
5. Optional: Adjust the AI prompt for more tailored analysis.

Requirements
- A NewsAPI account to fetch headlines.
- An OpenAI API key for GPT-4 summarization.
- A Slack workspace and connected credentials via n8n.

How to customize the workflow
- Change the country, category, or keyword in the Inject Config node to focus on specific markets or sectors.
- Adjust the AI prompt in the GPT node to prioritize certain insights like ESG factors, M&A activity, or market sentiment.
- Extend the workflow to log results to Google Sheets, email summaries, or send SMS alerts.
- Replace the Schedule Trigger with a Webhook if you want to trigger summaries on demand.

This template is designed to be modular, making it easy to adapt for competitive intelligence, investment tracking, or industry news curation.
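As a rough illustration of what the Fetch News Articles step does, here is a minimal TypeScript sketch of a NewsAPI top-headlines request. The country, category, and keyword values stand in for the Inject Config settings, and NEWSAPI_KEY is a placeholder; adapt it to however your HTTP Request node is actually configured.

```typescript
// Hypothetical sketch of the "Fetch News Articles" request (values are placeholders).
const config = { country: "us", category: "business", keyword: "acquisition" };
const apiKey = process.env.NEWSAPI_KEY ?? "<your-newsapi-key>";

const url =
  "https://newsapi.org/v2/top-headlines" +
  `?country=${config.country}` +
  `&category=${config.category}` +
  `&q=${encodeURIComponent(config.keyword)}` +
  "&pageSize=10";

const response = await fetch(url, { headers: { "X-Api-Key": apiKey } });
const data = await response.json();

// Reduce each article to the fields the AI prompt actually needs.
const headlines = (data.articles ?? []).map((a: any) => ({
  title: a.title,
  source: a.source?.name,
  url: a.url,
  publishedAt: a.publishedAt,
}));

console.log(headlines);
```

A list shaped like this is easy to render into the prompt context that the GPT node summarizes into Opportunities, Risks, and Trends.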
by Jaruphat J.
Overview
This workflow automatically saves files received via the LINE Messaging API into Google Drive and logs the file details into a Google Sheet. It checks the file type against allowed types, organizes files into date-based folders and (optionally) file type–specific subfolders, and sends a reply message back to the LINE user with the file URL or an error message if the file type is not permitted.

Who is this for?
- Developers & IT Administrators: Looking to integrate LINE with Google Drive and Sheets for automated file management.
- Businesses & Marketing Teams: That want to automatically archive media files and documents received from users via LINE.
- Anyone Interested in No-Code Automation: Users who want to leverage n8n's capabilities without heavy coding.

What Problem Does This Workflow Solve?
- Automated File Organization: Files received from LINE are automatically checked for allowed file types, then stored in a structured folder hierarchy in Google Drive (by date and/or file type).
- Data Logging: Each file upload is recorded in a Google Sheet, providing an audit trail with file names, upload dates, URLs, and types.
- Instant Feedback: Users receive an immediate reply via LINE confirming the file upload, or an error message if the file type is not allowed.

What This Workflow Does
1. Receives Incoming Requests: A webhook node ("LINE Webhook Listener") listens for POST requests from LINE, capturing file upload events and associated metadata.
2. Configuration Loading: A Google Sheets node ("Get Config") reads configuration data (e.g., parent folder ID, allowed file types, folder organization settings, and credentials) from a pre-defined sheet.
3. Data Merging & Processing: The "Merge Event and Config Data" and "Process Event and Config Data" nodes merge and structure the event data with configuration settings. A "Determine Folder Info" node calculates folder names based on the configuration. If Store by Date is enabled, it uses the current date (or a specified date) as the folder name. If Store by File Type is also enabled, it uses the file's type (e.g., image) for a subfolder.
4. Folder Search & Creation: The workflow searches for an existing date folder ("Search Date Folder"). If the date folder is not found, an IF node ("Check Existing Date Folder") routes to a "Create Date Folder" node. Similarly, for file type organization, the workflow uses a "Search FileType Folder" node (with appropriate conditions) to look for a subfolder, or creates it if not found. The "Set Date Folder ID" and "Set Image Folder ID" nodes capture and merge the resulting folder IDs. Finally, the "Config final ParentId" node sets the final target folder ID based on the configuration conditions:
   - Store by Date: TRUE, Store by File Type: TRUE → use the file type folder (inside the date folder).
   - Store by Date: TRUE, Store by File Type: FALSE → use the date folder.
   - Store by Date: FALSE, Store by File Type: TRUE → use the file type folder.
   - Store by Date: FALSE, Store by File Type: FALSE → use the Parent Folder ID from the configuration.
5. File Retrieval and Validation: An HTTP Request node ("Get File Binary Content") fetches the file's binary data from the LINE API. A Function node ("Validate File Type") checks whether the file's MIME type is included in the allowed list (e.g., "audio|image|video"). If not, it throws an error that is captured for the reply.
6. File Upload and Logging: The "Upload File to Google Drive" node uploads the validated binary file to the final target folder. After a successful upload, the "Log File Details to Google Sheet" node logs details such as file name, upload date, Google Drive URL, and file type into a designated Google Sheet.
7. User Feedback: The "Check Reply Enabled Flag" node checks whether the reply feature is enabled. Finally, the "Send LINE Reply Message" node sends a reply message back to the LINE user with either the file URL (if the upload was successful) or an error message (if the file type was not allowed).

Setup Instructions
1. Google Sheets Setup: Create a Google Sheet with two sheets:
   - config: Include columns for Parent Folder Path, Parent Folder ID, Store by Date (boolean), Store by File Type (boolean), Allow File Types (e.g., "audio|image|video"), CurrentDate, Reply Enabled, and CHANNEL ACCESS TOKEN.
   - fileList: Create headers for File Name, Date Uploaded, Google Drive URL, and File Type.
   For an example of the required format, check this Google Sheets template: Google Sheet Template
2. Google Drive Credentials: Set up and authorize your Google Drive credentials in n8n.
3. LINE Messaging API: Configure your LINE Developer Console webhook to point to the n8n Webhook URL ("Line Chat Bot" node). Ensure you have the proper Channel Access Token stored in your Google Sheet.
4. n8n Workflow Import: Import the provided JSON file into your n8n instance. Verify node connections and update any credential references as needed.
5. Test the Workflow: Send a test message via LINE to confirm that files are properly validated, uploaded, and logged, and that reply messages are sent.

How to Customize This Workflow
- Allowed File Types: Adjust the "Validate File Type" field in your config sheet to control which file types are accepted.
- Folder Structure: Modify the logic in the "Determine Folder Info" and subsequent folder nodes to change how folders are structured (e.g., use different date formats or add additional categorization).
- Logging: Update the "Log File Details to Google Sheet" node if you wish to log additional file metadata.
- Reply Messages: Customize the reply text in the "Send LINE Reply Message" node to include more detailed information or instructions.
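To make the folder-selection logic in step 4 and the validation in step 5 concrete, here is a minimal Function-node-style sketch. The property names (storeByDate, storeByFileType, allowFileTypes, the folder IDs) are illustrative assumptions, not the exact names used in the template's nodes.

```typescript
// Hypothetical sketch. Property names are assumptions; adapt them to your own config sheet.
interface Config {
  parentFolderId: string;
  storeByDate: boolean;
  storeByFileType: boolean;
  allowFileTypes: string; // e.g. "audio|image|video"
}

function resolveTargetFolder(
  cfg: Config,
  dateFolderId: string,
  fileTypeFolderId: string
): string {
  if (cfg.storeByFileType) return fileTypeFolderId; // date+type or type-only cases
  if (cfg.storeByDate) return dateFolderId;         // date-only case
  return cfg.parentFolderId;                        // neither: fall back to the configured parent
}

function validateFileType(cfg: Config, mimeType: string): void {
  // "audio|image|video" becomes ["audio", "image", "video"]
  const allowed = cfg.allowFileTypes.split("|").map((t) => t.trim());
  const mainType = mimeType.split("/")[0]; // "image/jpeg" -> "image"
  if (!allowed.includes(mainType)) {
    throw new Error(`File type "${mimeType}" is not allowed.`);
  }
}
```

The three-way fallback mirrors the decision table in step 4, and the thrown error is what the reply branch would pick up to tell the LINE user the file type was rejected.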
by David Olusola
AI-Powered Airtable Contact Manager

Overview
The AI-Powered Airtable Contact Manager is an intelligent n8n workflow that enables AI assistants to seamlessly manage contact data in Airtable through natural language interactions. Using the Model Context Protocol (MCP), this workflow bridges the gap between conversational AI and structured data management.

How It Works
This workflow creates a powerful AI-to-database interface that allows users to manage their Airtable contacts through natural language commands. Here's the complete flow:

1. AI Interaction Layer
- Users interact with an AI assistant using natural language.
- Examples: "Add John Doe to contacts", "Find all contacts assigned to Sarah", "Show me contact details for ID xyz"

2. MCP Server Trigger
- The AI assistant processes the user's request and identifies the needed operation.
- It sends structured commands to the n8n workflow via the MCP (Model Context Protocol).
- It acts as the intelligent routing system for all contact operations.

3. Airtable Operations
The workflow provides four core contact management functions:
- 🔍 Get Record: Retrieves specific contact details using a Record ID. Input: Record ID from the AI. Output: Complete contact information (Name, Email, Assignee, Status).
- ➕ Create Record: Adds new contacts to the database. Input: Contact details (Name, Email, Assignee). Output: New record with an auto-generated ID and default status.
- 🗑️ Delete Record: Removes contacts permanently. Input: Record ID to delete. Output: Confirmation of successful deletion.
- 🔎 Search Records: Finds contacts using flexible criteria. Input: Airtable formula for filtering. Output: All matching contact records.

4. Smart Data Handling
- The workflow uses AI-powered field mapping with $fromAI() functions.
- It automatically extracts relevant information from natural language requests.
- It maintains data integrity with proper field validation.

Setup Steps

Prerequisites
- n8n instance (cloud or self-hosted)
- Airtable account with API access
- MCP-compatible AI system

Step 1: Airtable Preparation
1. Create an Airtable Base: Set up a new base or use an existing one. Note your Base ID (starts with app).
2. Set Up the Contact Table: Create a table with these fields: Name (Single line text), email (Email), Assignee (Single line text), Status (Single select: Todo, In progress, Done). Note your Table ID (starts with tbl).
3. Generate an API Token: Go to https://airtable.com/developers/web/api/introduction, create a personal access token with full permissions, and save this token securely.

Step 2: n8n Configuration
1. Import the Workflow: Copy the enhanced JSON workflow and import it into your n8n instance.
2. Configure Airtable Credentials: Go to Credentials in n8n, create a new "Airtable Personal Access Token" credential, enter your Airtable API token, and name it "full access" (or update the credential references in the workflow).
3. Update Base and Table IDs: Replace YOUR_AIRTABLE_BASE_ID with your actual Base ID (starts with app) and YOUR_AIRTABLE_TABLE_ID with your actual Table ID (starts with tbl). Update these in all four Airtable nodes.
4. Update Credential References: Replace your-airtable-credential-id with your actual credential ID, or rename your credential to match "Airtable API Token".

Step 3: MCP Integration
1. Configure the MCP Server: Set up your MCP server to communicate with n8n, replace your-webhook-path-here and your-webhook-id-here with your actual webhook details, and configure your AI system to use this workflow.
2. Update Node IDs (Optional): The workflow uses placeholder node IDs for privacy. n8n will auto-generate new IDs when you import, so no action is needed unless you're referencing specific nodes.
3. Test the Integration: Activate the workflow in n8n, test each operation through your AI interface, and verify that data flows correctly between the AI and Airtable.

Step 4: Customization (Optional)
- Add More Fields: Extend the Airtable schema as needed, update the Create Record node field mappings, and modify the Search Record filters.
- Enhanced Error Handling: Add error handling nodes, set up notifications for failed operations, and implement retry logic for reliability.

Usage Examples
Once set up, users can interact with the system naturally:
- Creating Contacts: "Add Sarah Johnson with email sarah@company.com, assign to Mike" / "Create a new contact for David Wilson, email david@startup.io"
- Finding Contacts: "Show me all contacts assigned to Jennifer" / "Find contacts with status 'In progress'" / "Search for contacts with gmail addresses"
- Managing Records: "Get details for contact rec123ABC" / "Delete the contact with ID rec456DEF" / "Update John's status to Done"

Benefits
- **Natural Language Interface**: No technical knowledge required
- **Automated Data Entry**: Reduces manual work and errors
- **Flexible Searching**: Find contacts using any criteria
- **AI-Powered**: Leverages advanced language understanding
- **Scalable**: Easily extend with more operations
- **Integrated**: Works seamlessly with existing Airtable workflows

Technical Notes
- Uses n8n's $fromAI() function for intelligent data extraction
- Implements MCP for standardized AI-to-automation communication
- Supports Airtable's formula syntax for complex searches
- Maintains security through proper credential management
- Designed for high reliability with error handling capabilities

This workflow transforms contact management from a manual, time-consuming task into an effortless, conversational experience powered by AI.
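As an illustration of the Search Records operation, here is a hedged sketch of the underlying Airtable REST call with a filterByFormula expression. The field names mirror the table described above; the request shape follows Airtable's public records API, but verify it against your own base before relying on it.

```typescript
// Hypothetical sketch of a "Search Records" call against the contact table described above.
const baseId = "YOUR_AIRTABLE_BASE_ID";   // starts with "app"
const tableId = "YOUR_AIRTABLE_TABLE_ID"; // starts with "tbl"
const token = process.env.AIRTABLE_TOKEN ?? "<personal-access-token>";

// Airtable formula filtering on the fields defined in Step 1.
const formula = `AND({Assignee} = 'Sarah', {Status} = 'In progress')`;

const url =
  `https://api.airtable.com/v0/${baseId}/${tableId}` +
  `?filterByFormula=${encodeURIComponent(formula)}`;

const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
const { records } = await res.json();

// Each record carries an id plus the mapped fields (Name, email, Assignee, Status).
for (const record of records ?? []) {
  console.log(record.id, record.fields);
}
```

In the workflow itself, the formula string is the piece the AI fills in via $fromAI(), so natural language like "contacts assigned to Sarah that are in progress" resolves to an expression like the one above.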
by n8n Team
This workflow sends a message to a Discord channel when a new row is added or an existing row is updated in a Google Sheet. The message includes all data rows from the Google Sheet.

Prerequisites
- Discord account and Discord credentials.
- Google account and Google credentials.

How it works
A Code node turns the retrieved Google Sheet data into a custom message, which is then sent to the Discord channel specified in the Discord node.

Setup
This workflow requires that you set up a Discord webhook and have an existing Google Sheet with data. See how to set up a Discord webhook here.
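A minimal sketch of what the Code node might do with the retrieved sheet rows before handing them to Discord is shown below. The column names (Name, Status) and the message layout are assumptions, so adjust them to the headers of your own sheet.

```typescript
// Hypothetical n8n Code node: turn incoming sheet rows into one Discord-ready message.
// Column names below are placeholders; use the headers from your own sheet.
const rows = $input.all().map((item) => item.json);

const lines = rows.map(
  (row, i) => `${i + 1}. ${row.Name ?? "(no name)"} | status: ${row.Status ?? "n/a"}`
);

const content = [`📋 Google Sheet update (${rows.length} rows):`, ...lines].join("\n");

// Discord limits a single message to 2000 characters, so truncate defensively.
return [{ json: { content: content.slice(0, 2000) } }];
```

The resulting content field maps directly onto the message body expected by a Discord webhook or the Discord node.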
by Mark Shcherbakov
Video Guide
I prepared a detailed guide explaining how to build an AI-powered meeting assistant that provides real-time transcription and insights during virtual meetings. Youtube Link

Who is this for?
This workflow is ideal for business professionals, project managers, and team leaders who require effective transcription of meetings for improved documentation and note-taking. It's particularly beneficial for those who conduct frequent virtual meetings across various platforms like Zoom and Google Meet.

What problem does this workflow solve?
Transcribing meetings manually can be tedious and prone to error. This workflow automates the transcription process in real time, ensuring that key discussions and decisions are accurately captured and easily accessible for later review, thus enhancing productivity and clarity in communications.

What this workflow does
The workflow employs an AI-powered assistant to join virtual meetings and capture discussions through real-time transcription. Key functionalities include:
- Automatic joining of meetings on platforms like Zoom, Google Meet, and others, with the ability to provide real-time transcription.
- Integration with transcription APIs (e.g., AssemblyAI) to deliver seamless and accurate capture of dialogue.
- Structuring and storing transcriptions efficiently in a database for easy retrieval and analysis.
- Real-Time Transcription: The assistant captures audio during meetings and transcribes it in real time, allowing participants to focus on discussions.
- Keyword Recognition: Key phrases can trigger specific actions, such as noting important points or making prompts to the assistant.
- Structured Data Management: The assistant maintains a database of transcriptions linked to meeting details for organized storage and quick access later.

Setup

Preparation
1. Create a Recall.ai API key.
2. Set up a Supabase account and create a table:
   create table public.data (
     id uuid not null default gen_random_uuid (),
     date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
     input jsonb null,
     output jsonb null,
     constraint data_pkey primary key (id)
   ) tablespace pg_default;
3. Create an OpenAI API key.

Development
1. Bot Creation: Use a node to create the bot that will join meetings. Provide the meeting URL and set transcription options within the API request.
2. Authentication: Configure authentication settings via a Bearer token for interacting with your transcription service.
3. Webhook Setup: Create a webhook to receive real-time transcription updates, ensuring timely data capture during meetings.
4. Join Meeting: Set the bot to join the specified meeting and actively listen to capture conversations.
5. Transcription Handling: Combine transcription fragments into cohesive sentences and manage dialog arrays for coherence.
6. Trigger Actions on Keywords: Set up keyword recognition that can initiate requests to the OpenAI API for additional interactions based on captured dialogue.
7. Output and Summary Generation: Produce insights and summary notes from the transcriptions that can be stored back into the database for future reference.
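The Transcription Handling step above is essentially a small reducer over incoming transcript fragments. Below is a minimal, hypothetical Code-node-style sketch of that idea; the fragment shape (speaker plus text) is an assumption about the webhook payload, not the exact format your transcription provider sends.

```typescript
// Hypothetical sketch: merge transcript fragments into per-speaker dialog lines.
// The fragment shape below is an assumption; adapt it to your provider's webhook payload.
interface Fragment {
  speaker: string;
  text: string;
}

function mergeFragments(fragments: Fragment[]): Fragment[] {
  const dialog: Fragment[] = [];
  for (const f of fragments) {
    const last = dialog[dialog.length - 1];
    if (last && last.speaker === f.speaker) {
      // Same speaker keeps talking: append to the previous sentence.
      last.text = `${last.text} ${f.text}`.trim();
    } else {
      dialog.push({ speaker: f.speaker, text: f.text.trim() });
    }
  }
  return dialog;
}

// Example: two fragments from the same speaker collapse into one line.
const merged = mergeFragments([
  { speaker: "Alice", text: "Let's move the release" },
  { speaker: "Alice", text: "to next Friday." },
  { speaker: "Bob", text: "Sounds good." },
]);
console.log(merged);
```

Keeping the dialog array in this consolidated form makes the later keyword-recognition and summary steps much easier to prompt against.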
by Geoffrey Saxena
👤 Who is this for?
This workflow is great for n8n users who want to prevent duplicate or overlapping workflow runs. If you're a developer, DevOps engineer, or automation enthusiast managing tasks like database updates, syncing tools, or hitting rate-limited APIs, this one's for you.

🧩 What problem does this solve?
In the real world, automations can get triggered at the same time—whether that's because of multiple webhook calls, overlapping schedules, or retries. And when two workflows try to do the same thing at once (like updating a record or syncing data), it can cause conflicts, data corruption, or wasted API calls. This workflow helps avoid that problem by using Redis as a lock system, so only one instance runs at a time. Think of it like putting up a "🚧 Workflow in Progress" sign while your logic is running.

⚙️ What this workflow does
- When the workflow starts, it tries to set a Redis key as a lock with a short expiry.
- If the lock is free: your main business logic runs, and once it's done, the lock is cleared.
- If the lock is already taken (i.e., another run is in progress): the workflow will wait and retry a few times.
- If a duplicate request shows up while one is already being processed: it skips that duplicate to avoid unnecessary work.
- You can customize both the timeout and retry logic to match your needs.

🛠️ Setup guide
To use this template:
1. You'll need access to a Redis instance (either self-hosted or managed like Upstash, Redis Cloud, etc).
2. Set up your Redis credentials in the n8n Redis node.
3. Swap out the webhook node with your actual trigger or logic.
4. Adjust the lock timeout to match how long your task typically takes.

> 💡 Bonus Tip: Use this pattern wherever you need idempotency or want to avoid duplicate processing.

🧪 Example use case
Let's say you have a workflow that syncs ClickUp tickets to Google Sheets. It runs daily at 9 AM and updates tickets, adds notes, and makes sure nothing is missed. But what if two runs start at the same time? Or someone triggers a manual sync while the scheduled one is still working? By wrapping that whole sync inside this Redis locking template, you can make sure it only runs one at a time, saving your APIs (and your sanity).
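For readers who want to see the pattern outside of n8n nodes, here is a minimal sketch of the same acquire/retry/release logic using the ioredis client. The key name, TTL, and retry counts are arbitrary placeholders rather than the template's exact Redis node settings.

```typescript
// Minimal sketch of the lock pattern, assuming an ioredis client. Values are placeholders.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");
const LOCK_KEY = "lock:clickup-sheet-sync"; // hypothetical lock name
const LOCK_TTL_SECONDS = 300;               // should exceed the task's typical runtime
const MAX_RETRIES = 5;

async function runWithLock(task: () => Promise<void>): Promise<void> {
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    // SET ... NX only succeeds if the key does not exist yet; EX adds an expiry
    // so a crashed run cannot hold the lock forever.
    const acquired = await redis.set(LOCK_KEY, "locked", "EX", LOCK_TTL_SECONDS, "NX");
    if (acquired === "OK") {
      try {
        await task(); // your main business logic
      } finally {
        await redis.del(LOCK_KEY); // release the lock when done
      }
      return;
    }
    // Lock is held by another run: wait a bit, then retry.
    await new Promise((resolve) => setTimeout(resolve, 10_000));
  }
  console.log("Another run is still in progress, skipping this duplicate trigger.");
}

// Example usage: runWithLock(async () => { /* sync ClickUp tickets to Google Sheets */ });
```

The expiry on the key is the important design choice: even if a run crashes before clearing the lock, the next scheduled run can still acquire it once the TTL lapses.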
by Karam Ghazzi
Description 📄
Turn your Slack workspace into a smart AI-powered HelpDesk using this workflow. This automation listens to Slack messages and uses an AI assistant (powered by OpenAI or any other LLM) to respond to employee questions about HR, IT, or internal policies by referencing your internal documentation (such as the Policy Handbook). If the answer isn't available, it can optionally email the relevant department (HR or IT) and ask them to update the handbook. It remembers recent messages per user, cleans up intermediate responses to keep Slack threads tidy, and ensures your team gets consistent and helpful answers—without manually searching docs or escalating simple questions. Perfect for growing teams who want to streamline internal support using n8n, Slack, and AI.

How it works 🛠️
This workflow turns n8n into a Slack-based HelpDesk assistant powered by AI. It listens to Slack messages using the Events API, detects whether a real user is asking a question, and responds using OpenAI (or another LLM of your choice). Here's how it works step by step:
1. Webhook Trigger: The workflow starts when a message is posted in Slack via the Events API. It filters out any messages from bots to avoid loops.
2. Identify the User: It fetches the full Slack profile of the user who posted the message and stores their name.
3. Send Receipt Message: An initial message is sent to the user saying, "I'm on it!", confirming their request is being processed.
4. AI Response Handling: The message is processed using the OpenAI Chat model (GPT-4o by default). Before responding, it checks whether the query matches any HR or IT policy from the Policy Handbook. If the question can't be answered based on internal data, it can optionally alert the HR or IT department via Gmail (after user confirmation).
5. Memory Retention: It keeps track of the last 5 interactions per user using Simple Memory, so it remembers previous context in a Slack conversation.
6. Cleanup and Final Reply: It deletes the initial receipt message and sends a final, clean response to the user.

How to use 🚀
1. Clone the Workflow: Download or import the JSON workflow into your n8n instance.
2. Connect Your Credentials:
   - Slack API (for messaging)
   - Google Sheets API (for department contact info)
   - Google Docs API (for the Policy Handbook)
   - Gmail API (optional, for notifying departments)
   - OpenAI or another AI model
3. Slack Setup: Set up a Slack App and enable the Events API. Subscribe to message events and point them to the Webhook URL generated by the workflow.
4. Customize Responses: Edit the initial and final Slack message nodes if you want to personalize the wording. Swap out the LLM (ChatGPT) with your preferred model in the AI Agent node.
5. Adjust AI Behavior: Tune the prompt logic in the "AI Agent" node if you want the AI to behave differently or access different data sources.
6. Expand Memory or Integrations: Use external databases to store longer histories. Integrate with tools like Asana, Notion, or CRM platforms for further automation.

Requirements 📋
- n8n (self-hosted or cloud)
- Slack Developer Account & App
- OpenAI (or any LLM provider)
- Google Sheets with department contact details
- Google Docs containing the Policy Handbook
- Gmail account (optional, for email alerts)
- Knowledge of Slack Events API setup
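To illustrate steps 1 and 3, here is a hedged sketch of filtering out bot events and posting the receipt message with Slack's chat.postMessage API. The payload fields follow Slack's Events API conventions, but treat the exact shapes as assumptions and verify them against your own event subscription.

```typescript
// Hypothetical sketch of the bot filter and receipt message. Verify payload fields
// against your own Slack event subscription before relying on them.
const slackToken = process.env.SLACK_BOT_TOKEN ?? "<xoxb-your-bot-token>";

interface SlackMessageEvent {
  type: string;
  user?: string;
  bot_id?: string;   // present when the message was posted by a bot
  channel: string;
  text: string;
  ts: string;
}

async function handleEvent(event: SlackMessageEvent): Promise<void> {
  // Step 1: ignore bot-authored messages so the assistant never replies to itself.
  if (event.bot_id || !event.user) return;

  // Step 3: send the "I'm on it!" receipt as a threaded reply.
  await fetch("https://slack.com/api/chat.postMessage", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${slackToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      channel: event.channel,
      thread_ts: event.ts,
      text: "I'm on it! Give me a moment to check the Policy Handbook.",
    }),
  });
}
```

Posting the receipt in the same thread (thread_ts) is what lets the cleanup step later find and delete it, keeping the channel tidy once the final answer arrives.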
by Ranjan Dailata
Who this is for
The LinkedIn Profile Extract and JSON Resume Builder is a powerful workflow that scrapes professional profile data from LinkedIn using Bright Data's infrastructure, then transforms that data into a clean, structured JSON resume using Google Gemini. The workflow is ideal for automating resume parsing, candidate profiling, or integrating into recruiting platforms.

This workflow is tailored for:
- HR professionals & recruiters automating resume screening
- Talent acquisition platforms enriching candidate profiles
- Developers & AI builders creating resume-parsing AI pipelines
- Data scientists working on labor market analytics
- Growth hackers profiling prospects via public data

What problem is this workflow solving?
Parsing resumes or LinkedIn profiles into machine-readable formats is often a manual, error-prone process. Most scraping tools either fail due to anti-bot protections or return unstructured HTML that's hard to work with. This workflow solves that by:
- Using Bright Data's Web Unlocker for reliable, CAPTCHA-free LinkedIn scraping
- Extracting clean text and structured profile data via the Google Gemini LLM
- Automatically generating a standards-compliant JSON Resume and skills list
- Sending the resume to webhooks or storing it for downstream usage

What this workflow does
1. Accepts a LinkedIn profile URL and required metadata (Bright Data zone, webhook)
2. Scrapes the LinkedIn profile using Bright Data Web Unlocker
3. Extracts clean content and skills using the Google Gemini LLM
4. Builds a JSON-formatted resume following the JSON Resume schema
5. Sends the JSON resume via Webhook Notification
6. Persists the output by saving the file to disk

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the LinkedIn profile URL, the Bright Data zone, and the webhook notification URL. For testing purposes, you can obtain a webhook URL using https://webhook.site/

How to customize this workflow to your needs
- Add Language Translation: Insert a translation LLM node to support multilingual profiles.
- Generate PDF Resumes: Convert the JSON to formatted PDF resumes using an HTML-to-PDF module.
- Push to ATS or CRM: Add integration nodes to pipe data into applicant tracking systems (ATS), CRMs, or databases.
- Use Alternative LLMs: Swap Gemini with OpenAI or Anthropic Claude if preferred.
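For reference, the JSON Resume format the Gemini step targets is a plain JSON document with sections such as basics, work, education, and skills. The sketch below shows a trimmed example of that shape as a TypeScript object; the field selection is illustrative, not the full schema.

```typescript
// Trimmed, illustrative example of the JSON Resume shape the workflow produces.
// See https://jsonresume.org/schema for the full specification.
const resume = {
  basics: {
    name: "Jane Doe",
    label: "Senior Data Engineer",
    email: "jane.doe@example.com",
    summary: "Data engineer with 8 years of experience in analytics platforms.",
    location: { city: "Berlin", countryCode: "DE" },
  },
  work: [
    {
      name: "Acme Analytics",
      position: "Senior Data Engineer",
      startDate: "2021-03",
      summary: "Built streaming pipelines and a company-wide metrics layer.",
    },
  ],
  education: [
    { institution: "TU Berlin", area: "Computer Science", studyType: "MSc", endDate: "2016" },
  ],
  skills: [{ name: "Data Engineering", keywords: ["Python", "SQL", "Airflow"] }],
};

console.log(JSON.stringify(resume, null, 2));
```

Prompting Gemini to emit exactly this structure is what keeps the output usable by ATS integrations or PDF generators further down the pipeline.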
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Legal Case Research Extractor is a powerful automated workflow designed for legal tech teams, researchers, law firms, and data scientists focused on transforming unstructured legal case data into actionable, structured insights.

This workflow is tailored for:
- Legal Researchers automating case law data mining
- Litigation Support Teams handling large volumes of case records
- LawTech Startups building AI-powered legal research assistants
- Compliance Analysts extracting case-specific insights
- AI Developers working on legal NLP, summarization, and search engines

What problem is this workflow solving?
Legal case data is often locked in semi-structured or raw HTML formats, scattered across jurisdiction-specific websites. Manually extracting and processing this data is tedious and inefficient. This workflow automates:
- Extraction of legal case data via Bright Data's powerful MCP infrastructure
- Parsing of HTML into clean, readable text using the Google Gemini LLM
- Structuring and delivering the output through a webhook and file storage

What this workflow does
- Input: The Set the Legal Case Research URL node is responsible for setting the legal case URL for the data extraction.
- Bright Data MCP Data Extractor: The Bright Data MCP Client For Legal Case Research node is responsible for the legal case extraction via the Bright Data MCP tool scrape_as_html.
- Case Extractor: The Google Gemini based Case Extractor is responsible for producing a paginated list of cases.
- Loop through Legal Case URLs: Receives a collection of legal case links to process. Each URL represents a different case from a target legal website.
- Bright Data MCP Scraping: Utilizes Bright Data's scrape_as_html MCP mode to retrieve the raw HTML content of each legal case.
- Google Gemini LLM Extraction: Transforms raw HTML into clean, structured text and performs additional information extraction if required (e.g., case summary, court, jurisdiction, etc.).
- Webhook Notification: Sends the extracted legal case content to a configurable webhook URL, enabling downstream processing or storage in legal databases.
- Binary Conversion & File Persistence: Converts the structured text to binary format and saves the final response to disk for archival or further processing.

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and complete the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup
1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials for the MCP Client (STDIO) account to connect with the Bright Data MCP Server. Make sure to set the Bright Data API_TOKEN in the Environments field as API_TOKEN=<your-token> (a minimal example of this configuration follows the customization notes below).

How to customize this workflow to your needs
- Target New Legal Portals: Modify the legal case input URLs to scrape from different state or federal case databases.
- Customize LLM Extraction: Modify the prompt to extract specific fields: case number, plaintiff, case summary, outcome, legal precedents, etc. Add a summarization step if needed.
- Enhance Loop Handling: Integrate with a Google Sheet or API to dynamically fetch case URLs. Add error handling logic to skip failed cases and log them.
- Improve Security & Compliance: Redact sensitive information before sending via webhook. Store processed case data in encrypted cloud storage.
- Output Formats: Save as PDF, JSON, or Markdown. Enable output to cloud storage (S3, Google Drive) or legal document management systems.
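A minimal, assumed example of the MCP Client (STDIO) settings from setup step 6 is shown below as a configuration object. Only the @brightdata/mcp package name and the API_TOKEN environment variable come from the steps above; the surrounding structure and field names are assumptions that may differ from the exact n8n credential form.

```typescript
// Hypothetical MCP Client (STDIO) settings for the Bright Data MCP server.
// Only "@brightdata/mcp" and API_TOKEN are taken from the setup steps above;
// the surrounding structure is an illustrative assumption.
const brightDataMcpClientConfig = {
  command: "npx",
  args: ["-y", "@brightdata/mcp"],
  env: {
    API_TOKEN: "<your-bright-data-api-token>",
  },
};

export default brightDataMcpClientConfig;
```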
by Dr. Firas
AI-powered WhatsApp booking system with instant SMS confirmations

Who is this for?
This workflow is designed for solo entrepreneurs, consultants, coaches, clinics, or any business that handles client appointments and wants to automate the entire scheduling experience via WhatsApp — without the need for live agents.

What problem is this workflow solving?
Responding to inbound messages, collecting booking details, suggesting available times, and sending reminders can be a huge time drain. This workflow eliminates manual handling by:
- Automating WhatsApp conversations with an AI assistant
- Booking appointments directly into Cal.com
- Sending timely SMS reminders before appointments
It ensures you never miss a lead or a follow-up — even while you sleep.

What this workflow does
From a single WhatsApp message, the workflow:
1. Triggers via a WhatsApp webhook
2. Uses GPT-4 to handle conversation flow and qualify the prospect
3. Collects name, email, and selected service
4. Calls the Cal.com API to fetch available time slots
5. Books the appointment and stores it in Google Sheets
6. Sends a confirmation message via WhatsApp
7. Periodically scans for upcoming appointments
8. Sends SMS reminders to clients 2 hours before their session

Setup
1. Connect your Webhook node to a WhatsApp API (e.g., 360dialog, Twilio, or Ultramsg)
2. Add your OpenAI API key for the GPT-4 nodes
3. Configure your Cal.com API key and set your calendar ID
4. Link your Google Sheets with fields like: name, email, date, time, status, reminder_sent
5. Connect your SMS service (e.g., sms77) with API credentials
6. Adjust the schedule in the reminder node as needed

How to customize this workflow to your needs
- **Change the language or tone of the AI assistant** by editing the system prompt in the GPT node
- **Filter available time slots** by service, team member, or duration
- **Modify the reminder timing** (e.g., 1 hour before, 24h before, etc.)
- **Add conditional logic** to route users to different booking flows based on their responses
- **Integrate additional CRMs** or notification channels like email or Slack

📄 Documentation: Notion Guide
Need help customizing? Contact me for consulting and support: Linkedin / Youtube
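To make the reminder step concrete, here is a hedged Code-node-style sketch of the scan for appointments starting within the next two hours. The column names match the Google Sheets fields listed in the setup (date, time, status, reminder_sent), while the date format and status values are assumptions you should adapt.

```typescript
// Hypothetical sketch of the reminder scan. Column names follow the sheet fields listed
// in the setup; the date/time format ("YYYY-MM-DD" + "HH:mm") and status values are assumptions.
interface AppointmentRow {
  name: string;
  email: string;
  date: string;          // e.g. "2024-07-01"
  time: string;          // e.g. "14:30"
  status: string;
  reminder_sent: string; // "yes" once the SMS has gone out
}

function needsReminder(rows: AppointmentRow[], now = new Date()): AppointmentRow[] {
  const twoHoursMs = 2 * 60 * 60 * 1000;
  return rows.filter((row) => {
    if (row.reminder_sent === "yes" || row.status === "cancelled") return false;
    const start = new Date(`${row.date}T${row.time}:00`);
    const delta = start.getTime() - now.getTime();
    // Due if the session starts within the next two hours (and has not started yet).
    return delta > 0 && delta <= twoHoursMs;
  });
}

// Each returned row would then be passed to the SMS node and marked reminder_sent = "yes".
```

Marking reminder_sent back in the sheet after each SMS is what keeps the periodic scan from sending the same client duplicate reminders.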
by Tom Cao
🔐 Advanced SSL Health Monitor

👤 Who is this for?
This workflow is designed for DevOps engineers, IT administrators, and security professionals who need comprehensive SSL certificate monitoring and health assessment across multiple domains — featuring dual verification and professional reporting without relying on expensive monitoring services.

🧩 What It Does
1. Daily Trigger runs the workflow every morning for proactive monitoring.
2. URL Collection fetches the list of website URLs to monitor from your data source.
3. Dual SSL Analysis:
   - Free SSL Assessment Script — get it from sysadmin-toolkit on GitHub
   - SSL-Checker.io API — external verification for cross-validation
4. Comprehensive Health Check:
   - Certificate expiration monitoring (customizable threshold)
   - SSL configuration security assessment
   - Protocol support analysis (TLS 1.3, 1.2, deprecated protocols)
   - Cipher suite strength evaluation
   - Vulnerability scanning (POODLE, BEAST, etc.)
   - Compliance checking (PCI DSS, NIST, FIPS)
5. Smart Alert System sends Discord notifications when:
   - Certificates expire within the threshold (default: 30 days)
   - SSL configuration issues are detected (weak ciphers, deprecated protocols)
   - Security vulnerabilities are found
   - Compliance standards are not met
   - The grade drops below an acceptable level (configurable)

🎯 Key Features
- 🔄 **Dual Verification**: Cross-checks results between the internal scanner and the external API
- 📊 **SSL Labs-Style Grading**: A+ to F rating system with detailed analysis
- 🛡️ **Security Assessment**: Vulnerability detection and compliance checking
- 📱 **Discord Integration**: Rich embed notifications with color-coded alerts

⚙️ Setup Instructions
1. Data Source: Configure your URL source from Notion. Ensure it contains a URL column with the domains to monitor.
2. Credentials: Set up a Discord webhook for alert notifications and configure any required API credentials for data sources.
3. Customize Thresholds:
   - Expiration Alert: Days before expiry (default: 30 days)
   - Grade Threshold: Minimum acceptable SSL grade (default: B)
   - Alert Severity: Choose which issues trigger notifications
4. Advanced Configuration:
   - Modify vulnerability checks based on your security requirements
   - Adjust compliance standards for your industry needs
   - Customize Discord message formatting and alert channels

🧠 Technical Notes
- **Dual-Check Reliability**: Combines the custom Bubobot scanner with ssl-checker.io for maximum accuracy
- **No Vendor Lock-in**: Uses free public APIs and open-source tools
- **Professional Reporting**: Generates SSL Labs-quality assessments
- **Security-First Approach**: Comprehensive vulnerability and compliance checking
- **Flexible Alerting**: Discord integration with rich formatting and conditional logic

This workflow provides a comprehensive SSL security monitoring solution that rivals enterprise-grade tools while remaining completely open-source and free.
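As a rough illustration of the certificate-expiration part of the health check, here is a minimal Node.js sketch that reads a domain's certificate over TLS and computes the days remaining. It covers only expiry, not the grading, vulnerability, or compliance checks, and the 30-day threshold is just the default mentioned above.

```typescript
// Minimal expiry check only; grading, cipher, and vulnerability analysis are out of scope here.
import * as tls from "node:tls";

const EXPIRY_THRESHOLD_DAYS = 30; // default alert threshold from the setup section

function daysUntilExpiry(host: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port: 443, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      if (!cert || !cert.valid_to) return reject(new Error(`No certificate for ${host}`));
      const msLeft = new Date(cert.valid_to).getTime() - Date.now();
      resolve(Math.floor(msLeft / (1000 * 60 * 60 * 24)));
    });
    socket.on("error", reject);
  });
}

// Example usage against a placeholder domain.
daysUntilExpiry("example.com").then((days) => {
  if (days <= EXPIRY_THRESHOLD_DAYS) {
    console.log(`⚠️ Certificate expires in ${days} days; send a Discord alert.`);
  } else {
    console.log(`✅ Certificate healthy: ${days} days remaining.`);
  }
});
```

In the workflow, a result like this would be cross-checked against the ssl-checker.io response before deciding whether to fire the Discord notification.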
by Sobek
📝 DESCRIPTION OF THE WORKFLOW
This workflow connects Salesforce and Geotab to streamline fleet tracking for field service jobs (Work Orders). When a new Work Order is created in Salesforce (with a 'New' status and valid coordinates), it creates a circular geofence zone in Geotab and updates the Work Order with the zone ID. If geolocation is missing, an alert email is sent to a dedicated address. The workflow uses a Salesforce Outbound Message to trigger an n8n webhook. It includes robust credential handling and optional logic to skip or notify on bad data.

Use Cases:
- Automating vehicle geofence setup for service visits
- Enhancing CRM-to-fleet system synchronisation
- Enforcing work order data quality via alerts

Integrations Used:
- Salesforce
- Geotab API
- Microsoft Outlook (or any SMTP-compatible service)

Tags: geotab, salesforce, fleet management, gps tracking, field service, crm, automation, webhook, integration

ADDITIONAL RESOURCES

🔗 Salesforce
- Salesforce Login
- Salesforce Setup (Admin Console): https://login.salesforce.com/, then click the "Setup" gear icon
- Outbound Messages Documentation
- Salesforce Developer Documentation
- Salesforce Workbench (API Testing Tool)

🔗 Geotab
- Geotab Login (MyGeotab)
- Geotab Developer Portal
- Geotab API Explorer
- Geotab SDK (JavaScript Samples)
- Geotab Support Centre
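To show how a circular geofence can be derived from a Work Order's coordinates, here is a hedged sketch that approximates the circle as a polygon of latitude/longitude points, which is the kind of shape a Geotab zone is built from. The radius, point count, and coordinate ordering are illustrative assumptions, not the template's exact node configuration.

```typescript
// Hypothetical helper: approximate a circular geofence as a polygon of lat/lon points.
// Radius and point count are illustrative; the resulting points would be passed to
// Geotab's Zone creation call in whatever shape your Geotab node expects.
interface GeoPoint {
  x: number; // longitude
  y: number; // latitude
}

function circleToPolygon(
  latitude: number,
  longitude: number,
  radiusMeters = 200,
  numPoints = 24
): GeoPoint[] {
  const earthRadius = 6_371_000; // meters
  const points: GeoPoint[] = [];
  for (let i = 0; i < numPoints; i++) {
    const bearing = (2 * Math.PI * i) / numPoints;
    const dLat = (radiusMeters * Math.cos(bearing)) / earthRadius;
    const dLon =
      (radiusMeters * Math.sin(bearing)) /
      (earthRadius * Math.cos((latitude * Math.PI) / 180));
    points.push({
      y: latitude + (dLat * 180) / Math.PI,
      x: longitude + (dLon * 180) / Math.PI,
    });
  }
  return points;
}

// Example: a 200 m geofence around a Work Order's coordinates (placeholder values).
const zonePoints = circleToPolygon(51.5074, -0.1278);
console.log(zonePoints.length, "points generated for the geofence polygon");
```

A polygon of two dozen points is usually a close enough approximation of a circle for vehicle arrival/departure detection, while keeping the zone payload small.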