by Jimleuk
This n8n template helps customer support teams reclaim capacity by automating the resolution of long-lived and forgotten JIRA issues.

How it works
A Schedule Trigger runs daily, checks for long-lived unresolved issues, and imports them into the workflow.
Each issue is handled as a separate subworkflow via an Execute Workflow node, which allows parallel processing.
A report is generated from the issue's comment history, allowing the issue to be classified by AI to determine its state and progress.
If the issue is determined to be resolved, sentiment analysis is performed to track customer satisfaction. If sentiment is negative, a Slack message is sent to escalate; otherwise the issue is closed automatically.
If no response has been initiated, an AI agent attempts to resolve the issue itself by searching similar resolved issues or the Notion database. If a solution is found, it is posted to the issue and the issue is closed.
If the issue is blocked and waiting for responses, a reminder message is added.

How to use
This template searches for JIRA issues that are older than 7 days and not in the "Done" status. Ensure some issues meet these criteria, or adjust the search query to suit. Works best if you frequently have long-lived issues that need resolving.
Ensure the Notion tool is configured so that it does not read documents you didn't intend it to, i.e. private and/or internal documentation.

Requirements
JIRA for issue management
OpenAI for LLM
Slack for notifications

Customising this workflow
Why not try classifying issues as they are created? One use case is quality control, such as ensuring reporting criteria are adhered to, summarising and rephrasing issues for easier reading, or adjusting priority.
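The "How to use" section above relies on a JQL search for issues older than 7 days that are not yet Done. Below is a minimal sketch, assuming Jira Cloud's REST API v2 with basic auth and an API token; the exact query configured in the template's JIRA node may differ.

```typescript
// Hedged sketch: find issues older than 7 days that are not Done.
// Endpoint path and field names follow the standard Jira Cloud REST API;
// the base URL and credentials below are placeholders.
const JIRA_BASE = "https://your-domain.atlassian.net"; // assumption: Jira Cloud
const jql = 'status != "Done" AND created <= -7d ORDER BY created ASC';

async function findStaleIssues(): Promise<void> {
  const auth = Buffer.from("email@example.com:API_TOKEN").toString("base64");
  const res = await fetch(
    `${JIRA_BASE}/rest/api/2/search?jql=${encodeURIComponent(jql)}&maxResults=50`,
    { headers: { Authorization: `Basic ${auth}`, Accept: "application/json" } },
  );
  const data = await res.json();
  for (const issue of data.issues ?? []) {
    // Each issue found here would become one subworkflow execution in n8n.
    console.log(issue.key, issue.fields.summary);
  }
}
```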
by Darryn Balanco
This workflow automates the management of DigitalOcean Droplet snapshots by listing all droplets, filtering based on the number of snapshots, and deleting excess snapshots before creating new ones. It ensures your droplet snapshots stay organized and within a manageable limit, preventing unnecessary storage costs due to an excess of snapshots.

Who is this for?
This workflow is perfect for users managing DigitalOcean Droplets and looking to automate the process of snapshot creation and cleanup to save on storage costs and maintain efficient resource management. It’s useful for DevOps teams, cloud administrators, or any developer leveraging DigitalOcean for their infrastructure.

What problem is this workflow solving?
When managing multiple DigitalOcean Droplets, snapshots can quickly accumulate, taking up space and increasing storage costs. Manually deleting and creating snapshots can be time-consuming and inefficient. This automation solves this problem by automating the snapshot management process, ensuring that no more than a defined number of snapshots are kept per droplet.

What this workflow does
Runs every 48 hours: The workflow is triggered by a cron node that runs every 48 hours, ensuring timely snapshot management.
List all droplets: The workflow retrieves all droplets in the DigitalOcean account.
Retrieve snapshots: For each droplet, the workflow retrieves a list of existing snapshots.
Filter snapshots: If the number of snapshots exceeds 4, the workflow filters for snapshots that need to be deleted.
Delete snapshots: Excess snapshots are automatically deleted based on the filter criteria.
Create new snapshot: After cleaning up, the workflow creates a new snapshot for each droplet, ensuring that backups are always up to date.

Setup
DigitalOcean API Key: You’ll need to configure the HTTP Request nodes with your DigitalOcean API key. This key is required for authenticating requests to list droplets, retrieve snapshots, delete snapshots, and create new ones.
Snapshot Threshold: By default, the workflow is set to keep no more than 4 snapshots per droplet. This can be adjusted by modifying the filter node conditions.
Set Execution Frequency: The cron node is set to run every 48 hours, but you can adjust the timing to suit your needs.

How to customize this workflow
**Adjust Snapshot Limit**: Change the value in the filter node if you want to keep more or fewer snapshots.
**Modify Run Frequency**: The workflow runs every 48 hours by default. You can change the frequency in the cron node to run more or less often.
**Enhance with Notifications**: You can add a notification node (e.g., Slack or email) to alert you when snapshots are deleted or created.

Workflow Summary
This workflow automates the management of DigitalOcean Droplet snapshots by keeping the number of snapshots under a defined limit, deleting the oldest ones, and ensuring new snapshots are created at regular intervals.
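For reference, here is a hedged sketch of the prune-then-snapshot logic the HTTP Request nodes implement, assuming the DigitalOcean v2 REST API and the default limit of 4 snapshots per droplet. The snapshot naming scheme and environment variable are assumptions.

```typescript
// Minimal sketch: delete the oldest snapshots so that, after a new snapshot is
// created, each droplet has at most SNAPSHOT_LIMIT snapshots.
const DO_TOKEN = process.env.DO_TOKEN; // assumption: API token supplied via env
const SNAPSHOT_LIMIT = 4;
const headers = { Authorization: `Bearer ${DO_TOKEN}`, "Content-Type": "application/json" };

async function pruneAndSnapshot(): Promise<void> {
  const { droplets } = await (
    await fetch("https://api.digitalocean.com/v2/droplets", { headers })
  ).json();
  for (const droplet of droplets) {
    const { snapshots } = await (
      await fetch(`https://api.digitalocean.com/v2/droplets/${droplet.id}/snapshots`, { headers })
    ).json();
    // Oldest first; delete enough to leave room for one new snapshot.
    const excess = snapshots
      .sort((a: any, b: any) => a.created_at.localeCompare(b.created_at))
      .slice(0, Math.max(0, snapshots.length - (SNAPSHOT_LIMIT - 1)));
    for (const snap of excess) {
      await fetch(`https://api.digitalocean.com/v2/snapshots/${snap.id}`, { method: "DELETE", headers });
    }
    // Create a fresh snapshot for this droplet.
    await fetch(`https://api.digitalocean.com/v2/droplets/${droplet.id}/actions`, {
      method: "POST",
      headers,
      body: JSON.stringify({ type: "snapshot", name: `auto-${droplet.name}-${Date.now()}` }),
    });
  }
}
```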
by Darryn Balanco
This workflow automates the process of gathering LinkedIn advice articles, extracting their content, and generating unique contributions for each article using an AI model. The contributions are then posted to a Slack channel and a NocoDB database for record-keeping. The workflow is triggered weekly to ensure new articles are continuously collected and responded to.

Who is this for?
This workflow is designed for professionals, marketers, and content creators looking to boost their LinkedIn presence by regularly engaging with LinkedIn advice articles. It’s especially useful for those who want to be seen as a "thought leader" or "top voice" in their niche by contributing relevant and unique advice to trending topics.

What problem is this workflow solving?
Manually searching for relevant LinkedIn articles, reading through them, and crafting thoughtful contributions can be time-consuming. This workflow solves that by automating the process of finding new articles, extracting key content, and generating AI-powered contributions. It helps users stay consistently active on LinkedIn, contributing value to trending discussions.

What this workflow does
Triggers Weekly: The workflow is set to run every Monday at 8:00 AM.
Search Google for LinkedIn Advice Articles: Uses a predefined Google search URL to find the latest LinkedIn advice articles based on the user's area of expertise.
Extract LinkedIn Article Links: A code node extracts all LinkedIn advice article links from the search results.
Retrieve Article Content: For each article link, the workflow retrieves the HTML content and extracts the article title, topics, and existing contributions.
Generate AI-Powered Contributions: The workflow sends the extracted article content to an AI model, which generates unique, helpful advice for each topic within the article.
Post to Slack & NocoDB: The AI-generated contributions, along with the article links, are posted to a designated Slack channel and stored in a NocoDB database for future reference.

Setup
Google Search URL: Update the Google search URL with the relevant LinkedIn advice query for your field (e.g., "site:linkedin.com/advice 'marketing automation'").
Slack Integration: Connect your Slack account and specify the Slack channel where you want the contributions to be posted.
NocoDB Integration: Set up your NocoDB project to store the generated contributions along with the article titles and links.

How to customize this workflow
**Change Search Terms**: Modify the Google search URL to focus on a different LinkedIn topic or expertise area.
**Adjust Trigger Frequency**: The workflow is set to run weekly, but you can adjust the frequency by changing the schedule trigger.
**Enhance Contribution Quality**: Customize the AI model's prompt to generate contributions that align with your brand voice or content strategy.

Workflow Summary
This workflow helps users maintain a consistent presence on LinkedIn by automating the discovery of new advice articles and generating unique contributions using AI. It is ideal for professionals who want to engage with LinkedIn content regularly without spending too much time manually searching and drafting responses.
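As a rough illustration of the "Extract LinkedIn Article Links" step, the sketch below pulls advice-article URLs out of the raw search-results HTML. The regex and the input field name are assumptions; adapt them to the actual Code node and the output of the search request.

```typescript
// Hypothetical sketch of the link-extraction Code node: find LinkedIn advice
// article URLs in the HTML returned by the Google search request.
function extractAdviceLinks(html: string): string[] {
  const matches = html.match(/https:\/\/www\.linkedin\.com\/advice\/[A-Za-z0-9-]+/g) ?? [];
  // Deduplicate while preserving order, so each article is processed once.
  return Array.from(new Set(matches));
}

// Example usage inside an n8n Code node (the "data" field is an assumption
// about where the HTTP Request node puts the page body):
// return extractAdviceLinks($input.first().json.data).map(url => ({ json: { url } }));
```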
by Jez
🎯 Overview
This n8n workflow automates the process of ingesting documents from multiple sources (Google Drive and web forms) into a Qdrant vector database for semantic search capabilities. It handles batch processing, document analysis, embedding generation, and vector storage - all while maintaining proper error handling and execution tracking.

🚀 Key Features
**Dual Input Sources**: Accepts files from both Google Drive folders and web form uploads
**Batch Processing**: Processes files one at a time to prevent memory issues and ensure reliability
**AI-Powered Analysis**: Uses Google Gemini to extract metadata and understand document context
**Vector Embeddings**: Generates OpenAI embeddings for semantic search capabilities
**Automated Cleanup**: Optionally deletes processed files from Google Drive (configurable)
**Loop Processing**: Handles multiple files efficiently with Split In Batches nodes
**Interactive Chat Interface**: Built-in chatbot for testing semantic search queries against indexed documents

📋 Use Cases
**Knowledge Base Creation**: Build searchable document repositories for organizations
**Document Compliance**: Process and index legal/regulatory documents (like Fair Work documents)
**Content Management**: Automatically categorize and store uploaded documents
**Research Libraries**: Create semantic search capabilities for research papers or reports
**Customer Support**: Enable instant answers to policy and documentation questions via chat interface

🔧 Workflow Components
Input Methods
Google Drive Integration
Monitors a specific folder for new files
Processes existing files in batch mode
Supports automatic file conversion to PDF
Web Form Upload
Public-facing form for document submission
Accepts PDF, DOCX, DOC, and CSV files
Processes multiple file uploads in a single submission

Processing Pipeline
File Splitting: Separates multiple uploads into individual items
Document Analysis: Google Gemini extracts document understanding
Text Extraction: Converts documents to plain text
Embedding Generation: Creates vector embeddings via OpenAI
Vector Storage: Inserts documents with embeddings into Qdrant
Loop Control: Manages batch processing with proper state handling

Key Nodes
**Split In Batches**: Processes files one at a time with reset: false to maintain state
**Google Gemini**: Analyzes documents for context and metadata
**Langchain Vector Store**: Handles Qdrant insertion with embeddings
**HTTP Request**: Direct API calls for custom operations
**Chat Interface**: Interactive chatbot for testing vector search queries

🛠️ Technical Implementation
Batch Processing Logic
The workflow uses a clever looping mechanism:
Split In Batches with batchSize: 1 ensures single-file processing
reset: false maintains loop state across iterations
The loop continues until all files are processed

Error Handling
All nodes include continueOnFail options where appropriate
Execution logs are preserved for debugging
File deletion only occurs after successful insertion

Data Flow
Form Upload → Split Files → Batch Loop → Analyze → Insert → Loop Back
Google Drive → List Files → Batch Loop → Download → Analyze → Insert → Delete → Loop Back

📊 Performance Considerations
**Processing Time**: ~20-30 seconds per file
**Batch Size**: Set to 1 for reliability (configurable)
**Memory Usage**: Optimized for files under 10MB
**API Costs**: Uses OpenAI embeddings (text-embedding-3-large model)

🔐 Required Credentials
Google Drive OAuth2: For file access and management
OpenAI API: For embedding generation
Qdrant API: For vector database operations
Google Gemini API: For document analysis

💡 Implementation Tips
Start Small: Test with a few files before processing large batches
Monitor Costs: Track OpenAI API usage for embedding generation
Backup First: Consider archiving instead of deleting processed files
Check Collections: Ensure the Qdrant collection exists before running

🎨 Customization Options
**Change Embedding Model**: Switch to text-embedding-3-small for cost savings
**Adjust Chunk Size**: Modify text splitting parameters for different document types
**Add Metadata**: Extend the Gemini prompt to extract specific fields
**Archive vs Delete**: Replace the delete operation with a move to a "processed" folder

📈 Real-World Application
This workflow was developed to process business documents and legal agreements, making them searchable through semantic queries. It's particularly useful for organizations dealing with large volumes of regulatory documentation that need to be quickly accessible and searchable.

Chat Interface Testing
The integrated chatbot interface allows users to:
Query processed documents using natural language
Test semantic search capabilities in real-time
Verify document indexing and retrieval accuracy
Ask questions about specific topics (e.g., "What are the pay rates for junior employees?")
Get instant AI-powered responses based on the indexed content

🌟 Benefits
**Automation**: Eliminates manual document processing
**Scalability**: Handles individual files or bulk uploads
**Intelligence**: AI-powered understanding of document content
**Flexibility**: Multiple input sources and processing options
**Reliability**: Robust error handling and state management

👨💻 About the Creator
Jeremy Dawes is the CEO of Jezweb, specializing in AI and automation deployment solutions. This workflow represents practical, production-ready automation that solves real business challenges while maintaining simplicity and reliability.

📝 Notes
The workflow intelligently handles the n8n form upload pattern where multiple files create a single item with multiple binary properties (Files_0, Files_1, etc.)
The Split In Batches pattern with reset: false is crucial for proper loop execution
Direct API integration provides more control than pure Langchain implementations

🔗 Resources
Qdrant Documentation
OpenAI Embeddings
n8n Documentation
Jezweb - AI & Automation Solutions

This workflow demonstrates practical automation that bridges document management with modern AI capabilities, creating intelligent document processing systems that scale with your needs.
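To make the "Embedding Generation" and "Vector Storage" steps concrete, here is a minimal sketch assuming OpenAI's embeddings endpoint with text-embedding-3-large (as noted under API Costs) and Qdrant's REST upsert API. The collection name and payload fields are illustrative, not the template's exact configuration.

```typescript
// Minimal sketch: embed a document's text and upsert it into Qdrant.
import { randomUUID } from "node:crypto";

const OPENAI_KEY = process.env.OPENAI_API_KEY;
const QDRANT_URL = process.env.QDRANT_URL ?? "http://localhost:6333";
const COLLECTION = "documents"; // assumption: collection already exists (see Implementation Tips)

async function insertDocument(text: string, metadata: Record<string, string>): Promise<void> {
  // 1. Generate the embedding with text-embedding-3-large.
  const embRes = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-3-large", input: text }),
  });
  const vector: number[] = (await embRes.json()).data[0].embedding;

  // 2. Upsert the point into Qdrant, carrying the Gemini-extracted metadata as payload.
  await fetch(`${QDRANT_URL}/collections/${COLLECTION}/points?wait=true`, {
    method: "PUT",
    headers: { "api-key": process.env.QDRANT_API_KEY ?? "", "Content-Type": "application/json" },
    body: JSON.stringify({
      points: [{ id: randomUUID(), vector, payload: { text, ...metadata } }],
    }),
  });
}
```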
by Harsh Maniya
🤖 Universal E-Commerce AI Assistant (Shopify, WooCommerce & RAG) This powerful n8n workflow deploys a sophisticated, multi-talented AI chatbot designed to streamline your e-commerce and customer support operations. The AI assistant can intelligently understand user queries and route them to the correct specialized agent, whether it's for Shopify, WooCommerce, or general knowledge questions answered by a Retrieval-Augmented Generation (RAG) system. This template automates responses to a wide range of inquiries, from checking Shopify order statuses with GraphQL to fetching product lists from WooCommerce, and even answering general questions by looking up information in a Pinecone vector database. How It Works ⚙️ The workflow operates in a series of logical steps, starting from the moment a user sends a message. 💬 Chat Trigger: The workflow activates when a user sends a message in the n8n chat interface. It captures the user's input and a unique session ID to track the conversation. 🧠 Intelligent Routing: The user's query is first sent to a Router Agent powered by GPT-4o-mini. This agent's sole purpose is to classify the intent of the message and output one of three keywords: SHOPIFY, WOOCOMMERCE, or None of them. 🔀 Conditional Branching: Based on the Router's output, a series of IF nodes direct the conversation down one of three paths: General Queries Path Shopify Path WooCommerce Path 📚 General Queries (RAG): If the query is not about e-commerce, it's handled by a RAG agent. Embedding: The user's question is converted into a vector embedding using AWS Bedrock. Retrieval: The workflow searches a Pinecone Vector Store to find the most relevant information from your knowledge base. Generation: A GPT-4o-mini agent receives the context from Pinecone and generates a comprehensive, helpful answer. 🛍️ E-Commerce Specialists: If the query is about Shopify or WooCommerce, it's passed to a dedicated agent. Shopify Agent: This agent uses Google Gemini and has a suite of tools to manage Shopify tasks. It can Get Order info, Fetch All Products, or run complex queries using the powerful GraphQL tool. WooCommerce Agent: This agent also uses Google Gemini and is equipped with tools to Fetch Order Details and Fetch All Products from a WooCommerce store. 🗣️ Conversation Memory: Each agent (Router, General, Shopify, WooCommerce) is connected to its own Memory node. This allows the chatbot to remember previous parts of the conversation for a more natural and context-aware interaction. 🏁 Merge & Respond: All three paths converge at a final Merge node. This ensures that no matter which agent handled the request, the final answer is streamlined into a single output and sent back to the user in the chat. Nodes Used 🔗 Triggers: Chat Trigger: Starts the workflow when a chat message is received. AI & Agents: AI Agent: Four separate agents for Routing, Shopify, WooCommerce, and General Queries. OpenAI Chat Model: Uses GPT-4o-mini for the Router and General Queries agent. Google Gemini Chat Model: Uses Google Gemini for the Shopify and WooCommerce agents. Tools & Data: Shopify Tool: To get products and order information from Shopify. WooCommerce Tool: To get products and order information from WooCommerce. GraphQL Tool: For advanced, custom queries to the Shopify API. Pinecone Vector Store: To retrieve context for the RAG agent. AWS Bedrock Embeddings: To create vector embeddings for Pinecone. Logic & Memory: IF Node: To conditionally route the workflow. Merge Node: To consolidate the different branches before ending. 
Window Buffer Memory: Four nodes to provide conversational memory to each agent. Setup Guide 🛠️ To use this workflow, you'll need to configure several nodes with your own credentials and settings. 1\. AI Model Credentials OpenAI: Create an API key in your OpenAI Platform dashboard. Add this credential to the Router Model and GPT-4o-mini nodes. Google Gemini: Create an API key in your Google AI Studio dashboard. Add this credential to the Shopify Chat Model and WooCommerce Chat Model nodes. 2\. E-Commerce Platform Credentials Shopify: You will need a Shopify Access Token. Follow the n8n documentation to generate one. Add the credential to the Fetch All Products and Get Order info nodes. WooCommerce: Create API credentials from your WordPress dashboard. Add the credential to the Fetch All Products2 and Fetch Order Details nodes. 3\. RAG System Credentials (Pinecone & AWS) Pinecone: Sign up for a Pinecone account and create an API key. Add your Pinecone credentials in n8n. In the Pinecone Vector Store node, set the pineconeIndex to the name of your index. You must have a pre-existing index with data for the RAG to work. AWS: Create an AWS account and an IAM user with programmatic access to Amazon Bedrock. Add your AWS credentials in n8n. Select your AWS credentials in the AWS Bedrock Embeddings node. 4\. GraphQL Node Configuration In the GraphQL node, you must update the endpoint URL. Replace the placeholder https://{subdomain}.myshopify.com/admin/api/2025-04/graphql.json with your own Shopify store's GraphQL API endpoint.
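For orientation, here is an illustrative sketch of the kind of request the GraphQL node can send to that endpoint. The selected fields follow Shopify's public Admin GraphQL schema, but treat the exact query, shop subdomain, and token handling as assumptions to adapt to your store.

```typescript
// Hedged sketch: query the Shopify Admin GraphQL API for recent orders,
// the sort of lookup the Shopify Agent's GraphQL tool can perform.
const SHOP = "your-store"; // assumption: your Shopify subdomain
const endpoint = `https://${SHOP}.myshopify.com/admin/api/2025-04/graphql.json`;

const query = `
  query RecentOrders {
    orders(first: 5, sortKey: CREATED_AT, reverse: true) {
      edges { node { name createdAt displayFinancialStatus } }
    }
  }
`;

async function fetchRecentOrders() {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "X-Shopify-Access-Token": process.env.SHOPIFY_ACCESS_TOKEN ?? "",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }),
  });
  // Returns the edges array; the agent would summarize these for the user.
  return (await res.json()).data?.orders?.edges ?? [];
}
```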
by Greypillar
How it works
• RSS feed monitors your blog for new posts automatically
• Extracts and cleans full article content from the blog post
• AI Chain (GPT-4o) transforms content into 5 platform-optimized formats (LinkedIn, Twitter, Instagram, Email, Video)
• Unsplash API suggests relevant images for each content piece
• Slack notification alerts content team with preview of all formats
• Airtable logs everything for content calendar tracking
• Optional auto-posting to LinkedIn and Twitter (disabled by default)
• Structured output parser ensures all 5 formats are generated correctly with proper character limits

Set up steps
• Time to set up: 10-15 minutes
• Replace RSS feed URL with your blog's feed (common formats: /feed, /rss, /feed.xml)
• Get Slack channel ID for content team notifications
• Create Airtable base with 14 columns (Original_Title, Original_URL, Published_Date, LinkedIn_Post, LinkedIn_Hashtags, Twitter_Thread, Twitter_Hashtags, Instagram_Caption, Instagram_Hashtags, Email_Subject, Email_Body, Video_Script, Suggested_Images, Status)
• Add credentials: OpenAI (GPT-4o), Unsplash API, Slack OAuth2, Airtable Token
• Replace placeholder IDs in Slack and Airtable nodes
• Optional: Enable LinkedIn/Twitter auto-posting nodes and add OAuth2 credentials

What you'll need
• OpenAI API - GPT-4o access for AI content repurposing
• Unsplash API - Free tier available for image suggestions
• Slack - Standard workspace for team notifications
• Airtable - Free plan works for content tracking
• Blog with RSS feed - WordPress, Ghost, Medium, Webflow all supported
• LinkedIn/Twitter OAuth2 (optional) - For auto-posting feature

Who this is for
Content creators, marketing teams, and agencies that want to maximize content ROI by automatically repurposing blog posts into platform-specific content. Perfect for B2B companies publishing regular blog content who need consistent multi-platform presence without manual reformatting.
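As a rough guide, the structured output parser can be thought of as enforcing a schema along the lines of the hedged sketch below. The field names mirror the Airtable columns listed in the setup steps; the template's actual parser schema may differ.

```typescript
// Hypothetical shape the structured output parser enforces, so all five
// formats land in predictable fields ready to map into Airtable.
interface RepurposedContent {
  linkedin_post: string;        // long-form post, within LinkedIn's post limit
  linkedin_hashtags: string[];
  twitter_thread: string[];     // one entry per tweet, each under 280 characters
  twitter_hashtags: string[];
  instagram_caption: string;
  instagram_hashtags: string[];
  email_subject: string;
  email_body: string;
  video_script: string;         // short script for the video format
}

// Example: a downstream node could map these fields straight onto the
// Airtable columns (LinkedIn_Post, Twitter_Thread, Email_Subject, ...).
```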
by Didac Fernandez
Nova AI Content Marketing Agent - LinkedIn & Facebook Automation
This n8n template demonstrates how to create a complete AI-powered social media content creation and scheduling system that generates platform-optimized posts for LinkedIn and Facebook with custom images and human approval workflows.

Possible use cases:
Generate a full week of social media content from a single brand brief
Create platform-specific content that maintains brand voice consistency
Automate image generation with AI while maintaining quality control
Schedule approved content across multiple social platforms
Track and organize all content in centralized spreadsheets

How it works
The automation starts with a form submission collecting 10 brand variables (name, industry, demographics, etc.)
Nova AI Agent analyzes the brand information and generates 6 distinct social media posts (3 LinkedIn professional, 3 Facebook community-focused)
Content is split by platform and routed to separate image generation workflows
Google Imagen 4 Ultra creates custom visuals for each post with platform-specific aspect ratios
Each generated image is sent to Slack for human approval via interactive forms
If feedback is provided, NanoBanana AI edits the image based on natural language instructions
Approved images are uploaded to Google Drive with organized naming conventions
All content data is logged to Google Sheets with image URLs and scheduling information
Final posts are scheduled via Late API to respective social platforms
The workflow loops through each post individually for quality control

Requirements
OpenRouter API credentials for GPT-5 Mini access
Replicate API key for Google Imagen 4 Ultra and NanoBanana
Slack OAuth2 credentials with bot permissions
Google Drive OAuth2 credentials
Google Sheets API access
GetLate API key connected to LinkedIn and Facebook accounts
Perplexity API for research enhancement (optional)

HOW TO USE
STEP 1 - Setup Form and Brand Variables
Configure the Form Trigger webhook URL for brand data collection
Update the 10 form fields with your specific industry placeholders
Test the form submission to ensure data flows correctly
STEP 2 - Configure AI Services
Add your OpenRouter API credentials to both Chat Model nodes
Add your Replicate API key to the HTTP Header Auth credential
Configure Perplexity API credentials for research functionality
Set up custom session keys for memory management
STEP 3 - Setup Approval Workflow
Add Slack OAuth2 credentials to both "Send message and wait" nodes
Update the Slack channel ID to your preferred approval channel
Configure the custom form fields for approval/feedback collection
STEP 4 - Configure Storage and Scheduling
Add Google Drive OAuth2 credentials and update the target folder ID
Add Google Sheets credentials and update the spreadsheet ID
Get your Late API key from getlate.dev and add to HTTP Header Auth
Update the Late accountId in both Schedule Post nodes with your platform IDs
STEP 5 - Customize Content Strategy
Modify the Nova system prompt to match your brand voice requirements
Adjust the visual style requirements in the AI Agent configuration
Update posting date logic and timezone settings as needed
Test the complete workflow with sample brand data
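For orientation, here is a heavily hedged sketch of the image-generation call made against Replicate. The model slug, input parameter names, and the aspect-ratio mapping per platform are assumptions to verify against the model's page and the workflow's HTTP Request node configuration.

```typescript
// Illustrative sketch: request a custom visual from Replicate for a given post.
async function generateImage(prompt: string, platform: "linkedin" | "facebook") {
  // Platform-specific aspect ratios, as described in "How it works" (values are assumptions).
  const aspect_ratio = platform === "linkedin" ? "1:1" : "16:9";
  const res = await fetch(
    "https://api.replicate.com/v1/models/google/imagen-4-ultra/predictions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
        "Content-Type": "application/json",
        Prefer: "wait", // ask Replicate to hold the response until generation finishes
      },
      body: JSON.stringify({ input: { prompt, aspect_ratio } }),
    },
  );
  const prediction = await res.json();
  return prediction.output; // URL of the generated image, to be sent to Slack for approval
}
```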
by Lucía Maio Brioso
🧑💼 Who is this for?
This workflow is for any YouTube user who wants to bulk delete all playlists from their own channel — whether to start fresh, clean up old content, or prepare the account for a new purpose.
It’s useful for:
Creators reorganizing their channel
People transferring content to another account
Anyone who wants to avoid deleting playlists manually one by one

🧠 What problem is this workflow solving?
YouTube does not offer a built-in way to delete multiple playlists at once. If you have dozens or hundreds of playlists, removing them manually is extremely time-consuming. This workflow automates the entire deletion process in seconds, saving you hours of repetitive effort.

⚙️ What this workflow does
Connects to your YouTube account
Fetches all playlists you’ve created (excluding system playlists)
**Deletes them one by one** automatically

> ⚠️ This action is irreversible. Once a playlist is deleted, it cannot be recovered. Use with caution.

🛠️ Setup
🔐 Create a YouTube OAuth2 credential in n8n for your channel.
🧭 Assign the credential to both YouTube nodes.
✅ Click “Test workflow” to execute.

> 🟨 By default, this workflow deletes everything. If you want to be more selective, see the customization tips below.

🧩 How to customize this workflow to your needs
✅ Add a confirmation flag: Insert a Set node with a custom field like confirm_delete = true, and follow it with an IF node to prevent accidental execution.
✂️ Delete only some playlists: Add a Filter node after fetching playlists — you can match by title, ID, or keyword (e.g. only delete playlists containing “old”).
🛑 Add a pause before deletion: Insert a Wait or NoOp node to give you a moment to cancel before it runs.
🔁 Adapt to scheduled cleanups: Use a Cron trigger if you want to periodically clear temporary playlists.
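Under the hood, the two YouTube nodes perform the equivalent of the following sketch against the YouTube Data API v3. It assumes an OAuth2 access token with the YouTube scope; in n8n the credential handles authentication for you.

```typescript
// Minimal sketch: list the channel's own playlists page by page and delete each one.
const API = "https://www.googleapis.com/youtube/v3/playlists";

async function deleteAllPlaylists(accessToken: string): Promise<void> {
  const headers = { Authorization: `Bearer ${accessToken}` };
  let pageToken = "";
  do {
    // mine=true returns only playlists you created (system playlists are not included).
    const url = `${API}?part=id,snippet&mine=true&maxResults=50&pageToken=${pageToken}`;
    const page = await (await fetch(url, { headers })).json();
    for (const playlist of page.items ?? []) {
      // Irreversible: each DELETE permanently removes the playlist.
      await fetch(`${API}?id=${playlist.id}`, { method: "DELETE", headers });
      console.log(`Deleted: ${playlist.snippet.title}`);
    }
    pageToken = page.nextPageToken ?? "";
  } while (pageToken);
}
```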
by Sam Nesler
Syncs assignments and completion states back and forth between Canvas LMS and a Notion database. Automatically triggers every 2 hours during the school day by default (meaning 7 times a day), but also supports manual refreshing via webhooks.

Setup
You'll need a few things to get started:

A Canvas API key. You can generate one by going to your Canvas account settings and clicking on the "New Access Token" button. The URL looks like https://canvas.wisc.edu/profile/settings
You'll also need to replace URLs in Canvas nodes with your institution's domain, unless you're a student at UW-Madison. Canvas nodes are all the HTTP Request nodes except the one labelled "OpenAI Categorization", which is an OpenAI node and will require a key in a later step.

A Notion integration token. You can find this by going to your Notion integrations page and clicking "Create new integration". You can make it an "Internal Integration".

A Notion database to sync to. I made a template for use with the workflow, but you can use any database that has the following fields:
Status (status): Status with at least the options "Not Started" and "Completed" - assignments start out "Not Started", and are marked "Completed" when they are submitted on Canvas.
Estimate (select): Select with at least the options "XS", "S", "M", "L", "XL" - this is where the estimated time to complete the assignment will be stored. Even if you don't use AI, they'll start out as "M"
Priority (select): Select with at least the options "Could Do", "Should Do", "Must Do" - assignments start out "Should Do"
ID (text): this is where the ID of the assignment will be stored. We use this to sync without having a database on the server
Due Date (date): this is where the due date of the assignment will be stored
Class (text): this is where the name of the class will be stored
Link (URL): this is where the link to the assignment will be stored

The ID of the Notion database you want to sync to. You can find this by clicking "Share" in the top right of your database and copying the link. The ID is the part of the link that comes after https://www.notion.so/ and before ?v=. So for https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4, the ID would be 1976e99d91128076b034e7379464560f.

An OpenAI key for assignment length estimation (or disable that node).

Manual Refreshing
Embed the production URL from the Webhook Trigger inside a "toggle list" or "toggle heading" inside Notion, then expand the heading to refresh, like so:
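As a reference for adapting the Canvas nodes to your institution, here is a hedged sketch of the kind of requests they make: list active courses, then fetch each course's assignments along with the current user's submission state (which drives the "Completed" status in Notion). The base URL and the include[] parameter follow the public Canvas REST API, but verify them against your instance.

```typescript
// Hedged sketch of the Canvas HTTP Request nodes' behavior.
const CANVAS_BASE = "https://canvas.wisc.edu"; // replace with your institution's domain
const headers = { Authorization: `Bearer ${process.env.CANVAS_TOKEN}` };

async function fetchAssignments(): Promise<void> {
  const courses = await (
    await fetch(`${CANVAS_BASE}/api/v1/courses?enrollment_state=active`, { headers })
  ).json();
  for (const course of courses) {
    const assignments = await (
      await fetch(
        `${CANVAS_BASE}/api/v1/courses/${course.id}/assignments?include[]=submission&per_page=100`,
        { headers },
      )
    ).json();
    for (const a of assignments) {
      // Submitted assignments are the ones marked "Completed" in the Notion database.
      const completed = Boolean(a.submission?.submitted_at);
      console.log(course.name, a.id, a.name, a.due_at, completed);
    }
  }
}
```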
by Solomon
This n8n workflow automates lead extraction from Google Maps, enriches data with AI, and stores results for cold outreach. It uses the Bright Data community node and Bright Data MCP for scraping and AI message generation.

How it works
Form Submission: The user provides the Google Maps starting location, keyword and country.
Bright Data Scraping: The Bright Data community node triggers a Maps scraping job, monitors progress, and downloads results.
AI Message Generation: Uses Bright Data MCP with LLMs to create a personalized cold call script and talking points for each lead.
Database Storage: Enriched leads and scripts are upserted to Supabase.

How to use
Set up all the credentials, create your Postgres table and submit the form. The rest happens automatically.

Requirements
LLM account (OpenAI, Gemini…) for API usage.
Bright Data account for API and MCP usage.
Supabase account (or other Postgres database) to store information.
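Here is a minimal sketch of the "Database Storage" step, assuming a Supabase table named leads with a unique place_id column. The table and column names are illustrative; match them to the Postgres table you create before running the workflow.

```typescript
// Hedged sketch: upsert enriched leads into Supabase so re-runs update rather than duplicate.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

interface Lead {
  place_id: string;             // assumed unique key from the Maps scrape
  name: string;
  phone: string | null;
  address: string;
  cold_call_script: string;     // generated by the Bright Data MCP + LLM step
}

async function upsertLeads(leads: Lead[]): Promise<void> {
  const { error } = await supabase.from("leads").upsert(leads, { onConflict: "place_id" });
  if (error) throw new Error(`Supabase upsert failed: ${error.message}`);
}
```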
by David Olusola
How It Works
The workflow is an automated appointment reminder system built on n8n. Here is a step-by-step breakdown of its process:

Reminder Webhook
This node acts as the entry point for the workflow. It's a unique URL that waits for data to be sent to it from an external application, such as a booking or scheduling platform. When a new appointment is created in that system, it sends a JSON payload to this webhook.

Extract Appointment Data
This is a Code node that processes the incoming data. It's a critical step that:
Extracts the customer's name, phone number, appointment time, and service from the webhook's JSON payload.
Includes validation to ensure a phone number is present, throwing an error if it's missing.
Formats the raw appointment time into a human-readable string for the SMS message.

Send SMS Reminder
This node uses your Twilio credentials to send an SMS message. It dynamically constructs the message using the data extracted in the previous step. The message is personalized with the customer's name and includes the formatted appointment details.

Setup Instructions
Import the Workflow
Copy the JSON code from the Canvas and import it into your n8n instance.

Connect Your Twilio Account
Click on the "Send SMS Reminder" node. In the "Credentials" section, you will need to either select your existing Twilio account or add new credentials by providing your Account SID and Auth Token from your Twilio console.

Find the Webhook URL
Click on the "Reminder Webhook" node. The unique URL for this workflow will be displayed. Copy this URL.

Configure Your Booking System
Go to your booking or scheduling platform (e.g., Calendly, Acuity). In the settings or integrations section, find where you can add a new webhook. Paste the URL you copied from n8n here. You'll need to map the data fields from your booking system (like customer name, phone, etc.) to match the expected format shown in the comments of the "Extract Appointment Data" node.

Once these steps are complete, your workflow will be ready to automatically send SMS reminders whenever a new appointment is created.
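As a rough guide to what the "Extract Appointment Data" Code node does, here is a hedged sketch. The incoming field names are assumptions about the booking platform's payload; the actual expected format is documented in the node's own comments.

```typescript
// Hedged sketch of the extraction, validation, and formatting logic.
interface AppointmentPayload {
  customer_name?: string;
  phone?: string;
  appointment_time?: string; // assumed ISO 8601 timestamp from the booking platform
  service?: string;
}

function extractAppointmentData(body: AppointmentPayload) {
  // Validation: a reminder cannot be sent without a phone number.
  if (!body.phone) {
    throw new Error("Missing phone number in webhook payload");
  }
  // Format the raw timestamp into a human-readable string for the SMS.
  const formattedTime = new Date(body.appointment_time ?? "").toLocaleString("en-US", {
    weekday: "long",
    month: "long",
    day: "numeric",
    hour: "numeric",
    minute: "2-digit",
  });
  return {
    name: body.customer_name ?? "there",
    phone: body.phone,
    service: body.service ?? "your appointment",
    formattedTime, // used by the Send SMS Reminder node to build the message text
  };
}
```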
by Angel Menendez
Streamline Case Management in TheHive via Slack!
Our TheHive Slack Integration empowers SOC analysts by allowing them to efficiently manage and update case attributes directly within Slack, reducing the need to switch contexts and enhancing response time.

Key Features:
**Direct Case Management**: Modify case details such as assignee, severity, status, and more through intuitive form inputs embedded within Slack messages.
**Seamless Integration**: Assumes matching email addresses between TheHive and Slack users for straightforward assignee updates. Note: Ensure email consistency to avoid assignment errors.
**Instant Case Actions**: Quickly close cases as false positives or adjust threat levels with minimal clicks, directly impacting case status in TheHive and reflecting updates immediately in Slack.
**Task Management**: Add tasks to cases through a user-friendly modal popup, fostering better task tracking and delegation within your team.

Operational Benefits:
**Efficiency**: Enables analysts to perform multiple case actions without leaving Slack, streamlining workflows and saving valuable time.
**Accuracy**: Reduces the chances of human error by providing a controlled interface for case updates.
**Agility**: Enhances the SOC team's agility by providing tools for rapid response and case management, crucial for effective security operations.

Setup Tips:
Verify that all SOC team members have matching email IDs in TheHive and Slack.
Familiarize your team with the Slack form inputs and ensure they understand the importance of accurate data entry.
Regularly review and update the integration settings to accommodate any changes in your security operations protocols.

Need Help?
For detailed setup instructions or troubleshooting, refer to our Integration Guide or reach out on our Support Forum.

Leverage this integration to maximize your SOC team's efficiency and responsiveness, ensuring that case management is as streamlined and effective as possible.