by Didac Fernandez
Nova AI Content Marketing Agent - LinkedIn & Facebook Automation

This n8n template demonstrates how to create a complete AI-powered social media content creation and scheduling system that generates platform-optimized posts for LinkedIn and Facebook with custom images and human approval workflows.

Possible use cases:
- Generate a full week of social media content from a single brand brief
- Create platform-specific content that maintains brand voice consistency
- Automate image generation with AI while maintaining quality control
- Schedule approved content across multiple social platforms
- Track and organize all content in centralized spreadsheets

How it works
- The automation starts with a form submission collecting 10 brand variables (name, industry, demographics, etc.)
- The Nova AI Agent analyzes the brand information and generates 6 distinct social media posts (3 professional LinkedIn posts, 3 community-focused Facebook posts)
- Content is split by platform and routed to separate image generation workflows (see the sketch at the end of this section)
- Google Imagen 4 Ultra creates custom visuals for each post with platform-specific aspect ratios
- Each generated image is sent to Slack for human approval via interactive forms
- If feedback is provided, NanoBanana AI edits the image based on natural language instructions
- Approved images are uploaded to Google Drive with organized naming conventions
- All content data is logged to Google Sheets with image URLs and scheduling information
- Final posts are scheduled via the Late API to their respective social platforms
- The workflow loops through each post individually for quality control

Requirements
- OpenRouter API credentials for GPT-5 Mini access
- Replicate API key for Google Imagen 4 Ultra and NanoBanana
- Slack OAuth2 credentials with bot permissions
- Google Drive OAuth2 credentials
- Google Sheets API access
- GetLate API key connected to LinkedIn and Facebook accounts
- Perplexity API for research enhancement (optional)

HOW TO USE

STEP 1 - Setup Form and Brand Variables
- Configure the Form Trigger webhook URL for brand data collection
- Update the 10 form fields with your specific industry placeholders
- Test the form submission to ensure data flows correctly

STEP 2 - Configure AI Services
- Add your OpenRouter API credentials to both Chat Model nodes
- Add your Replicate API key to the HTTP Header Auth credential
- Configure Perplexity API credentials for research functionality
- Set up custom session keys for memory management

STEP 3 - Setup Approval Workflow
- Add Slack OAuth2 credentials to both "Send message and wait" nodes
- Update the Slack channel ID to your preferred approval channel
- Configure the custom form fields for approval/feedback collection

STEP 4 - Configure Storage and Scheduling
- Add Google Drive OAuth2 credentials and update the target folder ID
- Add Google Sheets credentials and update the spreadsheet ID
- Get your Late API key from getlate.dev and add it to the HTTP Header Auth credential
- Update the Late accountId in both Schedule Post nodes with your platform IDs

STEP 5 - Customize Content Strategy
- Modify the Nova system prompt to match your brand voice requirements
- Adjust the visual style requirements in the AI Agent configuration
- Update posting date logic and timezone settings as needed
- Test the complete workflow with sample brand data
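For reference, here's a minimal sketch of the platform-split step as an n8n Code node. The field names (`posts`, `platform`, `imagePrompt`) and the aspect-ratio values are assumptions, not the template's actual schema - adapt them to Nova's real output.

```javascript
// n8n Code node sketch: split the AI agent's output into per-platform items.
// Field names (posts, platform, imagePrompt) are illustrative assumptions.
const items = [];
for (const item of $input.all()) {
  const posts = item.json.posts ?? [];
  for (const post of posts) {
    items.push({
      json: {
        platform: post.platform, // e.g. "linkedin" or "facebook"
        text: post.text,
        imagePrompt: post.imagePrompt,
        // Imagen aspect ratios differ per platform (values here are assumptions)
        aspectRatio: post.platform === 'linkedin' ? '1:1' : '4:3',
      },
    });
  }
}
return items;
```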
by Ferenc Erb
Overview
Transform your Bitrix24 Open Line channels with an intelligent chatbot that leverages Retrieval-Augmented Generation (RAG) technology to provide accurate, document-based responses to customer inquiries in real-time.

Use Case
This workflow is designed for organizations that want to enhance their customer support capabilities in Bitrix24 by providing automated, knowledge-based responses to customer inquiries. It's particularly useful for:
- Customer service teams handling repetitive questions
- Support departments with extensive documentation
- Sales teams needing quick access to product information
- Organizations looking to provide 24/7 customer support

What This Workflow Does

Smart Document Processing
- Automatically processes uploaded PDF documents
- Splits documents into manageable chunks
- Generates vector embeddings for semantic understanding
- Indexes content for efficient retrieval

AI-Powered Responses
- Utilizes Google Gemini AI to generate natural language responses
- Constructs answers based on relevant document content
- Maintains conversation context for coherent interactions
- Provides fallback responses when information is not available

Vector Database Integration
- Stores document embeddings in the Qdrant vector database
- Enables semantic search beyond simple keyword matching
- Retrieves the most relevant information for each query
- Maintains a persistent knowledge base that grows over time

Webhook Handler
- Processes incoming messages from Bitrix24 Open Line channels
- Handles authentication and security validation
- Routes different types of events to appropriate handlers
- Manages session and conversation state

Event Routing
Intelligently routes different event types (a routing sketch appears at the end of this section):
- ONIMBOTMESSAGEADD: processes new user messages
- ONIMBOTJOINCHAT: handles the bot joining a conversation
- ONAPPINSTALL: manages application installation
- ONIMBOTDELETE: handles bot deletion

Document Management
- Organizes processed documents in designated folders
- Tracks document processing status
- Moves indexed documents to appropriate locations
- Maintains document metadata for reference

Interactive Menu
- Provides menu-based options for common user requests
- Customizable menu items and responses
- Easy navigation for users seeking specific information
- Fallback to a human operator when needed

Technical Architecture

Components
- Webhook Handler: receives and validates incoming requests from Bitrix24
- Credential Manager: securely manages authentication tokens and API keys
- Event Router: directs events to the appropriate processing functions
- Document Processor: handles document loading, chunking, and embedding
- Vector Store: Qdrant database for storing and retrieving document embeddings
- Retrieval System: searches for relevant document chunks based on user queries
- LLM Integration: Google Gemini model for generating natural language responses
- Response Manager: formats and sends responses back to Bitrix24

Integration Points
- **Bitrix24 API**: for bot registration, message handling, and user interaction
- **Ollama API**: for generating document embeddings
- **Qdrant API**: for vector storage and retrieval
- **Google Gemini API**: for AI-powered response generation

Setup Instructions

Prerequisites
- Active Bitrix24 account with Open Line channels enabled
- Access to an n8n instance
- Ollama API credentials
- Qdrant vector database access
- Google Gemini API key

Configuration Steps

Initial Setup
- Import the workflow into your n8n instance
- Configure credentials for all services
- Set up webhook endpoints

Bitrix24 Configuration
- Create a new Bitrix24 application
- Configure webhook URLs
- Set appropriate permissions
- Install the application to your Bitrix24 account

Document Storage
- Create a designated folder in Bitrix24 for knowledge base documents
- Configure folder paths in the workflow settings
- Upload initial documents to be processed

Bot Configuration
- Customize the bot's name, avatar, and description
- Configure welcome messages and menu options
- Set up fallback responses

Testing
- Verify successful installation
- Test the document processing pipeline
- Send test queries to evaluate response quality
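As a minimal illustration of the event routing described above, here's a hedged n8n Code node sketch. The payload path (`body.event`) and route labels are assumptions - verify them against your actual Bitrix24 webhook data.

```javascript
// n8n Code node sketch: tag each incoming Bitrix24 event so a downstream
// Switch node can route it. The field path ($json.body.event) is an assumption.
const event = $json.body?.event ?? 'UNKNOWN';
const routes = {
  ONIMBOTMESSAGEADD: 'message', // new user message -> RAG answer path
  ONIMBOTJOINCHAT: 'join',      // bot added to a chat -> send welcome menu
  ONAPPINSTALL: 'install',      // app installed -> register bot, store tokens
  ONIMBOTDELETE: 'delete',      // bot removed -> clean up stored credentials
};
return [{ json: { ...$json, route: routes[event] ?? 'fallback' } }];
```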
by Sam Nesler
Syncs assignments and completion states back and forth between Canvas LMS and a Notion database. Automatically triggers every 2 hours during the school day by default (seven times a day), but also supports manual refreshing via webhooks.

Setup
You'll need a few things to get started:
- A Canvas API key. You can generate one by going to your Canvas account settings and clicking the "New Access Token" button. The URL looks like https://canvas.wisc.edu/profile/settings
  - You'll also need to replace the URLs in the Canvas nodes with your institution's domain, unless you're a student at UW-Madison. The Canvas nodes are all the HTTP Request nodes except the one labelled "OpenAI Categorization", which is an OpenAI node and will require a key in a later step.
- A Notion integration token. You can find this by going to your Notion integrations page and clicking "Create new integration". You can make it an "Internal Integration".
- A Notion database to sync to. I made a template for use with the workflow, but you can use any database that has the following fields:
  - Status (status): Status with at least the options "Not Started" and "Completed" - assignments start out "Not Started" and are marked "Completed" when they are submitted on Canvas.
  - Estimate (select): Select with at least the options "XS", "S", "M", "L", "XL" - this is where the estimated time to complete the assignment is stored. Even if you don't use AI, assignments start out as "M".
  - Priority (select): Select with at least the options "Could Do", "Should Do", "Must Do" - assignments start out "Should Do".
  - ID (text): this is where the ID of the assignment is stored. We use this to sync without keeping a database on the server.
  - Due Date (date): this is where the due date of the assignment is stored.
  - Class (text): this is where the name of the class is stored.
  - Link (URL): this is where the link to the assignment is stored.
- The ID of the Notion database you want to sync to. You can find this by clicking "Share" in the top right of your database and copying the link. The ID is the part of the link that comes after https://www.notion.so/ and before ?v=. So for https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4, the ID would be 1976e99d91128076b034e7379464560f (a small extraction sketch follows at the end of this section).
- An OpenAI key for assignment length estimation, or disable that node.

Manual Refreshing
Embed the production URL from the Webhook Trigger inside a "toggle list" or "toggle heading" inside Notion, then expand the heading to refresh, like so:
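Here's a small sketch of the database-ID extraction described above, in case you want to automate it; the helper name is hypothetical.

```javascript
// Sketch: pull the 32-character database ID out of a Notion share link.
// The ID is the last path segment, before the "?v=..." query string.
function notionDatabaseId(shareUrl) {
  const { pathname } = new URL(shareUrl);
  const lastSegment = pathname.split('/').pop();
  const match = lastSegment.match(/[0-9a-f]{32}$/);
  if (!match) throw new Error('No database ID found in URL');
  return match[0];
}

// Example from the setup notes above:
console.log(notionDatabaseId(
  'https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4'
)); // -> 1976e99d91128076b034e7379464560f
```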
by Angel Menendez
Streamline Case Management in TheHive via Slack!

Our TheHive Slack integration empowers SOC analysts by allowing them to efficiently manage and update case attributes directly within Slack, reducing the need to switch contexts and improving response time.

Key Features:
- **Direct Case Management**: Modify case details such as assignee, severity, status, and more through intuitive form inputs embedded within Slack messages.
- **Seamless Integration**: Assumes matching email addresses between TheHive and Slack users for straightforward assignee updates. Note: ensure email consistency to avoid assignment errors.
- **Instant Case Actions**: Quickly close cases as false positives or adjust threat levels with minimal clicks, directly updating case status in TheHive and reflecting the change immediately in Slack.
- **Task Management**: Add tasks to cases through a user-friendly modal popup, fostering better task tracking and delegation within your team.

Operational Benefits:
- **Efficiency**: Enables analysts to perform multiple case actions without leaving Slack, streamlining workflows and saving valuable time.
- **Accuracy**: Reduces the chance of human error by providing a controlled interface for case updates.
- **Agility**: Enhances the SOC team's agility by providing tools for rapid response and case management, crucial for effective security operations.

Setup Tips:
- Verify that all SOC team members have matching email addresses in TheHive and Slack.
- Familiarize your team with the Slack form inputs and ensure they understand the importance of accurate data entry.
- Regularly review and update the integration settings to accommodate any changes in your security operations protocols.

Need Help?
For detailed setup instructions or troubleshooting, refer to our Integration Guide or reach out on our Support Forum.

Leverage this integration to maximize your SOC team's efficiency and responsiveness, ensuring that case management is as streamlined and effective as possible.
by Angel Menendez
Upload Public-Facing Images to an S3 Cloudflare Bucket via Slack Modal

🛠 Who is this for?
This workflow is for teams that use Slack for internal communication and need a streamlined way to upload public-facing images to an S3 Cloudflare bucket. It's especially beneficial for DevOps, marketing, or content management teams who frequently share assets and require efficient cloud storage integration.

💡 What problem does this workflow solve?
Manually uploading images to cloud storage can be time-consuming and disruptive, especially if you're already working in Slack. This workflow automates the process, allowing you to upload images directly from Slack via a modal popup. It reduces friction and keeps your workflow within a single platform.

🔍 What does this workflow do?
This workflow connects Slack with an S3 Cloudflare bucket to simplify the image-uploading process:
- **Slack Modal Interaction**: Users trigger a Slack modal to select images for upload.
- **Dynamic Folder Management**: Choose to create a new folder or use an existing one for uploads (a key-building sketch follows at the end of this section).
- **S3 Integration**: Automatically uploads the images to a specified S3 Cloudflare bucket.
- **Slack Confirmation**: After upload, Slack sends a confirmation with the uploaded file URLs.

🚀 Setup Instructions

Prerequisites
- Slack bot with the following permissions: commands, files:write, files:read, chat:write
- Cloudflare S3 credentials: create an API token with write access to your S3 bucket.
- n8n instance: ensure n8n is properly set up with webhook capabilities.

Steps
1. Configure the Slack bot: Set up a Slack app, enable the Events API, and add your n8n webhook URL to the Event Subscriptions section.
2. Add credentials: Add your Slack API and Cloudflare S3 credentials to n8n.
3. Customize the workflow: Open the Idea Selector Modal node and update the folder options to suit your needs. Update the Post Image to Channel node with your Slack channel ID.
4. Deploy the workflow: Activate the workflow and test it by triggering the Slack modal.

🛠 How to Customize This Workflow
- Adjust the Slack modal: modify the modal layout in the Idea Selector Modal node to add fields or adjust the styling.
- Change the bucket structure: update the Upload to S3 Bucket node to customize folder paths or change naming conventions.

🔗 References and Helpful Links
- Slack API Documentation
- Cloudflare S3 Setup
- n8n Documentation

📓 Workflow Notes

Key features:
- **Slack Integration**: Uses Slack modal interactions to streamline the upload process.
- **Cloud Storage**: Automatically uploads to a Cloudflare S3 bucket.
- **User Feedback**: Sends a Slack message with file URLs upon successful upload.

Setup dependencies:
- Slack API token
- Cloudflare S3 credentials
- n8n webhook configuration

Sticky notes are embedded within the workflow to guide you through configuration and explain node functionality.

🌟 Why Use This Workflow?
This workflow keeps your image-uploading process intuitive, efficient, and fully integrated with the tools you already use. By leveraging n8n's flexibility, you can ensure smooth collaboration and quick sharing of public-facing assets without switching contexts.
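As an illustration of the dynamic folder handling, here's a hedged sketch of how the object key might be assembled in a Code node. The field names (`folder`, `newFolderName`, `file.name`) are assumptions that depend on your modal's block IDs.

```javascript
// Sketch: build the S3 object key from the modal selections.
// Field names are assumptions - align them with your modal's actual payload.
const folder = $json.newFolderName?.trim() || $json.folder; // new folder wins if provided
const safeName = $json.file.name.replace(/[^\w.\-]+/g, '_'); // avoid awkward URL characters
return [{ json: { key: `${folder}/${Date.now()}-${safeName}` } }];
```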
by Alex Kim
Printify Automation - Update Title and Description Workflow

This n8n workflow automates the process of retrieving products from Printify, generating optimized product titles and descriptions, and updating them back on the platform. It leverages OpenAI for content generation and integrates with Google Sheets for tracking and managing updates.

Features
- **Integration with Printify**: Fetch shops and products through Printify's API.
- **AI-Powered Optimization**: Generate engaging product titles and descriptions using OpenAI's GPT model (a prompt-assembly sketch follows at the end of this section).
- **Google Sheets Tracking**: Log and manage updates in Google Sheets.
- **Custom Brand Guidelines**: Ensure a consistent tone by incorporating brand-specific instructions.
- **Loop Processing**: Iteratively process each product in batches.

Workflow Structure

Nodes Overview
- Manual Trigger: manually start the workflow for testing purposes.
- Printify - Get Shops: retrieves the list of shops from Printify.
- Printify - Get Products: fetches product details for each shop.
- Split Out: breaks down the product list into individual items for processing.
- Loop Over Items: iteratively processes products in manageable batches.
- Generate Title and Desc: uses OpenAI GPT to create optimized product titles and descriptions.
- Google Sheets Integration: a trigger monitors Google Sheets for changes, and a logging node records product updates, including old and new titles/descriptions.
- Conditional Logic: If nodes ensure products are ready for updates and stop processing once completed.
- Printify - Update Product: sends updated titles and descriptions back to Printify.
- Brand Guidelines + Custom Instructions: sets the brand tone and seasonal instructions.

Setup Instructions

Prerequisites
- n8n instance: ensure n8n is installed and configured.
- Printify API key: obtain an API key from your Printify account and add it to n8n under HTTP Header Auth.
- OpenAI API key: obtain an API key from OpenAI and add it to n8n under OpenAI API.
- Google Sheets integration: share your Google Sheets with the Google API service account and configure Google Sheets credentials in n8n.

Workflow Configuration
1. Set Brand Guidelines: Update the Brand Guidelines + Custom Instructions node with your brand name, tone, and seasonal instructions.
2. Batch Size: Configure the Loop Over Items node for optimal batch sizes.
3. Google Sheets Configuration: Set the correct Google Sheets document and sheet names in the integration nodes.
4. Run the Workflow: Start it manually or configure it to trigger automatically.

Key Notes
- **Customization**: Modify the API calls to support other platforms like Printful or Vistaprint.
- **Scalability**: Use batch processing for efficient handling of large product catalogs.
- **Error Handling**: Configure retries or logging for any failed nodes.

Output Examples
- **Input Title**: "Classic White T-Shirt"
- **Generated Title**: "Stylish Classic White Tee for Everyday Wear"
- **Input Description**: "Plain white T-shirt made of cotton."
- **Generated Description**: "Discover comfort and style with our classic white tee, crafted from premium cotton for all-day wear. Perfect for casual outings or layering."

Next Steps
- Monitor updates: use Google Sheets to review logs of updated products.
- Expand integration: add support for more Printify shops or integrate with other platforms.
- Enhance AI prompts: customize prompts for different product categories or seasonal needs.

Feel free to reach out for additional guidance or troubleshooting!
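For illustration, here's a hedged sketch of how the title/description prompt might be assembled from the brand guidelines. The variable names (`brandName`, `tone`, `seasonalNote`) are assumptions, not the template's actual fields.

```javascript
// Sketch: assemble the prompt the "Generate Title and Desc" node could send
// to OpenAI. Variable names mirror the "Brand Guidelines + Custom Instructions"
// node conceptually, but are assumptions.
const { brandName, tone, seasonalNote } = $json;
const prompt = [
  `You write product copy for ${brandName}. Tone: ${tone}.`,
  seasonalNote ? `Seasonal angle: ${seasonalNote}.` : '',
  'Rewrite this product title and description to be engaging and specific.',
  `Title: ${$json.title}`,
  `Description: ${$json.description}`,
  'Return JSON: {"title": "...", "description": "..."}',
].filter(Boolean).join('\n');
return [{ json: { prompt } }];
```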
by Alexandra Spalato
YouTube Content Repurposing Automation

Who's it for
This workflow is for content creators, marketers, agencies, coaches, and businesses who want to maximize their YouTube content ROI by automatically generating multiple content assets from single videos. It's especially useful for professionals who want to:
- Repurpose YouTube videos into blogs, social posts, newsletters, and tutorials without manual effort
- Scale their content production across multiple channels and platforms
- Create consistent, high-quality content derivatives while saving time and resources
- Build automated content systems that generate multiple revenue streams
- Maintain an active presence across social media, email, and blog platforms simultaneously

What problem is this workflow solving
Content creators face significant challenges when trying to maximize their video content:
- Time-intensive manual repurposing: converting one YouTube video into multiple content formats traditionally requires hours of manual writing, editing, and formatting across different platforms.
- Inconsistent content quality: manual repurposing often leads to varying quality levels and missed opportunities to optimize content for specific platforms.
- High costs for content services: hiring ghostwriters or content agencies to repurpose videos can cost thousands of dollars monthly.
- Scaling bottlenecks: manual processes prevent creators from efficiently scaling their content across multiple channels and formats.

This workflow solves these problems by automatically extracting YouTube video transcripts, using AI to generate multiple high-quality content formats (tutorials, blog posts, social media content, newsletters), and organizing everything in Airtable for easy management and distribution.

How it works

Automated Video Processing
Starts with a manual trigger and retrieves YouTube URLs from your Airtable configuration, processing only videos marked as "selected" while filtering out those marked for deletion.

Intelligent Transcript Extraction
Uses the Scrape Creators API to extract video transcripts, automatically cleaning and formatting the text for optimal AI processing and content generation (a request sketch follows at the end of this section).

Multi-Format Content Generation
Leverages OpenRouter models, so you can easily test different AI models and choose the one that delivers the best results for your needs:
- Step-by-step tutorials with code snippets and technical details
- YouTube scripts with hooks, titles, and conclusions
- Blog posts optimized for lead generation
- Structured summaries with key takeaways
- LinkedIn posts with engagement triggers
- Newsletter content for email marketing
- Twitter/X posts for social media

Smart Content Filtering
Processes only the content types you've selected in Airtable, ensuring efficient resource usage and faster execution times.

Automated Content Organization
Matches and combines all generated content pieces by URL, then updates your Airtable with complete, ready-to-use content assets organized by type and source video.

How to set up

Required credentials
- **OpenRouter API key**
- **Airtable Personal Access Token**
- **Scrape Creators API key** - for YouTube transcript extraction and processing

Airtable base setup
Create an Airtable base with one main table, Videos, containing these fields:
- title (Single line text): video title for reference
- url (URL): YouTube video URL to process
- Status (Single select): options "selected", "delete", "processed"
- output (Multiple select): content types to generate - summary, tutorial, blog-post, linkedin, newsletter, tweeter, youtube
- summary (Long text): generated video summary
- tutorial (Long text): generated step-by-step tutorial
- key_take_aways (Long text): extracted key insights
- blog_post (Long text): generated blog post content
- linkedin (Long text): LinkedIn post content
- newsletter (Long text): email newsletter content
- tweeter (Long text): Twitter/X post content
- youtube_titles (Long text): YouTube video title suggestions
- youtube_hook (Long text): video opening hooks
- youtube_steps (Long text): video step breakdowns
- youtube_conclusion (Long text): video endings/CTAs

API Configuration

Scrape Creators setup:
- Sign up for the Scrape Creators API and obtain your API key from the dashboard
- Configure the HTTP Request node with your credentials
- Set the endpoint to: https://api.scrapecreators.com/v1/youtube/video/transcript

OpenRouter setup:
- Create an OpenRouter account and generate an API key

Workflow configuration
- Import the workflow JSON into your n8n instance
- Update all credential references with your API keys
- Configure the Airtable nodes with your base and table IDs
- Test the workflow with a single video URL first

Requirements
- **n8n instance** (self-hosted or cloud)
- **Active API subscriptions** for OpenRouter (or the LLM of your choice), Airtable, and Scrape Creators
- **YouTube video URLs** - must be publicly accessible videos with available transcripts
- **Airtable account** - the free tier is sufficient for most use cases

How to customize the workflow

Modify content generation prompts
Edit the LLM Chain nodes to customize content style and format:
- **Tutorial node**: adjust technical depth and formatting preferences
- **Blog post node**: modify tone, length, and CTA strategies
- **LinkedIn node**: customize engagement hooks and professional tone
- **Newsletter node**: tailor subject lines and email marketing approach

Adjust AI model selection
Update the OpenRouter Chat Model to use different models.

Add new content formats
Create additional LLM Chain nodes for new content types:
- Instagram captions
- TikTok scripts
- Podcast descriptions
- Course outlines
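As a rough sketch of the transcript-extraction call (the HTTP Request node equivalent): the endpoint is the one given above, but the query parameter name and auth header are assumptions - confirm them against the Scrape Creators documentation.

```javascript
// Sketch of the transcript fetch. Endpoint from the setup notes; the "url"
// query parameter and "x-api-key" header are assumptions.
const res = await fetch(
  'https://api.scrapecreators.com/v1/youtube/video/transcript?url=' +
    encodeURIComponent($json.url),
  { headers: { 'x-api-key': 'YOUR_SCRAPE_CREATORS_KEY' } }
);
const data = await res.json();
// Clean the transcript for LLM prompting: collapse whitespace.
const transcript = String(data.transcript ?? '').replace(/\s+/g, ' ').trim();
return [{ json: { ...$json, transcript } }];
```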
by Simon
Address Validation Workflow

About
This workflow automates the process of validating and correcting client shipping addresses in Billbee, ensuring accurate delivery information. It's ideal for e-commerce businesses looking to save time and reduce errors in their order fulfillment process. The workflow uses Billbee, an order management platform for small to medium-sized online retailers, and the Endereco API for address validation.

Who Is This For?
- **E-Commerce Businesses**: Streamline order fulfillment by automatically correcting common shipping address errors.
- **Warehouse Teams**: Reduce manual work and ensure packages are shipped to the correct address.
- **Small to Medium-Sized Retailers**: Businesses using Billbee to manage orders and requiring efficient, automated solutions for address validation.

How it Works
1. Trigger: The workflow starts via a Billbee webhook when an order is imported.
2. Fetch Data: Retrieve the client's shipping address using the Order ID.
3. Validate Address: Send the address to the Endereco API for validation and correction (e.g., house number errors).
4. Conditional Actions: If the address is valid, update it in Billbee; if invalid, tag the order with "Validation Error".
5. Track Status: Add tags in Billbee for processed orders.

Setup Steps
1. API Keys: Obtain Billbee Developer/User API keys and an Endereco API key.
2. Billbee Rule: Create an automation rule with trigger "Order imported" and action "Call External URL", passing the OrderId to trigger the n8n workflow.
3. Optional: Use a secondary trigger (e.g., order state changes to "gepackt") for manual corrections.

Customization Options
- Filter delivery addresses: Customize filters to exclude specific delivery types, such as pickup shops ("Postfiliale", "Paketshop", or "Packstation"). Filters can be adjusted within Billbee or in the workflow (a filter sketch follows at the end of this section).
- Error handling: Configure additional actions for orders that fail validation, such as notifying your team or flagging orders for manual review.
- Order tags: Define custom tags in Billbee to better track order statuses (e.g., "Address Corrected", "Validation Error").
- Trigger types: Use additional triggers, such as changes to order states (e.g., "gepackt" or "In Fulfillment"), for manual corrections or validations.
- Address fields: Modify the workflow to focus on specific address components, such as postal codes, city names, or country codes.
- Validation rules: Adjust the Endereco API settings or add custom logic to refine the validation criteria to your business needs.

API Documentation
- **Endereco**: Endereco API Docs
- **Billbee**: Billbee API Docs
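A minimal sketch of the pickup-shop filter mentioned under the customization options; the address field names are assumptions and should be matched to Billbee's actual payload.

```javascript
// Sketch: skip pickup-point addresses before validation. Field names
// (shippingAddress.line1/line2) are assumptions.
const PICKUP_MARKERS = ['Postfiliale', 'Paketshop', 'Packstation'];
const addr = $json.shippingAddress ?? {};
const text = `${addr.line1 ?? ''} ${addr.line2 ?? ''}`;
const isPickup = PICKUP_MARKERS.some((m) => text.includes(m));
return [{ json: { ...$json, skipValidation: isPickup } }];
```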
by Mark Shcherbakov
Video Guide
I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n (see the YouTube link).

Who is this for?
This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.

What problem does this workflow solve?
Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.

What this workflow does
The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
- Fetching and comparing files to avoid duplicate processing.
- Handling file downloads and extracting content based on the file type.
- Converting documents into vectorized data for contextual information retrieval.
- Storing and querying vectorized data from a Supabase vector store.

Key capabilities:
- File Extraction and Processing: automates handling of multiple file formats (e.g., PDFs, text files) and extracts document content.
- Vectorized Embeddings Creation: generates embeddings for processed data to enable AI-driven interactions.
- Dynamic Data Querying: allows users to query their document repository conversationally using a chatbot.

Setup: n8n Workflow
1. Fetch File List from Supabase: Use Supabase to retrieve the stored file list from a specified bucket. Add logic to manage the empty-folder placeholders Supabase returns, to avoid processing them incorrectly.
2. Compare and Filter Files: Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table. Exclude duplicates and skip placeholder files so only unprocessed files are handled.
3. Handle File Downloads: Download new files using detailed storage configurations for public/private access. Adjust the storage settings and GET requests to match your Supabase setup.
4. File Type Processing: Use a Switch node to target specific file types (e.g., PDFs or text files) and process the content with the relevant tools: for PDFs, extract the embedded content; for text files, process the text directly.
5. Content Chunking: Break large text data into smaller chunks using the Text Splitter node. Define the chunk size (default: 500 tokens) and overlap to retain the necessary context across chunks (a chunking sketch follows at the end of this section).
6. Vector Embedding Creation: Generate vectorized embeddings for the processed content using OpenAI's embedding tools. Include metadata, such as the file ID, for easy data retrieval.
7. Store Vectorized Data: Save the vectorized information into a dedicated Supabase vector store, using the default schema and table provided by Supabase for seamless setup.
8. AI Chatbot Integration: Add a chatbot node to handle user input and retrieve relevant document chunks. Use metadata like the file ID for targeted queries, especially when multiple documents are involved.

Testing
- Upload sample files to your Supabase bucket.
- Verify that files are processed and stored successfully in the vector store.
- Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
- Check the accuracy and contextual relevance of the retrieved results.
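To make the chunking step concrete, here's a simplified sketch of what the Text Splitter node does; it counts characters rather than tokens for simplicity, and the field names (`content`, `fileId`) are assumptions.

```javascript
// Sketch of fixed-size chunking with overlap, so context carries across
// chunk boundaries. The node's default of 500 is in tokens; this uses characters.
function chunkText(text, size = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Each chunk keeps the source file ID so the chatbot can filter by document.
return chunkText($json.content).map((text, i) => ({
  json: { text, fileId: $json.fileId, chunkIndex: i },
}));
```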
by David Olusola
How It Works
The workflow is an automated appointment reminder system built on n8n. Here is a step-by-step breakdown of its process:

Reminder Webhook
This node acts as the entry point for the workflow. It's a unique URL that waits for data to be sent to it from an external application, such as a booking or scheduling platform. When a new appointment is created in that system, it sends a JSON payload to this webhook.

Extract Appointment Data
This Code node processes the incoming data (a sketch follows at the end of this section). It's a critical step that:
- Extracts the customer's name, phone number, appointment time, and service from the webhook's JSON payload.
- Validates that a phone number is present, throwing an error if it's missing.
- Formats the raw appointment time into a human-readable string for the SMS message.

Send SMS Reminder
This node uses your Twilio credentials to send an SMS message. It dynamically constructs the message using the data extracted in the previous step, personalizing it with the customer's name and including the formatted appointment details.

Setup Instructions
1. Import the Workflow: Copy the JSON code from the Canvas and import it into your n8n instance.
2. Connect Your Twilio Account: Click on the "Send SMS Reminder" node. In the "Credentials" section, select your existing Twilio account or add new credentials by providing the Account SID and Auth Token from your Twilio console.
3. Find the Webhook URL: Click on the "Reminder Webhook" node. The unique URL for this workflow will be displayed; copy it.
4. Configure Your Booking System: Go to your booking or scheduling platform (e.g., Calendly, Acuity). In the settings or integrations section, find where you can add a new webhook and paste the URL you copied from n8n. You'll need to map the data fields from your booking system (customer name, phone, etc.) to the format expected by the "Extract Appointment Data" node, as shown in its comments.

Once these steps are complete, your workflow will automatically send SMS reminders whenever a new appointment is created.
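Here's a hedged sketch of what the "Extract Appointment Data" Code node might look like; the incoming field names are assumptions and must be mapped to whatever your booking platform actually sends.

```javascript
// Sketch of the "Extract Appointment Data" node. Incoming field names
// (name, phone, appointment_time, service) are assumptions.
const { name, phone, appointment_time, service } = $json.body ?? {};

// Validation: a reminder without a phone number can't be delivered.
if (!phone) {
  throw new Error('Missing phone number in webhook payload');
}

// Format the raw timestamp into a human-readable string for the SMS.
const when = new Date(appointment_time).toLocaleString('en-US', {
  weekday: 'long', month: 'long', day: 'numeric',
  hour: 'numeric', minute: '2-digit',
});

return [{
  json: {
    phone,
    message: `Hi ${name}, this is a reminder for your ${service} appointment on ${when}.`,
  },
}];
```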
by Angel Menendez
Analyze Emails for Security Insights

Who is this for?
This workflow is ideal for security teams, IT Ops professionals, and managed service providers (MSPs) responsible for monitoring and validating email traffic. It's especially useful for organizations that need to identify potential phishing attempts, spam, or compromised accounts by analyzing email headers and IP reputation.

What problem is this workflow solving?
This workflow helps identify malicious or suspicious emails by verifying email authentication headers (SPF, DKIM, DMARC) and analyzing the reputation of the originating IP address. By automating these checks, it reduces manual analysis time and flags potential threats efficiently.

What this workflow does
- **Email Monitoring**: Polls a specified Microsoft Outlook folder for new emails in real-time.
- **Header Analysis**: Retrieves and processes email headers to extract critical information such as authentication results and the sender's IP address (a parsing sketch follows at the end of this section).
- **IP Reputation Check**: Leverages external APIs (IP Quality Score and IP-API) to analyze the originating IP for potential spam or malicious activity.
- **Authentication Validation**: Validates SPF, DKIM, and DMARC headers, determining whether the email passes industry-standard authentication protocols.
- **Data Aggregation and Reporting**: Combines all analyzed data into a unified format, ready for reporting or integration into downstream systems.
- **Webhook Integration**: Outputs the findings via a webhook, enabling integration with alerting tools or security information and event management (SIEM) platforms.

Setup
1. Connect to Outlook: Configure the Microsoft Outlook trigger node with valid OAuth2 credentials and specify the email folder to monitor for new messages.
2. API Keys (Optional): Obtain an API key for IP Quality Score (https://ipqualityscore.com) and ensure the IP-API endpoint is accessible. This step is optional, as ipqualityscore.com provides a limited number of free lookups each month.
3. Webhook Configuration: Set up a webhook endpoint to receive the output of the workflow.
4. Optional Adjustments: Customize polling intervals in the trigger node; modify header filters or extend the validation logic as needed.

How to customize this workflow to your needs
- **Add Alerts**: Use the Respond to Webhook node to trigger notifications in Slack, email, or any other communication channel.
- **Integrate with SIEM**: Forward the workflow output to SIEM tools like Splunk or the ELK Stack for further analysis.
- **Modify Validation Rules**: Update the SPF, DKIM, or DMARC logic in the Set nodes to align with your organization's security policies.
- **Expand IP Analysis**: Add more APIs or services, such as VirusTotal or AbuseIPDB, to enrich IP reputation data.

This workflow provides a robust foundation for email security monitoring and can be tailored to fit your organization's unique requirements. With its modular design and integration options, it's a versatile tool for enhancing your cybersecurity operations.
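To illustrate the authentication validation step, here's a simplified sketch of parsing SPF/DKIM/DMARC verdicts from an Authentication-Results header; real headers vary by provider, so treat this as a starting point rather than the template's actual logic.

```javascript
// Sketch: grab the pass/fail tokens for SPF, DKIM, and DMARC from an
// Authentication-Results header string.
function parseAuthResults(header) {
  const verdict = (name) => {
    const m = header.match(new RegExp(`${name}=(\\w+)`, 'i'));
    return m ? m[1].toLowerCase() : 'none';
  };
  return {
    spf: verdict('spf'),     // e.g. "pass", "fail", "softfail"
    dkim: verdict('dkim'),   // e.g. "pass", "fail"
    dmarc: verdict('dmarc'), // e.g. "pass", "fail"
  };
}

// Example:
parseAuthResults('spf=pass smtp.mailfrom=example.com; dkim=pass; dmarc=pass');
// -> { spf: 'pass', dkim: 'pass', dmarc: 'pass' }
```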
by Angel Menendez
Analyze Emails for Security Insights

Who is this for?
This workflow is ideal for IT professionals, security analysts, and organizations looking to strengthen their email security practices. It is particularly useful for those who need to analyze Gmail email headers for IP tracking, spoofing detection, and sender reputation assessment.

What problem is this workflow solving?
Email spoofing and phishing attacks are significant cybersecurity threats. By analyzing email headers, this workflow provides detailed insights into an email's origin, authentication status, and the reputation of the sending IP address. It helps detect potential spoofing attempts and assess the trustworthiness of incoming emails.

What this workflow does
This n8n workflow automates the analysis of email headers received in Gmail. It performs the following key functions:
- Triggering and Email Header Extraction: monitors Gmail inboxes for new emails and extracts their headers for analysis.
- Authentication Analysis: validates SPF, DKIM, and DMARC authentication results to ensure the email adheres to industry-standard security protocols.
- IP Analysis: extracts the originating IP address and evaluates its reputation and geographic details using external APIs.
- Reputation Scoring: integrates with IP Quality Score to detect spam activity and assess the sender's reputation.
- Consolidation and Webhook Response: aggregates all results into a single JSON response, making it easy to integrate with third-party platforms or tools for further automation.

Setup
1. Authenticate Gmail: Configure the Gmail Trigger node with your Gmail account credentials.
2. API Keys (Optional): Obtain an API key for IP Quality Score (https://ipqualityscore.com) and ensure the IP-API endpoint is accessible. This step is optional, as ipqualityscore.com provides a limited number of free lookups each month.
3. Activate the Workflow: Ensure the workflow is active so it processes incoming emails in real-time.

How to customize this workflow to your needs
- **Add Alerts**: Use the Gmail - Respond to Webhook node to trigger notifications in Slack, email, or any other communication channel.
- **Integrate with SIEM**: Forward the workflow output to SIEM tools like Splunk or the ELK Stack for further analysis.
- **Modify Validation Rules**: Update the SPF, DKIM, or DMARC logic in the Set nodes to align with your organization's security policies.
- **Expand IP Analysis**: Add more APIs or services, such as VirusTotal or AbuseIPDB, to enrich IP reputation data.

This workflow provides a robust foundation for email security monitoring and can be tailored to fit your organization's unique requirements. With its modular design and integration options, it's a versatile tool for enhancing your cybersecurity operations.