by Gerald Denor
# AI-Powered Proposal Generator - Sales Automation Workflow

## Overview
This n8n workflow automates the entire proposal generation process using AI, transforming client requirements into professional, customized proposals delivered via email in seconds.

## Use Case
Perfect for agencies, consultants, and sales teams who need to generate high-quality proposals quickly. Instead of spending hours writing proposals manually, this workflow captures client information through a web form and uses GPT-4 to generate contextually relevant, professional proposals.

## How It Works
1. **Form Trigger** - Captures client information through a customizable web form
2. **OpenAI Integration** - Processes form data and generates structured proposal content (a hedged sketch of such structured output follows this section)
3. **Google Drive** - Creates a copy of your proposal template
4. **Google Slides** - Populates the template with AI-generated content
5. **Gmail** - Automatically sends the completed proposal to the client

## Key Features
- **AI Content Generation**: Uses GPT-4 to create personalized proposal content
- **Professional Templates**: Integrates with Google Slides for polished presentations
- **Automated Delivery**: Sends proposals directly to clients via email
- **Form Integration**: Captures all necessary client data through web forms
- **Customizable Output**: Generates structured proposals with multiple sections

## Template Sections Generated
- Proposal title and description
- Problem summary analysis
- Three-part solution breakdown
- Project scope details
- Milestone timeline with dates
- Cost integration

## Requirements
- **n8n instance** (cloud or self-hosted)
- **OpenAI API key** for content generation
- **Google Workspace account** for Slides and Gmail
- **Basic n8n knowledge** for setup and customization

## Setup Complexity
**Intermediate** - Requires API credentials setup and basic workflow customization

## Benefits
- **Time Savings**: Reduces proposal creation from hours to minutes
- **Consistency**: Ensures all proposals follow the same professional structure
- **Personalization**: AI analyzes client needs for relevant content
- **Automation**: Eliminates manual copy-paste and formatting work
- **Scalability**: Handles multiple proposal requests simultaneously

## Customization Options
- Modify AI prompts for different industries or services
- Customize the Google Slides template design
- Adjust form fields for specific information needs
- Personalize email templates and signatures
- Configure milestone templates for different project types

## Error Handling
Includes basic error handling for API failures and form validation to ensure reliable operation.

## Security Notes
All credentials have been removed from this template. Users must configure their own:
- OpenAI API credentials
- Google OAuth2 connections for Slides, Drive, and Gmail
- Form webhook configuration

This workflow demonstrates practical AI integration in business processes and showcases n8n's capabilities for complex automation scenarios.
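To make "structured proposal content" concrete, here is a minimal sketch of a validation step you could run in an n8n Code node between the OpenAI and Google Slides nodes. The section field names and the `$json.message.content` path are assumptions for illustration, not the template's actual placeholder names.

```javascript
// n8n Code node sketch: validate the AI's structured proposal before the Slides step.
// All field names below are illustrative assumptions, not the template's real placeholders.
const proposal = JSON.parse($json.message.content); // assumed location of the GPT-4 output

const required = [
  'title', 'description', 'problem_summary',
  'solution_parts', 'project_scope', 'milestones', 'cost',
];

for (const key of required) {
  if (!(key in proposal)) {
    throw new Error(`AI response missing section: ${key}`);
  }
}

// solution_parts should hold the three-part solution breakdown
if (!Array.isArray(proposal.solution_parts) || proposal.solution_parts.length !== 3) {
  throw new Error('Expected a three-part solution breakdown');
}

return [{ json: proposal }];
```

Failing fast here keeps a malformed AI response from producing a half-populated slide deck that still gets emailed to a client.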
by JaredCo
# Real-time Weather Forecasts with MCP Tools

This n8n workflow demonstrates how to integrate real-time weather intelligence into any automation using the Model Context Protocol (MCP). Get current conditions and 5-day forecasts with natural language queries like "What's the weather like in Miami?" or "Will it rain next Tuesday in Seattle?" - all powered by live weather data and AI.

## Good to know
- No API keys required - uses a hosted MCP weather server with built-in WorldWeatherOnline integration
- Provides current conditions and detailed 5-day forecasts
- Natural language queries work for any location worldwide
- Powered by WorldWeatherOnline - the world's most accurate weather system
- Fully preconfigured and ready to run out of the box
- Enterprise-ready with error handling and rate limiting

## How it works
1. **Natural Language Input**: Receives weather queries via webhook, chat, email, or voice (a minimal webhook call sketch appears at the end of this section)
2. **AI Agent Processing**: The n8n Agent node interprets requests and determines:
   - Location extraction from natural language
   - Weather data type needed (current or 5-day forecast)
   - Response formatting preferences
3. **MCP Weather Tool**: A live hosted server provides:
   - Real-time current conditions (temperature, humidity, wind, conditions)
   - 5-day detailed forecasts with daily highs/lows
   - Weather descriptions and condition codes
   - Powered by WorldWeatherOnline's premium data
4. **Intelligent Responses**: The AI formats weather data into:
   - Conversational natural language responses
   - Structured data for downstream automation
   - Action-triggering data for workflows

## How to use
1. Import the workflow into n8n from the template
2. Add your preferred AI model API key to the Agent node
3. Customize the system prompt for your specific use case
4. Connect to your preferred input/output channels
5. Run and start querying weather with natural language

## Use Cases
- **Smart Home Automation**: "Turn on sprinklers if no rain forecast for 3 days"
- **Travel Planning**: "Check weather for my Paris trip next week"
- **Event Management**: "Will outdoor wedding conditions be good Saturday?"
- **Agriculture/Farming**: "Check 5-day forecast for planting schedule"
- **Logistics**: "Delay shipping if severe weather forecast in delivery zone"
- **Personal Assistant**: "Should I wear a jacket today in Chicago?"
- **Sports/Recreation**: "Surf conditions and wind forecast for weekend"
- **Construction**: "Safe working conditions for outdoor project this week"

## Requirements
- n8n instance (cloud or self-hosted)
- AI model provider account (OpenAI, Anthropic, Google, etc.)
- Internet connection for MCP weather server access
- Optional: Webhook endpoints for external integrations

## Customizing this workflow
- **Location Intelligence**: Add geocoding for address-to-coordinates conversion
- **Data Storage**: Save weather history to databases for trend analysis
- **Dashboard Integration**: Connect to Grafana, Tableau, or custom visualizations
- **Voice Integration**: Add speech-to-text for voice weather queries
- **Scheduling**: Set up automated daily/weekly weather briefings
- **Conditional Logic**: Trigger different actions based on weather conditions

## Sample Input/Output
Natural language queries:
- "What's the weather like in Miami?"
- "Will it rain next Tuesday in Seattle?"
- "5-day forecast for London"
- "Temperature in Tokyo tomorrow"
- "Weather conditions for outdoor event Saturday"

Rich responses:

```json
{
  "location": "Miami, FL",
  "current": {
    "temperature": "78°F",
    "condition": "Partly Cloudy",
    "humidity": "65%",
    "wind": "10 mph SE"
  },
  "forecast": {
    "today": "High 82°F, Low 71°F, 20% rain",
    "tomorrow": "High 85°F, Low 73°F, Sunny"
  },
  "ai_summary": "Perfect beach weather in Miami today! Partly cloudy with comfortable temperatures and light winds."
}
```

## Why This Workflow is Unique
- **Zero Setup Weather Data**: No API key management - the MCP server handles everything
- **World-Class Accuracy**: Powered by WorldWeatherOnline's premium weather data
- **AI-Powered Intelligence**: Natural language understanding of complex weather queries
- **Enterprise Ready**: Built-in error handling, rate limiting, and reliability
- **Global Coverage**: Worldwide weather data with location intelligence
- **Action-Oriented**: Designed for automation decisions, not just information display

Transform your automations with intelligent weather awareness powered by the world's most accurate weather system!

## 🧪 Setup Steps
✅ The Agent node is already configured:
- The system prompt is included
- The tool endpoint is pre-set

All you need to do is:
1. Add your AI model API key to the existing Agent credential
2. Hit run and you're done ✅

🔗 Full project link: GitHub: weathertrax-mcp-agent-demo
by Mohan Gopal
# 🧩 Workflow: Process Tour PDF from Google Drive to Pinecone Vector DB with OpenAI Embeddings

## Overview
This workflow automates the process of extracting tour information from PDF files stored in a Google Drive folder, processes and vectorizes the extracted data, and stores it in a Pinecone vector database for efficient querying. This is especially useful for building AI-powered search or recommendation systems for travel packages.

## Setup: Prerequisites
- A folder in Google Drive with PDF tour package brochures
- Pinecone account + API key
- OpenAI API key
- n8n cloud or self-hosted instance

## Workflow Setup Steps
**Trigger**
- **Manual Trigger** (When clicking 'Test workflow'): Used for manual testing and execution of the workflow.

**Google Drive Integration**
- **Step 1: Store Tour Packages in PDF Format** - Upload your curated tour packages containing the tours, activities, and sightseeing in PDF format into a designated Google Drive folder.
- **Step 2: Search Folder** - Node: PDF Tour Package Folder (Google Drive). Searches the designated folder for files (filter by MIME type = application/pdf if needed).
- **Step 3: Download PDFs** - Node: Download Package Files (Google Drive). Downloads each matching PDF file found in the previous step.

**Process Each PDF File**
- **Step 4: Loop Through Files** - Node: Loop Over each PDF file. Iterates through each downloaded PDF file to extract, clean, split, and embed.

**Data Preparation & Embedding**
- **Step 5: Data Loader** - Node: Data Loader. Reads each PDF's content using a compatible loader and passes clean raw text to the next node. Often integrated with document loaders like pdf-loader, Unstructured, or pdfplumber.
- **Step 6: Recursive Text Splitter** - Node: Recursive Character Text Splitter. Splits large chunks of text into manageable segments using overlapping window logic (e.g., 500 tokens with a 50-token overlap). This preserves context for long documents during embedding.
- **Step 7: Generate Embeddings** - Node: Embeddings OpenAI. Uses the text-embedding-3-small model to vectorize the split chunks and outputs vector representations for each content chunk.

**Store in Pinecone**
- **Step 8: Pinecone Vector Store** - Node: Pinecone Vector Store - Store. Stores each embedding along with its metadata (source PDF name, chunk ID, etc.). This becomes the basis for fast, semantic search via RAG workflows or agents.

## 🛠️ Tools & Nodes Used
- **Google Drive (Search & Download)**: Searches for all PDF files in a specified Google Drive folder and downloads each file for processing.
- **SplitInBatches (Loop Over Items)**: Loops through each file found in the folder, ensuring each is processed individually.
- **Default Data Loader (LangChain)**: Reads and extracts text from the PDF files.
- **Recursive Character Text Splitter (LangChain)**: Splits the extracted text into manageable chunks for embedding.
- **OpenAI Embeddings (LangChain)**: Converts each text chunk into a vector using OpenAI's embedding model.
- **Pinecone Vector Store (LangChain)**: Stores the resulting vectors in a Pinecone index for fast similarity search and querying.

## 🔗 Workflow Steps Explained
1. **Trigger**: The workflow starts manually for testing or can be scheduled.
2. **Google Drive Search**: Finds all PDF files in the specified folder.
3. **Loop Over Files**: Each file is processed one at a time using the SplitInBatches node.
4. **Download File**: Downloads the current PDF file from Google Drive.
5. **Extract Text**: The Default Data Loader node reads the PDF and extracts its text content.
6. **Text Splitting**: The Recursive Character Text Splitter breaks the text into chunks (e.g., 1000 characters with 50 overlap) to optimize embedding quality (see the sketch at the end of this section).
7. **Vectorization**: Each chunk is sent to the OpenAI Embeddings node to generate vector representations.
8. **Store in Pinecone**: The vectors are inserted into a Pinecone index, making them available for semantic search and recommendations.

## 🚀 What Can Be Improved in the Next Version?
- **Error Handling**: Add error-handling nodes to manage failed downloads or extraction issues gracefully.
- **File Type Filtering**: Ensure only PDF files are processed by adding a filter node.
- **Metadata Storage**: Store additional metadata (e.g., file name, tour ID) alongside vectors in Pinecone for richer search results.
- **Parallel Processing**: Optimize for large folders by processing multiple files in parallel (with care for API rate limits).
- **Automated Triggers**: Replace the manual trigger with a time-based or webhook trigger for full automation.
- **Data Validation**: Add checks to ensure extracted text contains valid tour data before vectorization.
- **User Feedback**: Integrate notifications (e.g., email or Slack) to report when processing is complete or if issues arise.

## 💡 Summary
This workflow demonstrates how n8n can orchestrate a powerful AI data pipeline using Google Drive, LangChain, OpenAI, and Pinecone. It's a great foundation for building intelligent search or recommendation features for travel and tour data. Feel free to ask for more details or share your improvements! Let me know if you want to see a specific part of the workflow or need help with a particular node.
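To make the overlapping-window logic concrete, here is a simplified character-based splitter using the 1000-character chunk size and 50-character overlap mentioned above. The real Recursive Character Text Splitter node also recurses over separators (paragraphs, sentences); this sketch only illustrates the sliding-window idea, not the node's actual implementation.

```javascript
// Simplified illustration of overlapping-window chunking (1000 chars, 50 overlap).
// The real node additionally respects separators such as paragraph breaks;
// this sketch only demonstrates how the overlap preserves context.
function chunkText(text, chunkSize = 1000, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // step back by `overlap` so chunks share context
  }
  return chunks;
}

const sample = 'Tour package: 5 days in Bali, daily excursions included. '.repeat(100);
console.log(chunkText(sample).length); // number of chunks sent on for embedding
```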
by Alex Dunlop
## Who is this for?
Professionals and individuals who receive high volumes of email and want to automatically organize their Gmail inbox using AI classification.

## What problem is this workflow solving?
Manual email sorting is time-consuming and inconsistent. This workflow automatically categorizes incoming emails into 8 predefined labels (To respond, FYI, Comment, Notification, Meeting update, Awaiting reply, Actioned, Marketing) to help maintain inbox zero and prioritize responses.

## What this workflow does
- Monitors Gmail for new incoming emails
- Uses AI to analyze email content and classify it into the appropriate category
- Automatically applies the corresponding Gmail label (a mapping sketch appears at the end of this section)
- Runs on a schedule to process emails consistently

## Setup
### Prerequisites
- n8n instance (cloud or self-hosted)
- Gmail account with API access enabled
- Access to an LLM provider (OpenAI, Anthropic Claude, or similar)

### Step-by-Step
1. Configure Gmail credentials
2. Create Gmail labels
3. Configure the LLM chain
4. Set the email polling schedule
5. Test the workflow

### Create Gmail Labels
Before running the workflow, create these 8 labels in your Gmail account:
- To respond
- FYI
- Comment
- Notification
- Meeting update
- Awaiting reply
- Actioned
- Marketing

## How to customize this workflow to your needs
### Modify Classification Categories
To change the email categories, update two places:

In the AI prompt (Basic LLM Chain node):
- Your new category - Description of what emails fit here
- Another category - Description
- [... continue with your categories]

In Gmail labels: create corresponding labels in your Gmail account with the exact same names and numbering.

### Adjust Classification Rules
The AI prompt contains specific rules for each category. To modify them:
- Edit the "Key classification rules" section in the LLM prompt
- Add examples of emails that should go into each category
- Specify edge cases and how they should be handled

### Change Email Sources
The workflow currently monitors all incoming emails. To filter specific emails, add filters in the Gmail Trigger node, such as:
- from:specific-sender@domain.com
- subject:contains-keyword
- -label:already-processed

You can also change this to use Outlook.

### Modify Polling Frequency
- **More frequent**: Add multiple poll times (e.g., 9 AM, 12 PM, 6 PM)
- **Less frequent**: Change to once daily or weekly
- **Real-time**: Switch to webhook-based triggering (requires Gmail API setup)

I chose daily polling to keep costs down.
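After the LLM answers with a category, it helps to normalize that answer before applying a label. This is a minimal sketch of such a guard in an n8n Code node; the `category` field name and the fallback choice are assumptions, not the template's exact code.

```javascript
// n8n Code node sketch: map the LLM's category answer to one of the 8 labels.
// The `category` field name and the "To respond" fallback are assumptions -
// adjust them to your own prompt output and label setup.
const allowed = [
  'To respond', 'FYI', 'Comment', 'Notification',
  'Meeting update', 'Awaiting reply', 'Actioned', 'Marketing',
];

const category = ($json.category || '').trim();

// Fall back to a safe default so an unexpected answer never breaks the labeling step
const label = allowed.includes(category) ? category : 'To respond';

return [{ json: { ...$json, label } }];
```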
by Lucas Peyrin
> This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works
This workflow demonstrates how to create a resilient AI Agent that automatically falls back to a different language model if the primary one fails. This is useful for handling API errors, rate limits, or model outages without interrupting your process.

1. **State Initialization**: The Agent Variables node initializes a `fail_count` to 0. This counter tracks how many models have been attempted.
2. **Dynamic Model Selection**: The Fallback Models node (a LangChain Code node) acts as a router. It receives a list of all connected AI models and, based on the current `fail_count`, selects which one to use for this attempt (0 for the first model, 1 for the second, etc.). A minimal sketch of this router logic appears after the setup steps.
3. **Agent Execution**: The AI Agent node attempts to run your prompt using the model selected by the router.
4. **The Fallback Loop**:
   - **On Success**: The workflow completes successfully.
   - **On Error**: If the AI Agent node fails, its "On Error" output is triggered. This path loops back to the Agent Variables node, which increments the `fail_count` by 1. The process then repeats, causing the Fallback Models router to select the next model in the list.
5. **Final Failure**: If all connected models are tried and fail, the workflow stops with an error.

## Set up steps
Setup time: ~3-5 minutes

1. **Configure Credentials**: Ensure you have the necessary credentials (e.g., for OpenAI, Google AI) configured in your n8n instance.
2. **Define Your Model Chain**: Add the AI model nodes you want to use to the canvas (e.g., OpenAI, Google Gemini, Anthropic) and connect them to the Fallback Models node. **Important**: The order in which you connect the models determines the fallback order; the model nodes created/connected first will be tried first.
3. **Set Your Prompt**: Open the AI Agent node and enter the prompt you want to execute.
4. **Test**: Run the workflow. To test the fallback logic, temporarily disable the first model node or configure it with invalid credentials to force an error.
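For orientation, here is a minimal sketch of the index-by-failure-count selection the router performs. It assumes the connected models arrive as an array and that `fail_count` can be read from the Agent Variables node; the call and variable names are assumptions based on the LangChain Code node's API, not the template's exact code.

```javascript
// LangChain Code node sketch: pick the model for this attempt from the list
// of connected models, indexed by how many attempts have already failed.
// `getInputConnectionData` returning an array and the `fail_count` lookup
// are assumptions - the template's actual node may differ.
const models = await this.getInputConnectionData('ai_languageModel', 0);
const failCount = $('Agent Variables').first().json.fail_count ?? 0;

if (failCount >= models.length) {
  throw new Error(`All ${models.length} fallback models failed`);
}

// fail_count 0 -> first model, 1 -> second, and so on
return models[failCount];
```

Because the error path loops back and increments `fail_count`, each retry naturally advances one position down the connected-model list.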
by Yang
## What this workflow does
This workflow automatically turns new technical video uploads into short, engaging Facebook post drafts, complete with a suggested image, and saves the results to Google Sheets for quick review or publishing. It's designed to help you repurpose tutorial or demo videos into ready-to-use social content without any manual writing or design effort.

## What problem is this workflow solving?
Manually writing Facebook posts for every new tutorial or product video takes time, especially when you want them to be engaging and consistent. This workflow solves that by using AI to watch for new videos, extract meaningful insights, and write posts and create visuals automatically, saving hours of work.

## Who is this for?
This workflow is ideal for:
- Content creators uploading tutorial videos
- Marketing teams working with how-to or product videos
- Agencies and automation pros building scalable social workflows for clients

## How it works
1. **Trigger**: Starts when a new video is uploaded to a specific Google Drive folder.
2. **Download & Convert**: Downloads the video and converts it to base64 (a one-node sketch follows this section).
3. **Extract Insights**: Dumpling AI analyzes the video and extracts structured insights such as topic, tools mentioned, and key steps.
4. **Generate Post**: GPT-4o creates a short, friendly Facebook post using those insights, along with an image prompt.
5. **Create Visual**: Dumpling AI generates an image using the prompt.
6. **Save to Sheet**: The Facebook post and image URL are saved to a Google Sheet.

## Setup
1. Create a Google Sheet to store the posts and images.
2. Connect your Google Drive, Google Sheets, Dumpling AI, and OpenAI credentials in n8n.
3. Update the workflow with:
   - Your Google Drive folder ID
   - Your target Google Sheet ID
4. (Optional) Edit the prompt used in the GPT node if you want a different tone, style, or structure for the post.

## How to customize the workflow
- **Change the platform**: Replace "Facebook" in the prompt with LinkedIn, Instagram, or another platform.
- **Use a different image tool**: Swap Dumpling AI for any other image generation API (e.g., DALL·E, Midjourney via webhook).
- **Add auto-publishing**: Add a Facebook or social media module to publish the generated post directly instead of just saving to Google Sheets.
- **Tag videos by content type**: Use AI to classify videos into categories and store them in separate tabs or sheets.
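Step 2's base64 conversion can be done in a single n8n Code node. This is a minimal sketch assuming the downloaded video arrives in the binary property `data`; the property name is an assumption, so check your Google Drive node's actual output.

```javascript
// n8n Code node sketch: turn the downloaded video's binary data into a
// base64 string for the Dumpling AI request. The binary property name
// ('data') is an assumption - match it to your Google Drive node output.
const item = $input.item;
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');

item.json.video_base64 = buffer.toString('base64');
return item;
```

Note that base64 inflates payload size by roughly a third, so very large videos may hit API request-size limits.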
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive voice-powered restaurant assistant that handles table reservations, food orders, and restaurant information requests through natural language processing. The system uses VAPI for voice interaction and PostgreSQL for data management, making it perfect for restaurants looking to automate customer service with voice AI technology.

## Good to know
- Voice processing requires an active VAPI subscription with per-minute billing
- Database operations are handled in real time with immediate confirmations
- The system can handle multiple simultaneous voice requests
- All customer data is stored securely in PostgreSQL with proper indexing

## How it works
### Table Booking & Order Handling Workflow
1. Voice requests are captured through VAPI triggers when customers make booking or ordering requests
2. The system processes natural language commands and extracts relevant details (party size, time, food items)
3. Customer data is immediately saved to the bookings and orders tables in PostgreSQL
4. Voice confirmations are sent back through VAPI with booking details and estimated wait times
5. All transactions are logged with timestamps for restaurant management tracking

### Restaurant Info Provider Workflow
1. Info requests trigger when customers ask about hours, menu, location, or services
2. Restaurant details are retrieved from the restaurant_info table containing current information
3. Wait nodes ensure proper data loading before voice response generation
4. Structured restaurant information is delivered via VAPI in a natural, conversational format

## Database Schema
### Bookings Table
- `booking_id` (PRIMARY KEY) - Unique identifier for each reservation
- `customer_name` - Customer's full name
- `phone_number` - Contact number for confirmation
- `party_size` - Number of guests
- `booking_date` - Requested reservation date
- `booking_time` - Requested time slot
- `special_requests` - Dietary restrictions or special occasions
- `status` - Booking status (confirmed, pending, cancelled)
- `created_at` - Timestamp of booking creation

### Orders Table
- `order_id` (PRIMARY KEY) - Unique order identifier
- `customer_name` - Customer's name
- `phone_number` - Contact for order updates
- `order_items` - JSON array of food items and quantities
- `total_amount` - Calculated order total
- `order_type` - Delivery, pickup, or dine-in
- `special_instructions` - Cooking preferences or allergies
- `status` - Order status (received, preparing, ready, delivered)
- `created_at` - Order timestamp

### Restaurant_Info Table
- `info_id` (PRIMARY KEY) - Information entry identifier
- `category` - Type of info (hours, menu, location, contact)
- `title` - Information title
- `description` - Detailed information content
- `is_active` - Whether the info is currently valid
- `updated_at` - Last modification timestamp

## How to use
1. Import the workflow into your n8n instance and configure VAPI credentials
2. Set up the PostgreSQL database with the required tables using the schema above (a hedged SQL sketch follows this section)
3. Configure restaurant information in the restaurant_info table
4. Test voice commands such as "Book a table for 4 people at 7 PM" or "What are your opening hours?"
5. Customize voice responses in VAPI nodes to match your restaurant's tone and branding

The manual trigger can be replaced with webhook triggers for integration with existing restaurant systems, and the system can handle multiple concurrent voice requests, scaling with your restaurant's needs.

## Requirements
- VAPI account for voice processing and natural language understanding
- PostgreSQL database for storing booking, order, and restaurant information
- n8n instance with database and VAPI integrations enabled

## Customising this workflow
- Voice AI automation can be adapted for various restaurant types, from quick service to fine dining establishments
- Try popular use cases such as multi-location booking management, dietary restriction handling, or integration with existing POS systems
- The workflow can be extended to include payment processing, SMS notifications, and third-party delivery platform integration
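The column names below follow the schema described in this template, but the data types, defaults, and constraints are assumptions; review them before running against your own PostgreSQL instance.

```sql
-- Hedged sketch of the three tables described above. Column names follow the
-- template's schema; the types and defaults are assumptions - adjust as needed.
CREATE TABLE bookings (
    booking_id       SERIAL PRIMARY KEY,
    customer_name    TEXT NOT NULL,
    phone_number     TEXT NOT NULL,
    party_size       INTEGER NOT NULL,
    booking_date     DATE NOT NULL,
    booking_time     TIME NOT NULL,
    special_requests TEXT,
    status           TEXT DEFAULT 'pending',   -- confirmed, pending, cancelled
    created_at       TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE orders (
    order_id             SERIAL PRIMARY KEY,
    customer_name        TEXT NOT NULL,
    phone_number         TEXT NOT NULL,
    order_items          JSONB NOT NULL,       -- JSON array of items and quantities
    total_amount         NUMERIC(10, 2),
    order_type           TEXT,                 -- delivery, pickup, or dine-in
    special_instructions TEXT,
    status               TEXT DEFAULT 'received',
    created_at           TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE restaurant_info (
    info_id     SERIAL PRIMARY KEY,
    category    TEXT NOT NULL,                 -- hours, menu, location, contact
    title       TEXT NOT NULL,
    description TEXT NOT NULL,
    is_active   BOOLEAN DEFAULT true,
    updated_at  TIMESTAMPTZ DEFAULT now()
);
```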
by Wyeth
# Encode JSON to Base64 String in n8n

This example workflow demonstrates how to convert a JSON object into a base64-encoded string using n8n's built-in file processing capabilities. This is a common requirement when working with APIs, webhooks, or SaaS integrations that expect payloads to be base64-encoded.

> Tip: The three green-highlighted nodes (Stringify → Convert to File → Extract from File) can be wrapped in a Subworkflow to create a reusable Base64 encoder in your own projects.

## 🔧 Requirements
- Any running n8n instance (local or cloud)
- No credentials or external services required

## What This Workflow Does
1. Generates example JSON data
2. Converts the JSON to a string
3. Saves the string as a binary file
4. Extracts the file's contents as a base64 string
5. Outputs the base64 string on the final node

## Step-by-Step Setup
1. **Manual Trigger** - Start the workflow using the Manual Execution node. This is useful for testing and development.
2. **Create JSON Data** - The Create Json Data node uses raw mode to construct a sample object with all major JSON types: strings, numbers, booleans, nulls, arrays, nested objects, etc.
3. **Convert to String** - The Convert to String node uses the expression `={{ JSON.stringify($json) }}` to flatten the object into a single string field named `json_text`.
4. **Convert to File** - The Convert to File node takes the `json_text` value and saves it to a UTF-8 encoded binary file in the property `encoded_text`.
5. **Extract from File** - This node takes the binary file and extracts its contents as a base64-encoded string. The result is saved in the `base64_text` field.

## Customization Tips
- Replace the sample JSON in the Create Json Data node with your own payload structure.
- To make this reusable, extract the three core nodes into a Subworkflow or wrap them in a custom Function (a single-node alternative is sketched below).
- Use the `base64_text` output field to post to APIs, store in databases, or include in webhook responses.
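If you prefer one node over the three-node chain, the same result can be produced in a single Code node with Node.js's standard Buffer API. This sketch is an alternative, not part of the original template:

```javascript
// One-node alternative to Stringify -> Convert to File -> Extract from File.
// Buffer handles the UTF-8 encoding and base64 conversion in one step.
const base64_text = Buffer.from(JSON.stringify($json), 'utf8').toString('base64');
return [{ json: { base64_text } }];
```

The three-node version is still worth keeping when you want the intermediate binary file (for example, to also upload it somewhere) or to avoid Code nodes entirely.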
by Femi Ad
# Google Sheets to MailChimp Auto-Importer

## Overview
This n8n workflow automatically imports contacts from Google Sheets into your MailChimp mailing list. Perfect for businesses collecting leads through Google Forms, event registrations, or maintaining contact lists in spreadsheets.

## Key Features
- 📊 **Bulk Import**: Process entire Google Sheets at once
- 🔄 **Smart Name Parsing**: Automatically splits full names into first and last names (see the sketch at the end of this section)
- 📱 **Phone Number Support**: Includes phone numbers as merge fields
- ⚡ **Error Resilience**: Continues processing even if individual contacts fail
- 📝 **Import Summary**: Generates a summary of processed contacts

## Prerequisites
Before using this workflow, ensure you have:
- An active n8n instance (self-hosted or cloud)
- A Google account with access to Google Sheets
- A MailChimp account with at least one audience/list created
- Basic understanding of n8n workflows

## Initial Setup
### Step 1: Import the Workflow
1. Copy the workflow JSON
2. In n8n, click "Import from File" or paste the JSON
3. Save the workflow with a meaningful name

### Step 2: Configure Google Sheets Connection
1. Click on the "Get Google Sheet Data" node
2. Click on "Credential to connect with"
3. Select "Create New" and choose "Google Sheets OAuth2"
4. Follow the OAuth flow to authenticate your Google account
5. Save the credentials

### Step 3: Configure MailChimp Connection
1. Click on the "Add to MailChimp" node
2. Click on "Credential to connect with"
3. Select "Create New" and choose "MailChimp OAuth2" or "MailChimp API"
4. For the API method:
   - Log into MailChimp
   - Go to Account → Extras → API keys
   - Generate a new API key
   - Copy and paste it into n8n
5. Save the credentials

### Step 4: Configure Your Specific Settings
**Google Sheets settings:**
1. Open the "Get Google Sheet Data" node
2. Replace `YOUR_GOOGLE_SHEET_ID` with your actual sheet ID. Find this in your Google Sheets URL: `https://docs.google.com/spreadsheets/d/[SHEET_ID]/edit`
3. Replace `YOUR_SHEET_NAME` with your worksheet name (e.g., "Sheet1" or "Form Responses 1")

**MailChimp settings:**
1. Open the "Add to MailChimp" node
2. Replace `YOUR_MAILCHIMP_LIST_ID` with your audience ID. Find this in MailChimp: Audience → Settings → Audience name and defaults
3. Verify the status is set to "subscribed"

## Google Sheets Format Requirements
Your Google Sheet must have the following columns (exact names):
- **Names**: Full name of the contact (e.g., "John Doe")
- **Email address**: Valid email address
- **Phone Number**: Contact phone number (optional)

Example:

| Names | Email address | Phone Number |
|-------|---------------|--------------|
| John Doe | john@example.com | +1234567890 |
| Jane Smith | jane@example.com | +0987654321 |

## How to Use
**Manual execution:**
1. Open the workflow in n8n
2. Click "Execute Workflow"
3. Monitor the execution progress
4. Check the output of "Create Import Summary" for results

**Scheduling (optional):** To run this automatically:
1. Replace the "Manual Trigger" node with a "Schedule Trigger" node
2. Set your desired schedule (e.g., daily at 9 AM)
3. Activate the workflow

## Customization Options
**Adding more fields:** To include additional fields like company name or address:
1. Add columns to your Google Sheet
2. Modify the "Edit Fields" node to include the new fields
3. Update the "Format Subscriber Data" code to map the new fields
4. Add corresponding merge fields in the MailChimp node

**Handling duplicates:** The workflow uses "continueRegularOutput" error handling, which means:
- Existing subscribers will be skipped
- New subscribers will be added
- The workflow continues processing

**Adding email notifications:** To receive import summaries via email:
1. Add a Gmail or Email node after "Create Import Summary"
2. Configure it with your email settings
3. Use the import summary data in the email body

## Troubleshooting
Common issues:
- **"Invalid API Key" (MailChimp)**: Verify your API key is correct and check that your MailChimp account is active
- **"Sheet not found" (Google Sheets)**: Verify the sheet ID is correct and ensure the service account has access to the sheet
- **"Email already exists" errors**: This is normal for existing subscribers; the workflow will continue processing other contacts
- **Missing data in MailChimp**: Check that column names match exactly (case-sensitive) and verify the data exists in the Google Sheet

## Best Practices
1. **Test first**: Always test with a small dataset first
2. **Back up data**: Export your MailChimp list before large imports
3. **Clean data**: Ensure email addresses are valid before importing
4. **Monitor regularly**: Check import summaries for any issues
5. **Respect privacy**: Only import contacts who have consented to receive emails

## Support
For issues specific to:
- **n8n platform**: Visit the n8n Community Forum
- **Google Sheets API**: Check the Google Developers Documentation
- **MailChimp API**: See the MailChimp API Documentation

Need help customizing? Contact me for consulting and support, or add me on LinkedIn: https://www.linkedin.com/in/femi-adedayo-h44/

## License
This workflow template is provided free for personal and commercial use. Feel free to modify and share!
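To illustrate the name-splitting idea behind "Smart Name Parsing", here is a minimal Code node sketch. It is an illustration of the approach, not the template's exact "Format Subscriber Data" code; the column names match the sheet format above, and FNAME/LNAME/PHONE are MailChimp's standard merge tags.

```javascript
// n8n Code node sketch: split "Names" into first/last and shape the data
// for MailChimp. Illustrative only - not the template's exact code.
return $input.all().map((item) => {
  const fullName = (item.json['Names'] || '').trim();
  const parts = fullName.split(/\s+/);

  const firstName = parts[0] || '';
  const lastName = parts.slice(1).join(' '); // everything after the first word

  return {
    json: {
      email: item.json['Email address'],
      merge_fields: {
        FNAME: firstName,
        LNAME: lastName,
        PHONE: item.json['Phone Number'] || '',
      },
      status: 'subscribed',
    },
  };
});
```

Treating everything after the first word as the last name keeps multi-part surnames like "van der Berg" intact.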
by Jonathan | NEX
# Supercharge Your Security Operations for Free

Stop wasting time manually investigating suspicious IP addresses. This workflow template is your launchpad to automating real-time IP cybersecurity analysis using the NixGuard platform, which you can use for free.

This is the first of a two-part system designed to integrate seamlessly into your existing security stack, especially with Wazuh. It calls our main workflow, Automate IP Reputation Checks and Get AI Risk Summaries from NixGuard, to do the heavy lifting.

## What This Workflow Unlocks for You
- **Free AI-Powered Risk Summaries**: Don't just get data; get answers. NixGuard provides a clear, human-readable summary of why an IP is considered risky.
- **Automated IP Reputation Checks**: Programmatically check any IP against a vast array of threat intelligence sources.
- **A Foundation for Your SOC Automation**: Use the results to trigger your incident response process. The template includes a pre-built example of how to send a detailed alert to Slack, which you can easily adapt for Jira, TheHive, or any other tool.

## How the Two-Workflow System Works
This "Dispatcher" workflow is designed for flexibility. It holds your API key and input, then calls the main analysis workflow. This allows you to easily create multiple triggers (e.g., one for Slack bots, one for webhooks) without duplicating the core logic (an illustrative payload sketch follows this section).

## Critical Setup Instructions
1. **Get the Main Workflow**: First, add the main analysis engine to your n8n instance from the community page: NixGuard Analysis Workflow.
2. **Add Your Free API Key**: In this workflow, click the blue Set API Key & Initial Prompt node and paste your free NixGuard API key into the `apiKey` value field.
3. **Connect the Workflows**: Click the purple Execute NixGuard & Wazuh Workflow node. In the parameters, use the dropdown to select the main analysis workflow you added in Step 1.

Ready to automate your threat intelligence? Get your free API key and learn more:
🔗 Learn more about NixGuard: thenex.world
🔗 Get started with a free security subscription: thenex.world/security/subscribe

Tags: Free, IP Analysis, NixGuard, Wazuh, Security, Automation, AI, Cybersecurity, Threat Intelligence, SOC, Incident Response, IP Reputation, DevSecOps, API
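For orientation, this is a sketch of the kind of item the dispatcher hands to the main analysis workflow. The `apiKey` and `prompt` field names follow the Set node described above, but the exact structure is an assumption; inspect the Set API Key & Initial Prompt node for the real fields.

```javascript
// Illustrative shape of the data passed from the dispatcher to the main
// NixGuard analysis workflow. Field names are assumptions based on the
// Set node described above - verify against the actual template.
return [{
  json: {
    apiKey: 'YOUR_NIXGUARD_API_KEY', // set in the "Set API Key & Initial Prompt" node
    prompt: 'Check the reputation of IP 203.0.113.42 and summarize the risk',
  },
}];
```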
by bangank36
This workflow restores all n8n instance workflows from GitHub backups using the n8n API node. It complements the Backup Your Workflows to GitHub template by allowing users to seamlessly restore previously saved workflows.

## How It Works
The workflow fetches workflows stored in a GitHub repository and imports them into your n8n instance (a plain-JavaScript sketch of this flow appears at the end of this section).

## Setup Instructions
To configure the workflow, update the Globals node with the following values:
- **repo.owner** - Your GitHub username
- **repo.name** - The name of your GitHub repository storing the workflows
- **repo.path** - The folder path within the repository where workflows are stored

For example, if your GitHub username is `john-doe`, your repository is named `n8n-backups`, and workflows are stored in a `workflows/` folder, you would set:
- `repo.owner` → `john-doe`
- `repo.name` → `n8n-backups`
- `repo.path` → `workflows/`

## Required Credentials
- **GitHub API** - Access to your repository
- **n8n API** - To import workflows into your n8n instance

## Who Is This For?
This template is ideal for users who want to restore their workflows from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
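The template itself uses the GitHub and n8n API nodes, but the underlying flow is easy to see in plain JavaScript. This hedged sketch lists the backup files via the public GitHub contents API and creates each workflow via n8n's REST API; the instance URL and credentials are placeholders you would supply.

```javascript
// Hedged sketch of the restore flow: list backup files in the GitHub repo,
// download each workflow JSON, and create it via the n8n REST API.
// The template uses the GitHub and n8n nodes; this is only an illustration.
const GITHUB = { owner: 'john-doe', repo: 'n8n-backups', path: 'workflows' };
const N8N = { baseUrl: 'https://your-n8n-instance', apiKey: 'YOUR_N8N_API_KEY' };

const files = await fetch(
  `https://api.github.com/repos/${GITHUB.owner}/${GITHUB.repo}/contents/${GITHUB.path}`
).then((r) => r.json());

for (const file of files.filter((f) => f.name.endsWith('.json'))) {
  const workflow = await fetch(file.download_url).then((r) => r.json());

  await fetch(`${N8N.baseUrl}/api/v1/workflows`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-N8N-API-KEY': N8N.apiKey,
    },
    body: JSON.stringify(workflow),
  });
}
```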
by Yar Malik (Asfandyar)
## Intro
This template is for project managers, team leads, or anyone who wants to automatically remind teammates of tasks due today - no manual copy-and-paste required.

## How it works
1. **Schedule Trigger** runs every morning at 8 AM.
2. **Google Sheets** node reads your "Tasks" sheet.
3. **If** node filters rows where Due Date = today (a sketch of this check appears at the end of this section).
4. **Summarize (ChatGPT HTTP Request)** generates a friendly reminder per person.
5. **Message a model** sends the prompt to your ChatGPT Assistant and returns the AI response.
6. **Send a message (Gmail)** emails each assignee their personalized reminder.

## Required Google Sheet Structure

| Column Name | Type | Example | Notes |
|-------------|--------|---------------------------|-------------------------|
| Name | string | Alice Johnson | Person to remind |
| Email | string | user@example.com | Recipient email address |
| Task | string | Submit quarterly report | Task description |
| Due Date | date | 2025-07-29 | Format: YYYY-MM-DD |

## Detailed Setup Steps
1. **Google Sheets**
   - Create your sheet with the columns above.
   - In n8n → Credentials, add Google Sheets API (do not include real sheet IDs in the name).
2. **ChatGPT Assistant**
   - In the OpenAI Dashboard → Assistants, click Create Assistant.
   - Choose a model (e.g., gpt-4) and copy the Assistant ID.
   - In n8n → Credentials → OpenAI, add your API Key and Assistant ID.
3. **Gmail**
   - In n8n → Credentials → Gmail (OAuth2 or SMTP), connect your account without embedding your real address in the credential name.
4. **Import & Configure**
   - Export this workflow's JSON (three-dot menu → Export).
   - Paste it under Template Code in the Creator form.
   - In each node, select your Google Sheets, OpenAI, and Gmail credentials.

## Sticky Notes
- A note on the Schedule node: "Set your desired run time."
- A note on the ChatGPT node: "Customizes reminder text."
- A note on the Gmail node: "Sends reminder email."

## Customization Guidance
- **Change schedule**: edit the Cron expression in the Schedule Trigger.
- **Adjust tone**: modify the system prompt in your ChatGPT Assistant.
- **Email format**: update the Subject and Body in the Gmail node.
- **Batch processing**: insert a SplitInBatches node before Summarize for large sheets.

## Troubleshooting
- Ensure your Google Sheet is shared with the connected service account.
- Verify the Due Date format (YYYY-MM-DD).
- If ChatGPT fails, check your API key and quota.

## Security & Best Practices
- **Do not** hard-code API keys, sheet IDs, or real emails.
- Use n8n Credentials or environment variables only.
- Remove any private information before submitting.
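The "Due Date = today" check in step 3 can also be written as a small Code node if you prefer that over the If node. This sketch assumes the sheet columns above and compares dates as YYYY-MM-DD strings, which is why keeping the sheet's Due Date in that exact format matters.

```javascript
// n8n Code node sketch of the "Due Date = today" filter used by the If node.
// Column names match the sheet structure above; dates are compared as
// YYYY-MM-DD strings, so keep the sheet's Due Date in that format.
const today = new Date().toISOString().slice(0, 10); // e.g. "2025-07-29"

return $input.all().filter((item) => item.json['Due Date'] === today);
```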