by The O Suite
This n8n workflow automates website security audits. It combines direct website scanning, threat intelligence from AlienVault OTX, and advanced analysis from an OpenAI large language model (LLM) to generate and email a comprehensive security report.

How it Works (Workflow Flow)
1. Input: A user provides a website URL via a simple web form.
2. Data Collection: An HTTP Request node visits the provided URL to gather initial data (status code, headers). An AlienVault HTTP Request node queries AlienVault OTX for known threats associated with the website's hostname.
3. Data Preparation (Prepare Data for AI): A custom Code node consolidates the collected website data and AlienVault intelligence, performing initial checks for common issues such as error codes, missing security headers, and AlienVault warnings (see the sketch after this list).
4. AI Analysis (Security Configuration Audit): The prepared data is sent to an OpenAI Chat Model, which acts as a cybersecurity expert. The AI analyzes the data to identify vulnerabilities, explain their impact, suggest exploitation methods, and outline mitigation steps.
5. Report Formatting (Format Report for Email): Another custom Code node converts the AI's plain-text report into a structured HTML format suitable for email.
6. Delivery (Send Security Report): The final HTML report is sent via Gmail to a specified email address.

Setup Steps
To use this workflow, you'll need an n8n instance and the following credentials:
1. n8n Instance: Ensure your n8n environment is running.
2. OpenAI API Key: Generate a key from OpenAI, then add an "OpenAI API" credential in n8n (e.g., "OpenAI account").
3. AlienVault OTX API Key: Obtain a key from your AlienVault OTX profile, then add an "AlienVault OTX API" credential in n8n (e.g., "AlienVault account").
4. Gmail Account: Set up a "Gmail OAuth2" credential in n8n for sending emails (recommended for security; involves Google Cloud setup).
5. Import Workflow: Copy the workflow's JSON code. In n8n, import it via "Workflows" > "New" > "Import from JSON".
6. Configure Recipient: In the "Send Security Report" node, specify the email address where reports should be sent.
7. Activate: Enable the workflow to start processing submissions. Once activated, access the "On form submission" webhook URL to input a URL and trigger an audit.
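A minimal sketch of what the "Prepare Data for AI" Code node might look like. The node names and field paths (`statusCode`, `headers`, `pulse_info`) are assumptions based on typical HTTP Request and OTX responses; adjust them to match your actual node output:

```js
// n8n Code node (JavaScript) — illustrative only; node names and field
// paths are assumptions about the upstream HTTP Request nodes.
const site = $('HTTP Request').first().json;            // website scan result
const otx = $('AlienVault HTTP Request').first().json;  // OTX hostname lookup

const findings = [];

// Flag HTTP error responses
if (site.statusCode >= 400) {
  findings.push(`Site returned error status ${site.statusCode}`);
}

// Check for commonly recommended security headers
const headers = site.headers ?? {};
for (const h of ['strict-transport-security', 'content-security-policy', 'x-frame-options']) {
  if (!headers[h]) findings.push(`Missing security header: ${h}`);
}

// Surface AlienVault OTX pulse matches, if any
if ((otx.pulse_info?.count ?? 0) > 0) {
  findings.push(`AlienVault OTX reports ${otx.pulse_info.count} threat pulse(s) for this host`);
}

return [{ json: { site, otx, findings } }];
```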
by James Francis
Overview
Slack quietly released an update to their API that allows developers to build "AI Apps & Agents", a special classification of apps with access to several special capabilities, including:
- Multiple simultaneous chat threads with one user
- A loading "three dots" UI while your agent is thinking
- The option for users to pin your app to their top bar for quick chat access

This workflow demonstrates how to build a Slack agent that takes advantage of all of these features. For a full video walkthrough of this workflow, watch this YouTube tutorial.

Setup Instructions
All of the steps below are required for this workflow to function properly unless otherwise noted.

Create a Slack App
1. Visit api.slack.com and click "Your Apps".
2. Create a new app from scratch and follow the setup instructions.
3. In the Agents & AI Apps tab, enable the toggle and give your app a brief description.
4. In the OAuth & Permissions tab, enable the following bot token scopes: assistant:write, chat:write, channels:read, im:history.
5. Install the app into your workspace and grant the requested permissions.
6. In your Slack workspace, right-click your app's name in the sidebar, click "View app details", and make note of your app's Channel ID — you'll need this later.
7. Copy your app's Bot User OAuth Token — you'll need it to create your n8n credentials.
8. In the Event Subscriptions tab, enable events and paste the workflow's PRODUCTION webhook URL (from this workflow's trigger node) into the input. In the same tab under "Subscribe to bot events", select message.im.

Create a Postgres database
To save the chat history and give your agent a working memory, you'll need your own Postgres database. You can use Supabase, Neon, or any other Postgres database provider. Once you've added your database's credentials to n8n, you can select them in the Postgres Chat Memory node. This workflow saves all chat history in a table called chat_histories, but you can name the table whatever you want.

Create n8n Credentials
You'll need to create the following credentials:
- Slack API: use your Bot User OAuth Token referenced above.
- Bearer Auth: use the same Bot User OAuth Token.
- Postgres: use the connection string or config from your database provider.
- OpenRouter (or any other LLM provider for the agent's model node).

Wire Everything Up
Now that you've created your Slack app, have your Postgres database, and have created credentials, follow these steps to wire up your workflow:
1. In the "On Message Received" trigger, use your Slack API credential and enter your app's Channel ID in the "Channel To Watch" field.
2. In the "Set Thinking Status" node, use your Bearer Auth credential (the sketch below shows the underlying API call).
3. In the "Postgres Chat Memory" node, use your Postgres credential.
4. In the "Send Reply" node, use your Slack API credential.

Using the Chatbot
Once you've completed the setup process and added your credentials, you'll have a fully functional Slack chatbot complete with threads, loading UI, and the ability to pin your app to your workspace's top bar.

Taking the Next Steps
Now that this skeleton app is in place, it's up to you to add horsepower to the AI agent at the center of it all. Customize the prompts and add whatever tools you'd like. The sky is the limit! If you have any questions or feedback about this workflow, or would like me to build custom workflows for your business, email me at n8n@paperjam.agency.
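For reference, the "Set Thinking Status" node boils down to a call to Slack's `assistant.threads.setStatus` method. A minimal sketch, assuming `channelId` and `threadTs` have been pulled from the trigger payload (the exact property paths depend on your trigger node's output):

```js
// Illustrative only — channelId/threadTs come from the trigger payload.
const res = await fetch('https://slack.com/api/assistant.threads.setStatus', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`, // Bot User OAuth Token
  },
  body: JSON.stringify({
    channel_id: channelId,    // the app's DM channel with the user
    thread_ts: threadTs,      // the thread the user is chatting in
    status: 'is thinking...', // rendered as the "three dots" loading UI
  }),
});
console.log(await res.json()); // { ok: true } on success
```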
by Yang
Who is this for?
This workflow is for digital marketers, small business owners, lead generation agencies, and VAs who need a scalable way to find and store local business leads using AI. It's especially useful for teams that want to enrich leads with real-time news insights and save the structured data to Airtable.

What problem is this workflow solving?
Manually researching local businesses and staying up to date with relevant news is time-consuming and inefficient. This automation eliminates that burden by using Dumpling AI chat agents to generate leads and context, GPT-4o to summarize, and Airtable to store everything in one place.

What this workflow does
When triggered, the workflow executes the following steps:
1. Extracts local business leads using a Local Business Agent from Dumpling AI.
2. Pulls current news related to the business type or location using a News Agent from Dumpling AI.
3. Uses GPT-4o to combine both responses into a human-readable summary.
4. Extracts structured lead data like name, category, and city.
5. Saves the summary and lead data into Airtable for easy follow-up.

Setup
1. Create AI Agents in Dumpling AI
   - Sign in at Dumpling AI and create two separate agents:
     - Local Business Agent: designed to respond with structured lists of businesses by location and category.
     - News Agent: designed to fetch relevant recent news and summaries about a specific industry or region.
   - After setting up each agent, copy the Agent Key from Dumpling AI. These keys are required in the headers of your HTTP Request nodes in n8n (see the sketch after this section).
2. Chat Trigger
   - This workflow begins with the When chat message is received trigger inside n8n. This makes it easy to test and reuse, especially during setup.
3. Get Local Business Data from Dumpling AI
   - The first HTTP Request node sends a prompt like "List 5 top real estate companies in Atlanta with full address and services."
   - Include your Local Business Agent Key in the x-agent-key header. The response returns a structured list of business leads.
4. Get News Context from Dumpling AI
   - The second HTTP Request node sends a prompt such as "Give me the latest news related to the real estate market in Atlanta."
   - Use your News Agent Key in the header. This fetches a brief set of recent news summaries relevant to the businesses being researched.
5. Use GPT-4o to Merge and Summarize
   - The GPT node combines the list of businesses and news into one coherent summary. You can modify the prompt to output in paragraph format, bullet points, or structured notes.
6. Save Lead to Airtable
   - The Airtable node sends all structured fields into your selected base and table. Be sure to connect your Airtable account and confirm the columns match exactly.

How to customize this workflow
- Replace the prompt inside the HTTP node to focus on different types of businesses or cities.
- Expand the GPT output to include additional lead info like websites, phone numbers, or emails if the agent includes them.
- Add a webhook trigger to allow this flow to be run via a chatbot, external app, or button.
- Link to HubSpot or another CRM to sync the leads automatically.
- Duplicate the process to run for multiple industries in parallel.

Final Notes
- You must create and configure your Dumpling AI agents before running this workflow.
- The Agent Keys from Dumpling AI are required in both HTTP Request nodes.
- This flow is modular and flexible, ready for deeper CRM integrations.
- The chat trigger is great for testing, but you can add a Webhook node to automate it.
This workflow helps you launch an intelligent lead gen process that combines location-targeted business discovery, AI-generated insights, and structured CRM-friendly output, all powered by Dumpling AI and OpenAI.
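A minimal sketch of the call shape the HTTP Request nodes make. The endpoint URL and response shape are placeholders — copy the real endpoint from your Dumpling AI agent's settings page; only the `x-agent-key` header usage is taken from the description above:

```js
// Illustrative only — DUMPLING_AGENT_URL is a placeholder; use the real
// endpoint from your Dumpling AI agent's settings.
const res = await fetch(process.env.DUMPLING_AGENT_URL, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-agent-key': process.env.DUMPLING_AGENT_KEY, // Local Business or News Agent Key
  },
  body: JSON.stringify({
    message: 'List 5 top real estate companies in Atlanta with full address and services.',
  }),
});
const data = await res.json();
console.log(data); // structured list of leads — shape depends on your agent's configuration
```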
by NovaNode
Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering.

What problem is this workflow solving?
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI.

What these workflows do
Workflow 1: Document Ingestion & Indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-Powered Query & Response
- Listens for incoming user questions (can be extended to a webhook).
- Converts questions to vector embeddings and performs a similarity search on the MongoDB vector store.
- Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce direct, context-aware answers.
- Maintains short-term conversation context using a memory buffer node.

Setup
Setting up vector embeddings
1. Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference, and the query sketch that follows it.

Setting up chat
1. Configure the AI system prompt in the "Knowledge Base Agent" node to reflect your company's tone, answering style, and any business rules.
2. Update the workflow description and instructions to help users understand the chat's purpose and capabilities.
3. Connect the MongoDB collection used for vector search in the chat workflow, and update the vector search index if needed to match your setup.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing vector data,
- relevant metadata fields (e.g., document ID, source), and
- the same vector index name configured (e.g., data_index).

Search Index Example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
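For reference, a similarity query against an index with the knnVector mapping above uses Atlas Search's `$search` stage with the `knnBeta` operator. A minimal sketch using the Node.js MongoDB driver — the database and collection names here are assumptions; `data_index` matches the index name configured in n8n:

```js
// Illustrative only — db/collection names are assumptions.
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_URI);
const docs = client.db('support').collection('product_docs');

// queryEmbedding: a 1536-dimension vector from OpenAI's embedding model
async function findSimilarChunks(queryEmbedding) {
  return docs.aggregate([
    {
      $search: {
        index: 'data_index',
        knnBeta: { vector: queryEmbedding, path: 'embedding', k: 5 },
      },
    },
    // Return the chunk text plus metadata and the similarity score
    { $project: { text: 1, source: 1, doc_id: 1, score: { $meta: 'searchScore' } } },
  ]).toArray();
}
```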
by Yaron Been
**Workflow Overview**
This n8n automation is a market research and intelligence gathering tool designed to transform web content discovery into actionable insights. By combining web crawling, AI-powered filtering, and smart summarization, this workflow:

1. Discovers Relevant Content: automatically crawls target websites, identifies trending topics, and extracts comprehensive article details.
2. Intelligent Content Filtering: applies custom keyword matching, filters for the most relevant articles, and ensures high-quality information capture.
3. AI-Powered Summarization: generates concise, meaningful summaries, extracts key insights, and provides quick, digestible information.
4. Seamless Delivery: sends summaries directly to Slack, enabling instant team communication and rapid information sharing.

**Key Benefits**
- 🤖 Full Automation: Continuous market intelligence
- 💡 Smart Filtering: Precision content discovery
- 📊 AI-Powered Insights: Intelligent summarization
- 🚀 Instant Delivery: Real-time team updates

**Workflow Architecture**
🔹 Stage 1: Content Discovery
- **Scheduled Trigger**: Daily market research
- **FireCrawl Integration**: Web content crawling
- **Comprehensive Site Scanning**: Extracts article metadata, captures full article content, identifies key information sources

🔹 Stage 2: Intelligent Filtering
- **Keyword-Based Matching** (see the sketch at the end of this overview)
- **Relevance Assessment**
- **Custom Domain Optimization**: AI and technology focus, startup and innovation tracking

🔹 Stage 3: AI Summarization
- **OpenAI GPT Integration**
- **Contextual Understanding**
- **Concise Insight Generation**: 3-point summary format, captures essential information

🔹 Stage 4: Team Notification
- **Slack Integration**
- **Instant Information Sharing**
- **Formatted Insight Delivery**

**Potential Use Cases**
- **Market Research Teams**: Trend tracking
- **Innovation Departments**: Technology monitoring
- **Startup Ecosystems**: Competitive intelligence
- **Product Management**: Industry insights
- **Strategic Planning**: Rapid information gathering

**Setup Requirements**
- FireCrawl API: web crawling credentials, configured crawling parameters
- OpenAI API: GPT model access, summarization configuration, API key management
- Slack Workspace: channel for insights delivery, appropriate app permissions, webhook configuration
- n8n Installation: cloud or self-hosted instance, workflow configuration, API credential management

**Future Enhancement Suggestions**
- 🤖 Multi-source crawling
- 📊 Advanced sentiment analysis
- 🔔 Customizable alert mechanisms
- 🌐 Expanded topic tracking
- 🧠 Machine learning refinement

**Technical Considerations**
- Implement robust error handling
- Use exponential backoff for API calls
- Maintain flexible crawling strategies
- Ensure compliance with website terms of service

**Ethical Guidelines**
- Respect content creator rights
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

**Workflow Visualization**
[Daily Trigger] ⬇️ [Web Crawling] ⬇️ [Content Filtering] ⬇️ [AI Summarization] ⬇️ [Slack Delivery]

**Connect With Me**
Ready to revolutionize your market research?
📧 Email: Yaron@nofluff.online
🎥 YouTube: @YaronBeen
💼 LinkedIn: Yaron Been

Transform your information gathering with intelligent, automated workflows!

#AIResearch #MarketIntelligence #AutomatedInsights #TechTrends #WebCrawling #AIMarketing #InnovationTracking #BusinessIntelligence #DataAutomation #TechNews
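A minimal sketch of what the keyword-based filtering stage could look like in an n8n Code node ("Run Once for All Items" mode). The keyword list and the article field names (`title`, `content`) are assumptions — adjust them to match the FireCrawl output:

```js
// Illustrative only — field names and keywords are assumptions.
const keywords = ['ai', 'startup', 'machine learning', 'innovation'];

return items.filter(item => {
  const text = `${item.json.title ?? ''} ${item.json.content ?? ''}`.toLowerCase();
  // Keep articles that mention at least one tracked keyword
  return keywords.some(k => text.includes(k));
});
```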
by ist00dent
This n8n template empowers you to instantly fetch a list of public holidays for any given year and country using the Nager.Date API. This is incredibly useful for scheduling, planning, or integrating holiday data into various business and personal automation workflows.

🔧 How it works
1. Receive Holiday Request Webhook: This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing the year (e.g., 2025) and countryCode (e.g., US for the United States, PH for the Philippines, DE for Germany) for which you want to retrieve public holidays.
2. Get Public Holidays: This node makes an HTTP GET request to the Nager.Date API (date.nager.at). It dynamically uses the year and countryCode from your webhook request to query the API. The API responds with a JSON array, where each object represents a public holiday with details like its date, name, and type.
3. Respond with Holiday Data: This node sends the full list of public holidays received from Nager.Date back to the service that initiated the webhook.

👤 Who is it for?
This workflow is ideal for:
- Businesses with International Operations: Automatically check holidays for different country branches to adjust production schedules, customer service hours, or delivery estimates.
- HR & Payroll Departments: Accurately calculate workdays, plan leave schedules, or process payroll taking public holidays into account.
- Event Planners: Avoid scheduling events on public holidays, which could impact attendance or venue availability.
- Travel Agencies: Inform clients about holidays in their destination country that might affect local business hours or attractions.
- Content & Social Media Schedulers: Plan content around national holidays to maximize engagement or avoid insensitive postings.
- Personal Productivity & Travel Planning: Integrate holiday data into your calendar or task management tools to plan trips or personal time off more effectively.
- Developers: Easily integrate a reliable source of public holiday data into custom applications, dashboards, or internal tools without managing complex datasets.

📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows (countryCode examples: "US", "DE", "GB", etc.):

```json
{
  "year": 2025,
  "countryCode": "PH"
}
```

You can find a comprehensive list of supported country codes in the Nager.Date documentation: https://www.nager.at/Country

The workflow returns a JSON array, where each element is a holiday object, like this example for a single holiday:

```json
[
  {
    "date": "2025-01-01",
    "localName": "New Year's Day",
    "name": "New Year's Day",
    "countryCode": "PH",
    "fixed": true,
    "global": true,
    "counties": null,
    "launchYear": null,
    "types": ["Public"]
  }
]
```

⚙️ Setup Instructions
1. Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Receive Holiday Request Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /public-holidays).
3. Activate Workflow: Save and activate the workflow.

📝 Tips
This workflow is a foundation for many powerful automations:
- Conditional Branching for Specific Holidays: Add an IF node after "Get Public Holidays" to check for a specific holiday (e.g., "Christmas Day"). You can then trigger different actions (e.g., send a reminder, adjust a schedule) only for that particular holiday.
- Filtering and Aggregating Data: Use a Filter node to keep only holidays of a certain type (e.g., "Public"). Use a Code or Function node to count the number of public holidays, or extract just the names and dates into a simpler list (see the sketch after this section).
- Storing Holiday Data: Google Sheets/Airtable — automatically append new holidays to a spreadsheet for easy reference or further analysis. Database — store holiday data in a database (like PostgreSQL or MySQL) to build a custom holiday calendar application.
- Scheduling and Reminders: Connect this workflow to a Cron or Schedule node to run periodically (e.g., once a year at the start of the year). Use the retrieved holiday dates to set up reminders in your calendar (Google Calendar node) or send notifications (Slack, Email, SMS) a few days before an upcoming holiday.
- Integrate with Business Logic: Employee leave management — cross-reference employee leave requests with public holidays to ensure accuracy. Automated messages — schedule automated "Happy Holiday" messages to customers or employees. E-commerce shipping — adjust estimated shipping times based on upcoming non-working days.
- API Key: Not needed — the Nager.Date API used here does not require an API key for basic public holiday lookups, which makes this template very easy to use out of the box.
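A minimal sketch of the filtering-and-counting tip above, calling the Nager.Date v3 endpoint directly and reducing the response to names and dates:

```js
// Fetch public holidays for a year/country from Nager.Date (no API key needed)
const res = await fetch('https://date.nager.at/api/v3/PublicHolidays/2025/PH');
const holidays = await res.json();

// Keep only holidays of type "Public", then reduce to a simple list
const publicOnly = holidays.filter(h => h.types.includes('Public'));
const simple = publicOnly.map(h => ({ date: h.date, name: h.name }));

console.log(`${publicOnly.length} public holidays`, simple);
```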
by Fan Luo
Auto-Share YouTube Videos with AI-Generated Posts to Facebook, X and Notify in Discord

This n8n template demonstrates how to use an LLM like DeepSeek to generate a post and share it to a Facebook page and X automatically whenever a new video is published to a YouTube channel.

How it works
1. An RSS trigger with a polling schedule pulls new YouTube videos from a specified channel.
2. An AI agent is prompted to generate a post with the proper URL and hashtags based on the video metadata.
3. The workflow then automatically creates a new post on Facebook and X via their APIs.
4. Finally, it posts a message to a Discord channel via webhook (see the sketch below).

How to use
Simply set up the RSS polling trigger to automatically trigger the workflow.

Requirements
- Facebook API setup, see step-by-step tutorials
- X v2 API setup, see step-by-step tutorials
- Discord channel webhook, see step-by-step tutorials

Need Help?
Contact me via My Blog or ask in the Forum! Happy Hacking!
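For reference, the Discord notification step is just a POST to the channel's incoming webhook URL — a minimal sketch (the message text is illustrative):

```js
// Post a message to a Discord channel via an incoming webhook
await fetch(process.env.DISCORD_WEBHOOK_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    content: '📺 New video published: https://youtu.be/VIDEO_ID', // illustrative
  }),
});
```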
by n8n Team
This workflow creates a GitHub issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in GitHub.

Prerequisites
- Zendesk account and Zendesk credentials.
- GitHub account and GitHub credentials.
- GitHub repository to create issues in.

How it works
1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new issue in GitHub. The GitHub issue number is then saved in one of the ticket's fields (in setup we call this "GitHub Issue Number").
3. The next time a comment is added to the ticket, the workflow retrieves the GitHub issue number from the ticket's field and adds the comment to the issue in GitHub (see the sketch at the end of this section for the underlying API call).

Setup
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the “Endpoint URL” field, and the “Request method” should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as “New tickets”.
7. Under “Conditions” in “Meet ALL of the following conditions”, add “Status is New”.
8. Under “Actions”, select “Notify active webhook” and select the webhook you created previously. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

9. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the GitHub issue number. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as “GitHub Issue Number”.
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
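For reference, adding the comment to the issue boils down to GitHub's REST endpoint for issue comments. A minimal sketch — OWNER/REPO are placeholders for your repository, and the comment body is illustrative:

```js
// Add a comment to an existing GitHub issue (OWNER/REPO are placeholders)
const issueNumber = 42; // read from the Zendesk "GitHub Issue Number" field
await fetch(`https://api.github.com/repos/OWNER/REPO/issues/${issueNumber}/comments`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    Accept: 'application/vnd.github+json',
    'User-Agent': 'n8n-zendesk-sync', // GitHub's API requires a User-Agent header
  },
  body: JSON.stringify({ body: 'Latest comment from the Zendesk ticket.' }), // illustrative
});
```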
by Oneclick AI Squad
An AI-powered email marketing automation workflow that generates personalized marketing emails using data from Google Sheets and delivers them directly to clients. This workflow combines the power of AI content generation with spreadsheet-based campaign management for seamless email marketing automation.

What's the Goal?
- Automatically pull marketing offer details from Google Sheets (Sheet 1)
- Fetch client information from Google Sheets (Sheet 2)
- Use AI to generate compelling, personalized marketing content
- Format emails with professional structure and personalization
- Send targeted marketing emails directly to clients
- Enable scalable email marketing campaigns with minimal manual effort

By the end, you'll have a fully automated email marketing system that creates and sends personalized campaigns based on your spreadsheet data.

Why Does It Matter?
Manual email marketing is labor-intensive and lacks personalization at scale. Here's why this workflow is a game changer:
- **Zero Manual Drafting**: AI generates unique content for each recipient
- **Data-Driven Personalization**: Leverages spreadsheet data for targeted messaging
- **Scalable Campaigns**: Handle hundreds of clients with a single workflow execution
- **Consistent Quality**: AI ensures professional, engaging content every time
- **Time Efficiency**: Transform hours of work into minutes of automation
- **Cost-Effective**: Reduce marketing team workload while increasing output

Think of it as your intelligent marketing assistant that creates personalized campaigns at enterprise scale.

How It Works
Here's the step-by-step process behind the automation:

Step 1: Track Offer Updates
- **Node**: Track Offer Sheet Updates (Sheet 1)
- **Function**: Monitor Google Sheets for new marketing offers or updates
- **Trigger**: Automatically activates when new data is added to Sheet 1

Step 2: Generate Marketing Content
- **Node**: Generate Marketing Content with AI
- **Function**: Process offer details through an AI model (Llama 3.2)
- **Process**: Creates compelling marketing copy based on offer parameters

Step 3: Fetch Client Information
- **Node**: Fetch Client List (Sheet 2)
- **Function**: Retrieve client names and email addresses from Sheet 2
- **Data**: Pulls client_name and client_email for personalization

Step 4: Content Personalization
- **Node**: Format Personalized Email
- **Function**: Combine AI-generated content with client-specific data (see the sketch at the end of this section)
- **Output**: Creates a personalized email for each recipient

Step 5: Email Delivery
- **Node**: Send Marketing Email to Client
- **Function**: Deliver personalized emails directly to client inboxes
- **Method**: Uses Gmail integration for professional delivery

Google Sheets Structure

Sheet 1: Marketing Offer Details
| Column | Description | Example |
|--------|-------------|---------|
| title | Campaign/offer name | "Summer Sale 2024" |
| discount | Discount percentage or amount | "25% OFF" |
| validity | Offer expiration date | "Valid until July 31st" |
| products_included | Items covered by offer | "All summer collection" |
| original_price | Pre-discount pricing | "$199.99" |
| discounted_price | Final pricing | "$149.99" |
| cta | Call-to-action text | "Shop Now" |
| bonus | Additional incentives | "Free shipping included" |

Sheet 2: Client Information
| Column | Description | Example |
|--------|-------------|---------|
| client_name | Customer's full name | "John Smith" |
| client_email | Customer's email address | "john.smith@email.com" |

How to Use the Workflow

Prerequisites
- Google Sheets Setup: Create two sheets with the required column structure
- n8n Account: Access to an n8n workflow platform
- Gmail API: Gmail account with API access configured
- AI Model Access: Llama 3.2 API credentials

Importing the Workflow in n8n

Step 1: Obtain the Workflow JSON
- Download the workflow file or copy the JSON code
- Ensure you have the complete workflow configuration

Step 2: Access the n8n Workflow Editor
- Log in to your n8n instance (Cloud or self-hosted)
- Navigate to the Workflows section
- Click "Add Workflow" to create a new workflow

Step 3: Import the Workflow
- Option A: Import from Clipboard — click the three dots (⋯) in the top-right corner, select "Import from Clipboard", paste the JSON code into the text box, and click "Import" to load the workflow.
- Option B: Import from File — click the three dots (⋯) in the top-right corner, select "Import from File", choose the .json file from your computer, and click "Open" to import the workflow.

Configuration Setup

Google Sheets Integration
- Authenticate Google Sheets: Connect your Google account in n8n
- Configure Sheet 1: Set the spreadsheet ID and range for marketing offers
- Configure Sheet 2: Set the spreadsheet ID and range for client information

AI Model Configuration
- Set API Credentials: Configure the Llama 3.2 API key and endpoint
- Customize Prompts: Adjust AI prompts for your brand voice and style
- Set Content Parameters: Define content length, tone, and structure

Gmail Integration
- Gmail API Setup: Enable the Gmail API in Google Cloud Console
- OAuth Configuration: Set up OAuth credentials for email sending
- Sender Configuration: Configure the sender name and email address

Content Customization
- Email Templates: Customize email structure and branding
- Personalization Fields: Map spreadsheet columns to email variables
- Brand Guidelines: Set company colors, fonts, and messaging tone

Workflow Execution

Manual Execution
- Click "Execute Workflow" in the n8n interface
- Monitor execution progress through each node
- Review generated content and delivery status

Automated Execution
- Set up triggers based on sheet updates
- Configure scheduling for regular campaign runs
- Enable webhook triggers for real-time processing

Best Practices

Data Management
- Keep spreadsheet data clean and consistently formatted
- Regularly validate email addresses in Sheet 2
- Update offer details promptly in Sheet 1

Content Quality
- Review AI-generated content periodically
- Adjust prompts based on campaign performance
- Maintain a consistent brand voice across campaigns

Deliverability
- Monitor email bounce rates and engagement metrics
- Maintain clean email lists with valid addresses
- Follow email marketing best practices and regulations

Performance Optimization
- Batch process large client lists for efficiency
- Monitor workflow execution times
- Implement error handling and retry mechanisms

Troubleshooting

Common Issues
- **Authentication Errors**: Verify API credentials and permissions
- **Sheet Access**: Ensure proper sharing permissions for Google Sheets
- **Email Delivery**: Check Gmail API quotas and sending limits
- **AI Processing**: Monitor API rate limits and response times

Error Handling
- Implement retry logic for failed operations
- Set up notification systems for workflow failures
- Maintain backup data sources for critical campaigns

Security Considerations
- Use environment variables for API keys and credentials
- Implement proper access controls for sensitive data
- Conduct regular security audits of connected services
- Comply with data protection regulations (GDPR, CAN-SPAM)

Conclusion
This Smart Email Marketing Generator transforms your marketing campaigns from manual, time-consuming tasks into automated, intelligent processes.
By leveraging AI and spreadsheet data, you can create personalized, engaging campaigns that scale with your business needs while maintaining professional quality and consistency. The workflow represents a significant advancement in marketing automation, combining the accessibility of spreadsheet-based data management with the power of AI-driven content generation and automated delivery systems.
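A minimal sketch of the "Format Personalized Email" step as an n8n Code node. The node names and output field paths are assumptions; only the client_name/client_email columns come from the Sheet 2 structure described above:

```js
// Illustrative only — node names and the AI output field (.text) are
// assumptions; client_name/client_email match the Sheet 2 columns.
const aiCopy = $('Generate Marketing Content with AI').first().json.text;
const clients = $('Fetch Client List (Sheet 2)').all();

// Emit one item per client, each with a personalized greeting prepended
// to the shared AI-generated campaign body
return clients.map(({ json: client }) => ({
  json: {
    to: client.client_email,
    subject: 'A special offer just for you', // illustrative subject line
    html: `<p>Hi ${client.client_name},</p>${aiCopy}`,
  },
}));
```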
by Mauricio Perera
n8n Workflow: Calculate the Centroid of a Set of Vectors

Overview
This workflow receives an array of vectors in JSON format, validates that all vectors have the same dimensions, and computes the centroid. It is designed to be reusable across different projects.

Workflow Structure

Nodes and Their Functions:
1. Receive Vectors (Webhook): Accepts a GET request containing an array of vectors in the vectors parameter.
   - Expected Input: vectors parameter in JSON format.
   - Example Request: /webhook/centroid?vectors=[[2,3,4],[4,5,6],[6,7,8]]
   - Output: Passes the received data to the next node.
2. Extract & Parse Vectors (Set Node): Converts the input string into a proper JSON array for processing and ensures vectors is a valid array. If the parameter is missing, it may generate an error.
   - Expected Output Example: { "vectors": [[2,3,4],[4,5,6],[6,7,8]] }
3. Validate & Compute Centroid (Code Node): Validates vector dimensions and calculates the centroid.
   - Validation: Ensures all vectors have the same number of dimensions.
   - Computation: Averages each dimension to determine the centroid.
   - If validation fails, returns an error message indicating inconsistent dimensions.
   - Successful Output Example: { "centroid": [4,5,6] }
   - Error Output Example: { "error": "Vectors have inconsistent dimensions." }
4. Return Centroid Response (Respond to Webhook Node): Sends the final response back to the client — the centroid on success, or a descriptive error message otherwise.
   - Example Response: { "centroid": [4, 5, 6] }

Inputs
A JSON array of vectors, where each vector is an array of numerical values.

Example Input

```json
{
  "vectors": [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
  ]
}
```

Setup Guide
1. Create a new workflow in n8n.
2. Add a Webhook node (Receive Vectors) to receive JSON input.
3. Add a Set node (Extract & Parse Vectors) to extract and convert the data.
4. Add a Code node (Validate & Compute Centroid) to validate dimensions and compute the centroid.
5. Add a Respond to Webhook node (Return Centroid Response) to return the result.

Function Node Script Example

```js
const input = items[0].json;
const vectors = input.vectors;

if (!Array.isArray(vectors) || vectors.length === 0) {
  return [{ json: { error: "Invalid input: Expected an array of vectors." } }];
}

const dimension = vectors[0].length;
if (!vectors.every(v => v.length === dimension)) {
  return [{ json: { error: "Vectors have inconsistent dimensions." } }];
}

const centroid = new Array(dimension).fill(0);
vectors.forEach(vector => {
  vector.forEach((val, index) => {
    centroid[index] += val;
  });
});

for (let i = 0; i < dimension; i++) {
  centroid[i] /= vectors.length;
}

return [{ json: { centroid } }];
```

Testing
Use a tool like Postman or the n8n UI to send sample inputs and verify the responses (a fetch-based test sketch follows). Modify the input vectors to test different scenarios.

This workflow provides a simple yet flexible solution for vector centroid computation, ensuring validation and reliability.
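For a quick test from any JavaScript environment, you can call the webhook with the example vectors from above; YOUR_N8N_HOST is a placeholder for your n8n instance's base URL:

```js
// YOUR_N8N_HOST is a placeholder for your n8n instance
const vectors = encodeURIComponent(JSON.stringify([[2, 3, 4], [4, 5, 6], [6, 7, 8]]));
const res = await fetch(`https://YOUR_N8N_HOST/webhook/centroid?vectors=${vectors}`);
console.log(await res.json()); // expected: { "centroid": [4, 5, 6] }
```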
by Anurag
Description
This workflow automates the extraction of structured data from invoices or similar documents using Docsumo's API. Users can upload a PDF via an n8n form trigger, which is then sent to Docsumo for processing and structured parsing. The workflow fetches key document metadata and all line items, reconstructs each invoice row with combined header and item details, and finally exports all results as an Excel file. Ideal for automating invoice data entry, reporting, or integrating with accounting systems.

How It Works
1. A user uploads a PDF document using the integrated n8n form trigger.
2. The workflow securely sends the document to Docsumo via REST API.
3. After uploading, it checks and retrieves the parsed document results.
4. Header information and table line items are extracted and mapped into structured records (see the sketch after this section).
5. The complete result is exported as an Excel (.xls) file.

Setup Steps
1. Docsumo Account: Register and obtain your API key from Docsumo.
2. n8n Credentials Manager: Add your Docsumo API key as an HTTP header credential (never hardcode the key in the workflow).
3. Workflow Configuration: In the HTTP Request nodes, set the authentication to your saved Docsumo credentials. Update the file type or document type in the request (e.g., "type": "invoice") as needed for your use case.
4. Testing: Enable the workflow and use the built-in form to upload a sample invoice for extraction.

Features
- Supports PDF uploads via n8n's built-in form or via API/webhook extension.
- Sends files directly to Docsumo for document data extraction using secure credentials.
- Extracts invoice-level metadata (number, date, vendor, totals) and full line item tables.
- Consolidates all data in an easy-to-use Excel format for download or integration.
- Modular node structure, easily extensible for further automation.

Prerequisites
- Docsumo account with API access enabled.
- n8n instance with Form, HTTP Request, Code, and Excel/Convert to File nodes.
- Working Docsumo API key stored securely in n8n's credentials manager.

Example Use Cases
| Scenario | Benefit |
|---------------------|-----------------------------------------|
| Invoice Automation | Extract line items and metadata rapidly |
| Receipts Processing | Parse and digitize business receipts |
| Bulk Bill Imports | Batch process bills for analytics |

Notes
- **Credentials Security:** Do not store your API key directly in HTTP Request nodes; always use the n8n credentials manager.
- **Sticky Notes:** The workflow includes sticky notes for the setup, input, API call, extraction, and output steps to assist template users.
- **Custom Columns:** You can customize header or line item extraction by editing the Code node as needed.
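A minimal sketch of the Code node that reconstructs invoice rows by combining header fields with each line item. All field names here are assumptions — the actual keys depend on Docsumo's parsed output for your document type:

```js
// Illustrative only — header/line-item field names are assumptions about
// the parsed Docsumo payload; adjust to match your document type's schema.
const doc = $input.first().json;

const header = {
  invoice_number: doc.invoice_number,
  invoice_date: doc.invoice_date,
  vendor: doc.vendor_name,
  total: doc.total_amount,
};

// Emit one output item per invoice line, each carrying the shared header
// fields so every Excel row is self-contained
return (doc.line_items ?? []).map(item => ({
  json: {
    ...header,
    description: item.description,
    quantity: item.quantity,
    unit_price: item.unit_price,
    amount: item.amount,
  },
}));
```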
by Jacob @ vwork Digital
This n8n template allows you to send emails with a custom alias from your Gmail account. Since the native Gmail node has some limitations regarding the use of email aliases, this template lets you set up your own internal endpoint/sub-workflow to send emails as an email alias.

How it works
This workflow uses a Code node and the Gmail API via an HTTP node to format the email content and send it using an alias on your Gmail account (the sketch at the end shows the underlying API call).

Setup instructions
1. You must have added the email address you wish to send as, as an alias in your Gmail account; guide on how to do so here.
2. You must have created a Gmail credential in n8n; guide on how to do so here.
3. Use your Gmail OAuth credential in the HTTP node.
4. Use this template as an API endpoint or a sub-workflow, and send this payload to it via POST:

```json
{
  "senderName": "SENDER NAME HERE",
  "fromEmail": "FROM EMAIL HERE",
  "replyTo": "REPLY TO EMAIL HERE",
  "toEmail": "jacob@vwork.digital",
  "subject": "SUBJECT LINE HERE",
  "htmlBody": "HTML BODY HERE - MUST BE JSON STRINGIFIED",
  "file_urls": [
    "FILE URLS FOR ATTACHMENTS HERE"
  ]
}
```

Notes
Only the following fields are required:
- fromEmail
- toEmail
- subject
- htmlBody

Customizing this workflow
You can easily convert this to a sub-workflow by swapping out the Webhook trigger for a "When executed by another workflow" trigger.
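For reference, sending as an alias through the Gmail API means building a raw RFC 2822 message with the alias in the From header and POSTing it base64url-encoded to the messages.send endpoint. A minimal sketch — `payload` and `ACCESS_TOKEN` are placeholders for the POST body above and your Gmail OAuth token:

```js
// Build and send a raw MIME message via the Gmail API, using the alias
// in the From header. payload/ACCESS_TOKEN are placeholders.
const { senderName, fromEmail, toEmail, subject, htmlBody } = payload;

const mime = [
  `From: ${senderName} <${fromEmail}>`, // must be a verified alias on the account
  `To: ${toEmail}`,
  `Subject: ${subject}`,
  'MIME-Version: 1.0',
  'Content-Type: text/html; charset=UTF-8',
  '',
  htmlBody,
].join('\r\n');

// Gmail expects base64url encoding of the full RFC 2822 message (Node >= 15.7)
const raw = Buffer.from(mime).toString('base64url');

await fetch('https://gmail.googleapis.com/gmail/v1/users/me/messages/send', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${ACCESS_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ raw }),
});
```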