by Lucas Peyrin
## How it works

This workflow demonstrates a fundamental pattern for securing a webhook by requiring an API key. It acts as a gatekeeper, checking for a valid key in the request header before allowing the request to proceed.

1. **Incoming Request:** The Secured Webhook node receives an incoming POST request. It expects an API key to be sent in the `x-api-key` header.
2. **API Key Verification:** The Check API Key node takes the key from the incoming request's header. It then makes an internal HTTP request to a second webhook (Get API Key) which acts as a mock database. This second webhook retrieves a list of registered API keys (from the Registered API Keys node) and filters it to find a match for the key that was provided. A sketch of this matching logic appears at the end of this template.
3. **Conditional Response:** If a match is found, the API Key Identified node routes the execution to the "success" path, returning a 200 OK response with the identified user's ID. If no match is found, it routes to the "unauthorized" path, returning a 401 Unauthorized error.

This pattern separates the public-facing endpoint from the data source, which is a good security practice.

## Set up steps

Setup time: ~2 minutes. This workflow is designed to be a self-contained example.

1. **Set up Credentials:** This workflow uses "Header Auth" for its internal communication. Go to Credentials and create a new Header Auth credential. You can use any name and value (e.g., Name: `X-N8N-Auth`, Value: `my-secret-password`). Select this credential in all four webhook/HTTP Request nodes.
2. **Add Your API Keys:** Open the Registered API Keys node. This is your mock database. Edit the array to include the `user_id` and `api_key` pairs you want to authorize.
3. **Activate** the workflow.
4. **Test it:** Use the Test Secure Webhook node to send a request. Try it with a valid key from your list to see the success response. Change the `x-api-key` header to an invalid key to see the 401 Unauthorized error.
5. **For Production:** Replace the mock database part of this workflow (the Get API Key webhook and Registered API Keys node) with a real database node like Supabase, Postgres, or Baserow to look up keys.
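As a rough illustration of the verification step described above, the matching logic could be expressed in a single Code node like this. This is a minimal sketch, not the exact node setup: it assumes the header arrives as `x-api-key` and that the Registered API Keys node outputs one item per `user_id`/`api_key` pair.

```javascript
// Minimal sketch of the key-matching logic (not the exact node setup).
// Assumes the webhook item exposes headers under json.headers, as n8n webhooks do.
const providedKey = $('Secured Webhook').first().json.headers['x-api-key'];

// Rows from the mock database (Registered API Keys node).
const registered = $('Registered API Keys').all().map(item => item.json);

// Keep only the row whose api_key matches the provided header value.
const match = registered.find(row => row.api_key === providedKey);

// Success path gets the identified user; an empty result means 401.
return match ? [{ json: { user_id: match.user_id } }] : [];
```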
by Khaisa Studio
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## ❓ What Problem Does It Solve?

Manual transcription and action planning from meeting notes is often error-prone, time-consuming, and inconsistent. Important tasks, decisions, or deadlines can be overlooked or delayed. This workflow solves these pain points by automatically analyzing notes using AI and turning them into actionable, structured data. It drastically reduces follow-up delays, miscommunication, and administrative effort, letting teams focus on execution instead.

## 💡 Why Use Google Meet Automation?

- **Save Hours of Manual Work:** Automatically transform raw meeting notes into structured tasks and emails without lifting a finger.
- **Ensure Accurate Follow-up:** Never miss important action items or decisions buried in text; everything is extracted and assigned clearly.
- **Improve Team Collaboration:** Instantly distribute meeting summaries and next steps to attendees, keeping everyone aligned.
- **Leverage Advanced AI:** Utilize Google Gemini's powerful natural language processing tailored specifically for meetings.
- **Fully End-to-End Automated:** From receiving notes to task creation and email dispatch, your post-meeting workflow is completely hands-free.

## ⚡ Who Is This For?

- **Project Managers:** Streamline task delegation and keep project timelines on track.
- **Team Leads:** Quickly communicate key takeaways and follow-ups to team members.
- **Sales and Account Teams:** Document client meetings efficiently and automate follow-up outreach.
- **Remote Teams:** Ensure clarity and continuity after virtual meetings.
- **Executives:** Get concise summaries and important decision logs automatically.

## 🔧 What This Workflow Does

- ⏱ **Trigger:** Activated via a POST webhook receiving meeting notes, title, attendees, date, and duration.
- 📎 **Step 2:** Validates inputs; if required fields are missing, sends an error response.
- 🔍 **Step 3:** Extracts and formats meeting data into structured variables for processing.
- 🤖 **Step 4:** Sends meeting notes to Google Gemini AI for advanced analysis to identify action items, decisions, summaries, follow-ups, and dates.
- 💌 **Step 5:** Splits AI responses to create Google Tasks from action items and send personalized follow-up emails via Gmail.
- 🗂 **Step 6:** Generates a Google Docs meeting summary document and finally returns a success response with all processed results.

## 🔐 Setup Instructions

1. Import the provided Google Meet Automation.json file into your n8n instance, and use a payload example for testing (see the sample at the end of this template).
2. Set up credentials for:
   - Google OAuth2 API (Google Tasks, Google Docs)
   - Gmail OAuth2 API for sending emails
   - Google Palm API (for Google Gemini AI access)
3. Customize workflow parameters:
   - Webhook URL and access permissions
   - Google Tasks project or folders if applicable
   - Email templates if desired (subject line, branding)
4. Update any API endpoints or credential references to match your account setup.
5. Thoroughly test with sample meeting note payloads to ensure smooth execution.

## 🧩 Pre-Requirements

- Active n8n instance (Cloud or Self-hosted)
- Google Cloud Platform project with:
  - Google Tasks API enabled
  - Google Docs API enabled
  - Gmail API enabled
  - Google Palm API access (Google Gemini AI)
- Valid OAuth2 credentials configured in n8n for the above services
- API quota and permissions for sending emails, creating docs, and tasks

## 🛠️ Customize It Further

- Integrate with calendar apps (Google Calendar, Outlook) to auto-schedule next meetings.
- Add Slack or Microsoft Teams notifications for real-time alerts.
- Extend the AI prompt for deeper insights like sentiment analysis or risk flags.
- Customize email templates with branding, signatures, or attachments.
- Connect task outputs with project management tools like Asana, Trello, or Jira.

## 📞 Support

Made by: khaisa Studio
Tag: automation, google meet, meeting notes, AI, google tasks, gmail, google docs
Category: Productivity
Need a custom workflow? Contact Us
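For reference, a sample webhook payload for testing (step 1 of the Setup Instructions) might look like the following. The exact field names are an assumption based on the inputs listed in the trigger step (notes, title, attendees, date, duration), so match them to your imported workflow:

```json
{
  "title": "Q3 Planning Sync",
  "date": "2025-08-06",
  "duration": "45 minutes",
  "attendees": ["alice@example.com", "bob@example.com"],
  "notes": "Alice to draft the Q3 roadmap by Friday. Bob to follow up with the vendor on pricing. Decision: launch moved to September."
}
```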
by Robert Breen
# n8n Workflow: OpenAI DALL·E 2 Image Generation & Google Drive Upload

## Description

This n8n workflow automates the process of generating multiple AI-created images from a single prompt using OpenAI's DALL·E 2, then uploads the results directly to a Google Drive folder. It includes a loop to produce several image variations for the same prompt, making it ideal for creative projects, marketing materials, or content experimentation.

## Step-by-Step Setup Instructions

### 1. Prepare Your API Keys

**OpenAI API Key**
- Sign up or log in at https://platform.openai.com/
- Go to API Keys and create a new one.
- Copy and store it securely; you'll need it in n8n.

**Google Drive API**
- Go to https://console.cloud.google.com/
- Create a project and enable the Google Drive API.
- Create OAuth 2.0 credentials and set the redirect URI to your n8n OAuth redirect (found in your n8n Google Drive node setup).
- Connect your Google account when adding credentials in n8n.

### 2. Workflow Nodes Overview

1. **Manual Trigger** – Starts the workflow manually.
2. **Set Image Prompt** – Stores the prompt text and base file name (e.g., "Make an image of an attractive woman standing in New York City").
3. **Duplicate Rows (Code Node)** – Creates multiple "runs" of the same prompt for variation.
4. **Loop Over Items** – Processes each variation one at a time.
5. **Generate an image (OpenAI DALL·E 2)** – Sends the prompt to OpenAI and retrieves an image.
6. **Upload to Google Drive** – Saves each generated image to your chosen Google Drive folder.

### 3. Building the Workflow in n8n

**Step 1 – Manual Trigger**

Add a Manual Trigger node to start the workflow manually when testing.

**Step 2 – Set Image Prompt**

Add a Set node with two fields:

- Prompt → The image description text.
- Name → The base name for the saved file.

Example:

| Name   | Value                                                           |
|--------|-----------------------------------------------------------------|
| Prompt | Make an image of an attractive woman standing in New York City |
| Name   | woman-nyc                                                       |

**Step 3 – Duplicate Rows (Code Node)**

Use this JavaScript to create three copies of the prompt (run 1, run 2, run 3):

```javascript
const original = items[0].json;

return [
  { json: { ...original, run: 1 } },
  { json: { ...original, run: 2 } },
  { json: { ...original, run: 3 } },
];
```

**Step 4 – Loop Over Items**

Insert a Split in Batches node and set the batch size to 1. This ensures each prompt variation runs through the image generation process individually. Connect this node so it runs after the Duplicate Rows node.

**Step 5 – Generate Image**

Add the OpenAI Image Generation node and configure it as follows:

- **Model**: dall-e-2
- **Prompt**: `={{ $json.Prompt }}`
- Leave other options at their defaults unless you want to specify image size or style.
- Connect your OpenAI API credentials created in Step 1.

This node will send the current prompt in the batch to OpenAI's DALL·E 2 model and return an AI-generated image.

**Step 6 – Upload to Google Drive**

Add a Google Drive node and configure it to store the generated image:

- **File Name**: `={{ $('Set Image Prompt').item.json.Name }} - {{ $('Duplicate Rows').item.json.run }}`
- **Folder ID**: Select the target Google Drive folder where images should be saved.
- Connect your Google Drive OAuth2 API credentials.

The node will upload each generated image to your chosen Google Drive location, with a unique filename for each variation.

## Running the Workflow

Execute the workflow manually. The process will:

1. Loop through each prompt variation.
2. Generate an image using OpenAI DALL·E 2.
3. Upload the image to Google Drive with a unique name.

You will find all generated images in the selected Google Drive folder.
## Customization Tips

- Change the number of variations by editing the Duplicate Rows code (see the sketch below).
- Adjust the prompt dynamically from other data sources like Google Sheets, webhooks, or forms.
- Schedule the workflow to run at specific times or trigger it via an API call.

Created by Robert A. – Ynteractive
Website: https://ynteractive.com
Email: robert@ynteractive.com
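For the first tip, a possible generalization of the Duplicate Rows code is shown below. The `runs` field is a hypothetical addition you would set in the Set Image Prompt node:

```javascript
// Generalized Duplicate Rows sketch: create N copies of the prompt.
// "runs" is a hypothetical field added in the Set Image Prompt node.
const original = items[0].json;
const runs = original.runs || 3; // default to 3 variations

return Array.from({ length: runs }, (_, i) => ({
  json: { ...original, run: i + 1 },
}));
```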
by Intuz
This n8n template from Intuz provides a complete and automated solution for real-time financial reporting. It instantly syncs new QuickBooks invoices to Google Sheets, using specific invoice data or keywords as triggers to ensure your financial records are always accurate and up-to-date. It uses a webhook to capture every new or updated invoice and logs the essential details into a designated Google Sheet. Perfect for creating custom reports, data backups, or a real-time dashboard of your accounts receivable.

## Use Cases

- **Financial Reporting:** Create a simple, shareable Google Sheet for team members who don't have QuickBooks access.
- **Data Backup:** Maintain a secure, independent log of all your invoices outside of the QuickBooks ecosystem.
- **Custom Dashboards:** Use the Google Sheet as a data source for tools like Google Data Studio or Grafana to build custom financial dashboards.
- **Auditing:** Easily track the history and status of all invoices in a simple, searchable spreadsheet format.

## How it Works

1. **Instant Webhook Trigger:** The workflow activates the moment an invoice is created or updated in QuickBooks. The QuickBooks webhook sends a notification to n8n, kicking off the process in real time.
2. **Fetch Full Invoice Details:** The initial webhook notification only contains the invoice ID. This node uses that ID to make a call back to the QuickBooks API and retrieve the complete invoice data, including customer name, due date, and more.
3. **Format Key Data:** A simple Code node cleans up the data fetched from QuickBooks. It extracts only the fields you need (ID, Domain, Customer Name, and Due Date) and structures them perfectly for the next step. A sketch of this node appears at the end of this template.
4. **Append or Update in Google Sheets:** The final node connects to your Google Sheet and uses the powerful "Append or Update" operation. If the ID of the invoice doesn't exist in the sheet, it adds a new row. If the ID already exists, it updates the existing row with the latest information. This ensures your Google Sheet is always a perfect mirror of your QuickBooks invoice data, preventing duplicates and keeping everything current.

## Setup Instructions

For this workflow to run successfully, follow these setup steps:

1. **Credentials:**
   - QuickBooks: Connect your QuickBooks account credentials to n8n.
   - Google: Connect your Google account using OAuth2 credentials. Ensure the Google Sheets and Google Drive APIs are enabled.
2. **QuickBooks Webhook Configuration:**
   - Activate the workflow.
   - Copy the Production URL from the Webhook node.
   - In your Intuit Developer Portal, go to the webhooks section for your app.
   - Paste the URL and subscribe to Invoice events (e.g., Create, Update).
3. **Google Sheet Setup:**
   - Create a Google Sheet for your invoice data.
   - Crucially, create the following headers in the first row of your sheet:
     - ID
     - Domain
     - Customer Name
     - Due Date
4. **Node Configuration:**
   - In the Append or update row in sheet node, select your Google Sheet document and the specific sheet name from the dropdown lists. The columns should map automatically if you've set up the headers correctly.

## Connect with us

Website: https://www.intuz.com/cloud/stack/n8n
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
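To illustrate step 3 of "How it Works", the Format Key Data node might look like the sketch below. The field paths are assumptions based on the typical QuickBooks Online invoice shape, so verify them against the actual response your workflow receives:

```javascript
// Sketch of the Format Key Data step. Field paths are assumptions based on
// the usual QuickBooks Online invoice shape - verify against your payload.
const invoice = $json.Invoice || $json; // unwrap if the API nests the invoice

return [{
  json: {
    'ID': invoice.Id,
    'Domain': invoice.domain, // e.g. "QBO"
    'Customer Name': invoice.CustomerRef ? invoice.CustomerRef.name : null,
    'Due Date': invoice.DueDate,
  },
}];
```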
by Ai Lin ⌘
## 🎯 What It Does

This project lets you talk to Siri (via Apple Shortcuts) to record or query your daily spending. The shortcut sends your message to an n8n Webhook, which uses AI to decide whether it's a write or a read of your finance data, then replies with a human-friendly message, all powered by n8n + AI + Google Sheets.

⸻

## 🌐 PART 1: n8n Setup

**🧩 1. Create a Webhook Trigger in n8n**
• Add a node: Webhook
• Set HTTP Method: POST
• Set Path: siri-finance
• Enable "Respond to Webhook" = ✅

**🧠 2. Add an AI Agent Node (e.g. OpenAI, Ollama, Gemini)**
• Use a system prompt like:

```
You are a finance assistant. Decide if the user wants to record or read transactions.
If it's recording, return a JSON object with date, type, name, amount, and expense/income.
If it's reading, return date range and type (Expense/Income).
Always reply with a human-friendly summary.
```

• Input: {{ $json.text }} (from webhook)
• Output: structured json.output (see the example at the end of this guide)

**🧮 3. (Optional) Add Logic to write to DB / Supabase / Google Sheets**
• Append tool: Adds a new row
• Read tool: Queries past data

Now your n8n flow is ready!

⸻

## 📱 PART 2: iOS Shortcut Setup

**⚙️ 1. Create a new Shortcut**
• Name it: 記帳助理 (or Finance Bot)
• Add Action: Ask for Input
  • Prompt: "Please say what you'd like to record"
  • Input Type: Text
• Add Action: Get Contents of URL
  • Method: POST
  • URL: https://your-n8n-domain/webhook/siri-finance
  • Headers: Content-Type: application/json
  • Request Body:

```json
{ "text": "Provided Input" }
```

• Replace "Provided Input" with Magic Variable → Input Result

**🔊 2. Show Result**
• Add Action: Show Result
• Content: Get Contents of URL

**🗣️ 3. Optional: Add "Speak Text"**
• If you want Siri to speak the reply back, add Speak Text after Show Result.

⸻

## ✅ Example Usage

• You: "Hey Siri, expense $50 breakfast"
• Siri: "Expense recorded: item Breakfast, amount $50, saved."

Or

• You: "How much did I spend over the last 7 days?"
• Siri: "Your total spending over the last 7 days is $7,684.64, including: ..."

⸻

## 📦 Files to Share

You can package the following:
• .shortcut file export
• Sample n8n workflow .json
• Optional Supabase schema / Google Sheet template

⸻

## 💡 Tips for Newcomers

• Keep your Webhook public, but protect it with a token if needed.
• Ensure you handle emoji and newlines safely for iOS compatibility.
• Add logging nodes in n8n to help debug Siri messages.

⸻

## 🗣️ Optional Project Name

"Siri 記帳助理" / "Finance VoiceBot": a simple voice-first way to manage your daily expenses.
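For reference, the structured output the AI agent returns for a recording request could look like the example below. The date, type, name, amount, and expense/income fields come from the system prompt above; the surrounding shape (the `action` and `reply` keys) is purely illustrative and depends on your model and prompt:

```json
{
  "action": "record",
  "date": "2025-08-06",
  "type": "expense",
  "name": "Breakfast",
  "amount": 50,
  "reply": "Expense recorded: item Breakfast, amount $50, saved."
}
```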
by Rizky Febriyan
## How It Works

This workflow automates the analysis of security alerts from Sophos Central, turning raw events into actionable intelligence. It uses the official Sophos SIEM integration tool to fetch data, enriches it with VirusTotal, and leverages Google Gemini to provide a real-time threat summary and mitigation plan via Telegram.

**Prerequisite (Important):** This workflow is triggered by a webhook that receives data from an external Python script. You must first set up the Sophos-Central-SIEM-Integration script from the official Sophos GitHub. This script will fetch data and forward it to your n8n webhook URL.

Tool Source Code: Sophos/Sophos-Central-SIEM-Integration

## The n8n Workflow Steps

1. **Webhook:** Receives enriched event and alert data from the external Python script.
2. **IF (Filter):** Immediately filters the incoming data so that only events with a high or critical severity are processed, reducing noise from low-priority alerts.
3. **Code (Prepare Indicator):** Inspects the Sophos event data to extract the primary threat indicator, prioritizing indicators in the following order: file hash (SHA256), URL/domain, then source IP. A sketch of this step appears at the end of this template.
4. **HTTP Request (VirusTotal):** The extracted indicator is sent to the VirusTotal API to get a detailed reputation report, including how many security vendors flagged it as malicious.
5. **Code (Prompt for Gemini):** The raw JSON output from VirusTotal is processed into a clean, human-readable summary and a detailed list of flagging vendors.
6. **AI Agent (Google Gemini):** All collected data (the original Sophos log, the full alert details, and the formatted VirusTotal reputation) is compiled into a detailed prompt for Gemini. The AI acts as a virtual SOC analyst to:
   - Create a concise incident summary.
   - Determine the risk level.
   - Provide a list of concrete, actionable mitigation steps.
7. **Telegram:** The complete analysis and mitigation plan from Gemini is formatted into a clean, easy-to-read message and sent to your specified Telegram chat.

## Setup Instructions

1. Configure the external Python script to forward events to this workflow's Production URL.
2. In n8n, create Credentials for Google Gemini, VirusTotal, and Telegram.
3. Assign the newly created credentials to the corresponding nodes in the workflow.
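Here is a sketch of the Prepare Indicator step described above. The Sophos field names (`appSha256`, `url`, `domain`, `source_ip`) are assumptions for illustration; map them to the event schema your SIEM script actually forwards:

```javascript
// Sketch of the Prepare Indicator Code node. Field names are assumptions -
// adjust them to the event schema forwarded by the Python script.
const event = $json.body || $json;

let indicator = null;
let indicatorType = null;

if (event.appSha256) {                  // priority 1: file hash (SHA256)
  indicator = event.appSha256;
  indicatorType = 'sha256';
} else if (event.url || event.domain) { // priority 2: URL / domain
  indicator = event.url || event.domain;
  indicatorType = 'url';
} else if (event.source_ip) {           // priority 3: source IP
  indicator = event.source_ip;
  indicatorType = 'ip';
}

return [{ json: { ...event, indicator, indicatorType } }];
```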
by Oneclick AI Squad
A lightweight no-code workflow that captures student check-in data via a mobile app or webhook, stores it in a Google Sheet, and instantly notifies the class teacher via email.

## 🎯 What This Does

1. Students check in using a mobile app or QR code.
2. Their data is formatted and saved to a Google Sheet.
3. A notification email is sent to the class teacher in real time.

## 🔧 Workflow Steps

| Step | Description |
| --- | --- |
| Student Check-in (Webhook) | Triggered via POST request from mobile app or QR scanner |
| Format Data | Cleans and prepares incoming JSON into a structured format |
| Append or Update Row | Saves student check-in data into Google Sheets |
| Email Teacher | Sends a formatted check-in email to the class teacher |
| Success Response | Returns a confirmation response to the mobile app or system |

## 📱 Example Check-in Input (Webhook Body)

```json
{
  "student_name": "Aarav Mehta",
  "student_id": "STU025",
  "class_name": "Grade 6B"
}
```

## 📊 Google Sheets Format

| Student Name | Student ID | Class | Date | Time |
| --- | --- | --- | --- | --- |
| Aarav Mehta | STU025 | Grade 6B | 2025-08-06 | 08:35 |

Date and time are added dynamically in the workflow (see the sketch below).

## ⚙️ Setup Requirements

- **n8n Instance** – Deployed with public webhook support
- **Google Sheets** – Sheet with columns as shown above
- **Email SMTP Settings** – For sending teacher notifications

## ✅ Quick Setup Instructions

1. Import the workflow into your n8n instance.
2. Replace the webhook URL in your mobile app.
3. Set your Google Sheet ID and range.
4. Enter the teacher's email in the "Email Teacher" node.
5. Test with mock data.
6. Deploy and use live!
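As a rough sketch, the Format Data step can be a small Code node like this, structuring the webhook body fields shown above and stamping each check-in with the current date and time:

```javascript
// Sketch of the Format Data step: structure the webhook body and
// add the current date and time dynamically.
const body = $json.body || $json;
const now = new Date();

return [{
  json: {
    'Student Name': body.student_name,
    'Student ID': body.student_id,
    'Class': body.class_name,
    'Date': now.toISOString().slice(0, 10), // e.g. 2025-08-06
    'Time': now.toTimeString().slice(0, 5), // e.g. 08:35
  },
}];
```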
by Alex Kim
# Automate Video Creation with Luma AI Dream Machine and Airtable (Part 2)

## Description

This is the second part of the Luma AI Dream Machine automation. It captures the webhook response from Luma AI after video generation is complete, processes the data, and automatically updates Airtable with the video and thumbnail URLs. This completes the end-to-end automation for video creation and tracking.

👉 Airtable Base Template
👉 Tutorial Video

## Setup

### 1. Luma AI Setup

- Ensure you've created an account with Luma AI and generated an API key.
- Confirm that the API key has permission to manage video requests.

### 2. Airtable Setup

Make sure your Airtable base includes the following fields (set up in Part 1). Use the Airtable Base Template linked above to simplify setup.

- **Generation ID** – To match incoming webhook data.
- **Status** – Workflow status (e.g., "Done").
- **Video URL** – Stores the generated video URL.
- **Thumbnail URL** – Stores the thumbnail URL.

### 3. n8n Setup

- Ensure that the n8n workflow from Part 1 is set up and configured.
- Import this workflow and connect it to the webhook callback from Luma AI.

## How It Works

### 1. Webhook Trigger

The Webhook node listens for a POST response from Luma AI once video generation is finished. The response includes:

- **Video URL** – Direct link to the video.
- **Thumbnail URL** – Link to the video thumbnail.
- **Generation ID** – Used to match the record in Airtable.

### 2. Process Webhook Data

The Set node extracts the video data from the webhook response. The If node checks that the video URL is valid before proceeding (see the sketch below).

### 3. Store in Airtable

The Airtable node updates the record with:

- **Video URL** – Direct link to the video.
- **Thumbnail URL** – Link to the video thumbnail.
- **Status** – Marked as "Done."

It uses the Generation ID to match and update the correct record.

## Why This Workflow is Useful

✅ Automates the completion step for video creation
✅ Ensures accurate record-keeping by matching generation IDs
✅ Simplifies the process of managing and organizing video content
✅ Reduces manual effort by automating the update process

## Next Steps

- **Future Enhancements** – Adding more complex post-processing, video trimming, and multi-platform publishing.
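As a rough illustration of the "Process Webhook Data" step, the extract-and-validate logic boils down to something like the sketch below. The payload field paths are hypothetical, chosen only for illustration; inspect the actual callback Luma AI sends and adjust accordingly:

```javascript
// Sketch of the extract-and-validate step. Field paths are assumptions -
// check the real Luma AI callback payload and adjust.
const body = $json.body || $json;

const videoUrl = body.assets ? body.assets.video : null;     // hypothetical path
const thumbnailUrl = body.assets ? body.assets.image : null; // hypothetical path
const generationId = body.id; // used to match the Airtable record

// Only proceed when a usable video URL is present (the If node's check).
if (!videoUrl || !videoUrl.startsWith('http')) {
  return []; // drop the item; nothing to update in Airtable
}

return [{ json: { generationId, videoUrl, thumbnailUrl, status: 'Done' } }];
```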
by Jimleuk
This n8n template offers a simple yet capable chatbot assistant who can answer course enquiries over SMS. Given the right access to data, AI agents are capable of planning and performing relatively complex research tasks to get their answers. In this example, the agent must first understand the database schema and retrieve lists of values before generating its own query to search over the database.

Check out the example database here: https://airtable.com/appO5xvP1aUBYKyJ7/shr8jSFDaghubDOrw

## How it works

1. A Twilio trigger gives us the ability to receive SMS input into our workflow via webhook.
2. The message is then directed to our AI agent, which is instructed to assist the user and use the course database as reference.
3. The database is an Airtable base. The agent autonomously figures out which tool it needs to use and generates its own "filter_by_formula" query to search over the available courses (see the example formula below).
4. On successful search results, the agent can then use this information to answer the user's query.
5. The agent's output is logged in a second sheet of the Airtable base. We can use this later for analysis and lead generation.
6. Finally, the response is sent back to the user through SMS using Twilio.

## How to use

- Ensure your Twilio number is set to forward messages to this workflow's webhook URL.
- Configure and update the course database as required. If you're not interested in courses, you can swap this out for inventory, deliveries or any other data relevant to your business.
- Ask questions like:
  - "Can you help me find suitable courses to fill my Wednesday mornings?"
  - "Which courses are being instructed by professor Lee?"
  - "I'm interested in creative arts. What courses are available which could be relevant to me?"

## Requirements

- Twilio for SMS receiving and sending
- OpenAI for LLM and Agent
- Airtable for the course database

## Customising this workflow

- Add additional tools and expand the range of queries the agent is able to answer or assist with.
- Not using Airtable? This technique also works with SQL databases like PostgreSQL.
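For example, for a question like "Can you help me find suitable courses to fill my Wednesday mornings?", the agent might generate a formula along these lines. This is illustrative only; {Day} and {Time of Day} are hypothetical field names from a course table:

```
AND({Day} = 'Wednesday', {Time of Day} = 'Morning')
```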
by InfraNodus
## Using knowledge graphs instead of RAG vector stores

This workflow creates an AI chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided by InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed)
- A knowledge graph has a holistic view of your knowledge base
- Better retrieval of relations between the document chunks = higher quality responses

## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. Here's a step-by-step description:

1. The user submits a question using the AI chatbot (the n8n chat interface in this case, which can be accessed via a URL or embedded into any website).
2. The AI Agent node checks a list of tools it has access to. Each tool has a description of its knowledge, auto-generated by InfraNodus.
3. The AI agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
4. The query is then sent to the InfraNodus HTTP node endpoint, which will query the graph that corresponds to that expert (see the request sketch below).
5. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context of its graph into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
6. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
7. The final answer is sent back to the user's chat (or a webhook endpoint).

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

## Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key

## Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n

Also check out the video tutorial with a demo.
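As a minimal sketch of step 4, the HTTP Request node posts a JSON body like the one below, authenticated with a `Bearer` Authorization header. The `name` field is the graph name described in the setup steps; the `prompt` field name and the exact endpoint path are assumptions, so check the InfraNodus access points page for the real parameters:

```json
{
  "name": "my-expert-graph",
  "prompt": "What does this knowledge base say about X?"
}
```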
by ist00dent
This n8n template empowers you to instantly fetch a list of public holidays for any given year and country using the Nager.Date API. This is incredibly useful for scheduling, planning, or integrating holiday data into various business and personal automation workflows.

## 🔧 How it works

1. **Receive Holiday Request Webhook:** This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing the year (e.g., 2025) and countryCode (e.g., US for United States, PH for Philippines, DE for Germany) for which you want to retrieve public holidays.
2. **Get Public Holidays:** This node makes an HTTP GET request to the Nager.Date API (date.nager.at). It dynamically uses the year and countryCode from your webhook request to query the API. The API responds with a JSON array, where each object represents a public holiday with details like its date, name, and type.
3. **Respond with Holiday Data:** This node sends the full list of public holidays received from Nager.Date back to the service that initiated the webhook.

## 👤 Who is it for?

This workflow is ideal for:

- **Businesses with International Operations:** Automatically check holidays for different country branches to adjust production schedules, customer service hours, or delivery estimates.
- **HR & Payroll Departments:** Accurately calculate workdays, plan leave schedules, or process payroll taking public holidays into account.
- **Event Planners:** Avoid scheduling events on public holidays, which could impact attendance or venue availability.
- **Travel Agencies:** Inform clients about holidays in their destination country that might affect local business hours or attractions.
- **Content & Social Media Schedulers:** Plan content around national holidays to maximize engagement or avoid insensitive postings.
- **Personal Productivity & Travel Planning:** Integrate holiday data into your calendar or task management tools to plan trips or personal time off more effectively.
- **Developers:** Easily integrate a reliable source of public holiday data into custom applications, dashboards, or internal tools without managing complex datasets.

## 📑 Data Structure

When you trigger the webhook, send a POST request with a JSON body structured as follows (other example country codes: "US", "DE", "GB"):

```json
{
  "year": 2025,
  "countryCode": "PH"
}
```

You can find a comprehensive list of supported country codes in the Nager.Date API documentation: https://www.nager.at/Country

The workflow will return a JSON array, where each element is a holiday object, like this example for a single holiday (a real response contains one such object per holiday):

```json
[
  {
    "date": "2025-01-01",
    "localName": "New Year's Day",
    "name": "New Year's Day",
    "countryCode": "PH",
    "fixed": true,
    "global": true,
    "counties": null,
    "launchYear": null,
    "types": ["Public"]
  }
]
```

## ⚙️ Setup Instructions

1. **Import Workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure Webhook Path:** Double-click the Receive Holiday Request Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /public-holidays).
3. **Activate Workflow:** Save and activate the workflow.

## 📝 Tips

This workflow is a foundation for many powerful automations:

- **Conditional Branching for Specific Holidays:** Add an IF node after "Get Public Holidays" to check for a specific holiday (e.g., "Christmas Day"). You can then trigger different actions (e.g., send a reminder, adjust a schedule) only for that particular holiday.
- **Filtering and Aggregating Data:**
  - Use a Filter node to only keep holidays of a certain type (e.g., "Public").
  - Use a Code or Function node to count the number of public holidays, or extract just the names and dates into a simpler list (see the sketch below).
- **Storing Holiday Data:**
  - Google Sheets/Airtable: Automatically append new holidays to a spreadsheet for easy reference or further analysis.
  - Database: Store holiday data in a database (like PostgreSQL or MySQL) to build a custom holiday calendar application.
- **Scheduling and Reminders:**
  - Connect this workflow to a Cron or Schedule node to run periodically (e.g., once a year at the start of the year).
  - Use the retrieved holiday dates to set up reminders in your calendar (Google Calendar node) or send notifications (Slack, Email, SMS) a few days before an upcoming holiday.
- **Integrate with Business Logic:**
  - Employee Leave Management: Cross-reference employee leave requests with public holidays to ensure accuracy.
  - Automated Messages: Schedule automated "Happy Holiday" messages to customers or employees.
  - E-commerce Shipping: Adjust estimated shipping times based on upcoming non-working days.
- **API Key (not needed for the Nager.Date free tier):** The Nager.Date API used here does not require an API key for basic public holiday lookups, which makes this template very easy to use out of the box.
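A minimal sketch of the Code-node tip above, counting the holidays returned by the API and reducing each one to just its date and English name (both fields appear in the example response shown earlier):

```javascript
// Count the holidays returned by "Get Public Holidays" and
// reduce each one to just its date and English name.
const holidays = items.map(item => item.json);

return [{
  json: {
    count: holidays.length,
    holidays: holidays.map(h => ({ date: h.date, name: h.name })),
  },
}];
```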
by Fan Luo
# Daily Company News Bot

This n8n template demonstrates how to use the free FinnHub API to retrieve company news for a list of stock tickers and post messages to a Slack channel at a pre-scheduled time.

## How it works

1. We first define the list of stock tickers you are interested in.
2. Loop over items to call the FinnHub API and get the latest company news for each ticker.
3. Format the company news as markdown text that can be sent to Slack (see the sketch below).
4. Post a new message in the Slack channel.
5. Wait for 5 seconds, then move to the next ticker.

## How to use

Simply set up a schedule trigger to automatically run the workflow.

## Requirements

- FinnHub API key
- Slack channel webhook

## Need Help?

Contact me via My Blog or ask in the Forum!

Happy Hacking!
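A rough sketch of the per-ticker fetch-and-format step is shown below. The company-news endpoint and its parameters follow FinnHub's public documentation as best understood here, but treat the details (and the assumed `ticker` input field) as assumptions to verify against your account and the FinnHub docs:

```javascript
// Sketch: fetch the latest company news for one ticker and format it as
// Slack markdown. Verify the endpoint and parameters against FinnHub's docs.
const ticker = $json.ticker; // assumed field from the ticker-list step
const token = 'YOUR_FINNHUB_API_KEY'; // placeholder

const day = 24 * 60 * 60 * 1000;
const from = new Date(Date.now() - day).toISOString().slice(0, 10);
const to = new Date().toISOString().slice(0, 10);

const url = `https://finnhub.io/api/v1/company-news?symbol=${ticker}&from=${from}&to=${to}&token=${token}`;
const news = await (await fetch(url)).json();

// Build a Slack-friendly markdown message with up to 5 headlines.
const text = [`*${ticker} - latest news*`]
  .concat(news.slice(0, 5).map(n => `• <${n.url}|${n.headline}>`))
  .join('\n');

return [{ json: { text } }];
```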