by David Harvey
🚨 Emergency Alerts Reporter to iMessage

This n8n template fetches real-time emergency incident alerts from PulsePoint for a specific agency and delivers them directly to any phone number via iMessage using the Blooio API. It's designed to keep users informed with clear, AI-summarized reports of emergency activity near them, automatically and reliably.

Use cases are powerful and immediate:
- Get real-time fire/medical alerts for your neighborhood.
- Use it for family, local safety groups, or even emergency response teams.
- Convert technical dispatch data into readable updates with emojis and plain English.

🧠 Good to Know
- You'll need a PulsePoint agency ID (see instructions below).
- iMessages are sent using Blooio's API (which supports Apple's iMessage and fallback RCS/SMS).
- Messages are AI-enhanced using OpenAI's o4-mini model to summarize incident reports with context and urgency.
- The workflow runs every hour, but this can be configured to match your needs.
- Each report is sent only once, thanks to persistent tracking of seen incident IDs in workflow static memory.

⚙️ How it Works
1. Trigger: A Schedule Trigger (every hour) or manual start kicks off the flow.
2. Get Alerts: A Code node fetches the latest PulsePoint incidents for a specified agency and decrypts the data.
3. Filter New Incidents: Previously seen incident IDs are stored to prevent duplicate alerts (sketched below).
4. Merge Incidents: All new incident details are merged into a single payload.
5. Condition Check: If there are no new incidents, nothing is sent.
6. AI Summary: The incident data is passed to an AI agent for summarization with human-friendly emojis and formatting.
7. Send Message: The final summary is sent via Blooio's API to your phone using iMessage.

📝 How to Use
1. Get your PulsePoint agency ID: Visit https://web.pulsepoint.org, find your agency by location or name, then inspect the API call or browser network log to get the agencyid (e.g. 19100 from a URL like ?agencyid=19100).
2. Set up Blooio for messaging: Sign up at https://blooio.com, go to your account and retrieve your Bearer API key (pricing details are available on their pricing page), then add your key to the HTTP Request node as a Bearer token.
3. OpenAI API: Create or use an existing OpenAI account, use the o4-mini model for efficient, readable summaries, and get your OpenAI API key from https://platform.openai.com/account/api-keys.
4. Add your phone number: Replace +1111112222 with your actual number (international format). You can also modify the message content or prepend special tags/emojis.

✅ Requirements
- **PulsePoint agency ID** – See usage instructions above
- **OpenAI API Key** – Get API Key
- **Blooio Account & Bearer Token** – Get Started
- **Phone number** for iMessage delivery

🔧 Customizing This Workflow
- **Change the schedule** to get alerts more or less frequently
- **Add filters** to only get alerts for specific incident types (e.g. fires, traffic accidents)
- **Send to groups**: Expand to send alerts to multiple recipients or use Slack instead of iMessage
- **Use different AI prompts** to get detailed, humorous, or abbreviated alerts depending on your audience

With just a few credentials and a phone number, you'll have real-time incident alerts with human-friendly summaries at your fingertips. 🛠️ Stay informed. Stay safe.
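For reference, here is a minimal sketch of how the duplicate-filtering step can work in an n8n Code node using workflow static data. The `ID` field name is an assumption about the PulsePoint payload, and note that static data only persists in production (trigger-based) executions, not manual test runs:

```javascript
// Sketch of the "Filter New Incidents" Code node. Assumes each incoming
// item carries an `ID` field from PulsePoint; adjust to your payload.
const staticData = $getWorkflowStaticData('global');
staticData.seenIncidentIds = staticData.seenIncidentIds || [];

const newItems = [];
for (const item of $input.all()) {
  const id = String(item.json.ID);
  if (!staticData.seenIncidentIds.includes(id)) {
    staticData.seenIncidentIds.push(id);
    newItems.push(item);
  }
}

// Cap the list so static data doesn't grow without bound.
staticData.seenIncidentIds = staticData.seenIncidentIds.slice(-500);

return newItems;
```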
by Sunny Thaper
Workflow Overview:
This n8n workflow template takes a US phone number as input, validates it, and returns it in multiple standard formats, including handling extensions. It's designed to streamline the process of standardizing phone number data within your automations.

How it Works:
- Input: Accepts a phone number string in various common formats (e.g., (555) 123-4567, 555.123.4567, +15551234567, 5551234567x890).
- Formatting Removal: Strips all non-numeric characters to isolate the core number and any potential extension.
- Validation:
  - **Country Code Check:** Verifies that the number starts with the US country code (+1 or 1), or assumes US if no country code is present and the length is correct.
  - **Length Check:** Ensures the main number component consists of exactly 10 digits after stripping formatting and the country code.

Output Generation (if valid): If the number passes validation, the workflow outputs the phone number in several standardized formats:
- **Number Only:** 5551234567
- **E.164 Standard:** +15551234567
- **National Standard:** (555) 123-4567
- **Full National Standard:** 1 (555) 123-4567
- **International Standard:** 00-1-555-123-4567

Extension Handling: If an extension is detected in the input, it is separated and provided as:
- **Extension (Number):** 890
- **Extension (String):** "890"

Use Cases:
- Cleaning and standardizing phone number data in CRM systems.
- Formatting numbers before sending SMS messages via APIs.
- Validating user input from forms.
- Ensuring consistent phone number representation across different applications.
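As an illustration of the logic described above, here is a minimal Code-node sketch. The `item.json.phone` field name is illustrative, not taken from the template itself:

```javascript
// Validate and format a US phone number, splitting off any extension.
function formatUsNumber(raw) {
  // Split off an extension marked by "x", "ext", or "#".
  const [numberPart, extPart] = raw.toLowerCase().split(/x|ext\.?|#/);
  let digits = numberPart.replace(/\D/g, '');

  // Strip a leading US country code if present.
  if (digits.length === 11 && digits.startsWith('1')) digits = digits.slice(1);
  if (digits.length !== 10) return null; // fails validation

  const [area, prefix, line] = [digits.slice(0, 3), digits.slice(3, 6), digits.slice(6)];
  const extension = extPart ? extPart.replace(/\D/g, '') : '';

  return {
    numberOnly: digits,
    e164: `+1${digits}`,
    national: `(${area}) ${prefix}-${line}`,
    fullNational: `1 (${area}) ${prefix}-${line}`,
    international: `00-1-${area}-${prefix}-${line}`,
    extensionNumber: extension ? Number(extension) : null,
    extensionString: extension,
  };
}

return $input.all().map((item) => ({
  json: formatUsNumber(item.json.phone) ?? { valid: false },
}));
```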
by Gavin
This workflow makes an HTTPS request to ConnectWise Manage through its REST API. It pulls all tickets in the "New" status (or whichever status you like) and notifies your dispatch team/personnel via Microsoft Teams whenever a new ticket comes in.

Video Explanation
https://youtu.be/yaSVCybSWbM
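For orientation, here is a hedged sketch of the underlying ticket query as it might run in an n8n Code node. The hostname, company ID, keys, and clientId are placeholders, and the `conditions` syntax follows the ConnectWise Manage REST API:

```javascript
// Query ConnectWise Manage for tickets in the "New" status.
const site = 'na.myconnectwise.net'; // placeholder hostname
const auth = Buffer.from('companyId+publicKey:privateKey').toString('base64');
const conditions = encodeURIComponent('status/name="New"');

const tickets = await this.helpers.httpRequest({
  method: 'GET',
  url: `https://${site}/v4_6_release/apis/3.0/service/tickets?conditions=${conditions}`,
  headers: {
    Authorization: `Basic ${auth}`,
    clientId: '<your-clientId-guid>', // required by ConnectWise
    Accept: 'application/json',
  },
});

return tickets.map((t) => ({ json: t }));
```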
by Sirisak Chantanate
Workflow overview:
This workflow is designed for dynamic and intelligent conversational capabilities. It incorporates Meta's Llama 3.3 versatile model, served through Groq, as a personal assistant. Sending simple text to the LINE reply API causes no issues; what this workflow shows is how to send large and complex text from an AI chat without any errors.

Workflow description:
1. The user sends a message to the chatbot via the LINE Messaging API; create a LINE Business ID from here: Line Business
2. Set the message from Step 1 to the proper value.
3. Send the message to Groq for processing, using the API key created at Groq.
4. Send the reply message from the AI Agent back to the LINE Messaging API account.

Key Features:
- Utilizes Meta's Llama 3.3 model for robust conversational capabilities
- Handles large and complex text interactions with ease, ensuring reliable connections to the LINE Messaging API
- Demonstrates effective strategies for processing and responding to large and complex text inputs from AI chat

To use this template, you need to be on n8n version 1.79.0 or later.
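One way to handle the large-text problem is sketched below, under the assumption that the AI Agent's output sits in `$json.output`. LINE caps a text message at 5,000 characters and a single reply at 5 message objects, so the text is split into chunks before building the reply payload:

```javascript
// Split long AI output into LINE-safe text messages.
const MAX_CHARS = 5000;    // LINE text message character limit
const MAX_MESSAGES = 5;    // LINE reply API message limit

const text = $json.output ?? ''; // field name depends on your AI Agent node
const chunks = [];
for (let i = 0; i < text.length && chunks.length < MAX_MESSAGES; i += MAX_CHARS) {
  chunks.push(text.slice(i, i + MAX_CHARS));
}

return [{
  json: {
    replyToken: $json.replyToken,
    messages: chunks.map((c) => ({ type: 'text', text: c })),
  },
}];
```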
by Aitor | 1Node
Template Description
This template creates a powerful Retrieval Augmented Generation (RAG) AI agent workflow in n8n. It monitors a specified Google Drive folder for new PDF files, extracts their content, generates vector embeddings using Cohere, and stores these embeddings in a Milvus vector database. Subsequently, it enables a RAG agent that can retrieve relevant information from the Milvus database based on user queries and generate responses using OpenAI, enhanced by the retrieved context.

Functionality
The workflow automates the process of ingesting documents into a vector database for use with a RAG system.
- Watch New Files: Triggers when a new file (specifically targeting PDFs) is added to a designated Google Drive folder.
- Download New: Downloads the newly added file from Google Drive.
- Extract from File: Extracts text content from the downloaded PDF file.
- Default Data Loader / Set Chunks: Processes the extracted text, splitting it into manageable chunks for embedding (sketched below).
- Embeddings Cohere: Generates vector embeddings for each text chunk using the Cohere API.
- Insert into Milvus: Inserts the generated vector embeddings and associated metadata into a Milvus vector database.
- When chat message received: Adapt the trigger tool to fit your needs.
- RAG Agent: Orchestrates the RAG process.
- Retrieve from Milvus: Queries the Milvus database with the user's chat query to find the most relevant chunks.
- Memory: Manages conversation history for the RAG agent to optimize cost and response speed.
- OpenAI Chat Model: Uses GPT-4o for text generation.

Requirements
To use this template, you will need:
- An n8n instance (cloud or self-hosted).
- Access to a Google Drive account to monitor a folder.
- A Milvus instance or access to a Milvus cloud service like Zilliz.
- A Cohere API key for generating embeddings.
- An OpenAI API key for the RAG agent's text generation.

Usage
1. Set up the required credentials in n8n for Google Drive, Milvus, Cohere, and OpenAI.
2. Configure the "Watch New Files" node to point to the Google Drive folder you want to monitor for PDFs.
3. Ensure your Milvus instance is running and the target cluster is set up correctly.
4. Activate the workflow.
5. Add PDF files to the monitored Google Drive folder. The workflow will automatically process them and insert their embeddings into Milvus.
6. Interact with the RAG agent. The agent will use the data in Milvus to provide context-aware answers.

Benefits
- Automates document ingestion for RAG applications.
- Leverages Milvus for high-performance vector storage and search.
- Uses Cohere for generating high-quality text embeddings.
- Enables building a context-aware AI agent using your own documents.

Suggested improvements
- **Support for More File Types:** Extend the "Watch New Files" node and subsequent extraction steps to handle various document types (e.g., .docx, .txt, .csv, web pages) in addition to PDFs.
- **Error Handling and Notifications:** Implement robust error handling for each step of the workflow (e.g., failed downloads, extraction errors, Milvus insertion failures) and add notification mechanisms (e.g., email, Slack) to alert the user.

Get in touch with us
Contact us at https://1node.ai
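For illustration, a simple splitter like the Code-node sketch below approximates what the chunking step does. The chunk size and overlap values are assumptions to tune for your documents, and the `text`/`fileName` field names are illustrative:

```javascript
// Split extracted text into overlapping chunks for embedding.
function splitIntoChunks(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

return splitIntoChunks($json.text).map((chunk, i) => ({
  json: { chunk, chunkIndex: i, source: $json.fileName },
}));
```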
by Antonio Trento
🤖 Auto-Publish SEO Blog Posts for Jekyll with AI + GitHub + Social Sharing

This workflow automates the entire process of publishing SEO-optimized blog posts (e.g., recipes) to a Jekyll site hosted on GitHub. It uses LangChain + OpenAI to write long-form Markdown articles, and commits them directly to your repository. Optional steps include posting to X (Twitter) and LinkedIn.

🔧 Features
- 📅 Scheduled Execution: Runs daily or manually.
- 📥 CSV Input: Reads from a local CSV (/data/recipes.csv) with fields like title, description, keywords, and publish date.
- ✍️ AI Copywriting: Uses a GPT-4 model to generate a professional, structured blog post optimized for SEO in Markdown format.
- 🧪 Custom Prompting: Includes a detailed, structured prompt tailored for Italian food blogging and SEO rules.
- 🗂 Markdown Generation: Automatically builds the Jekyll front matter, generates a clean SEO-friendly slug, and saves to _posts/YYYY-MM-DD-title.md (sketched below).
- ✅ Commits to GitHub: Auto-commits new posts using the GitHub node.
- 🧹 Post-Processing: Removes processed lines from the source CSV.
- 📣 (Optional) Social media sharing: Can post the title to X (Twitter) and LinkedIn.

📁 CSV Format Example
```csv
titolo;prompt_descrizione;keyword_principale;keyword_secondarie;data_pubblicazione
Pasta alla Norma;Classic Sicilian eggplant pasta...;pasta alla norma;melanzane, ricotta salata;2025-07-04T08:00:00
```
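As a sketch of the Markdown-generation step, the Code node below builds a slug, Jekyll front matter, and the `_posts/` filename from the CSV columns above. The front matter keys and the `articleMarkdown` field are assumptions for illustration, not taken from the workflow itself:

```javascript
// Build slug, front matter, and _posts/ path from a CSV row.
const { titolo, keyword_principale, data_pubblicazione } = $json;

const date = data_pubblicazione.slice(0, 10); // YYYY-MM-DD
const slug = titolo
  .toLowerCase()
  .normalize('NFD').replace(/[\u0300-\u036f]/g, '') // strip accents
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/(^-|-$)/g, '');

const frontMatter = [
  '---',
  `title: "${titolo}"`,
  `date: ${data_pubblicazione}`,
  `tags: [${keyword_principale}]`,
  'layout: post',
  '---',
].join('\n');

return [{
  json: {
    path: `_posts/${date}-${slug}.md`,
    content: `${frontMatter}\n\n${$json.articleMarkdown}`, // articleMarkdown is assumed
  },
}];
```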
by Oneclick AI Squad
Overview
This solution ensures the secure backup and version control of your self-hosted n8n workflows by storing them in a GitLab repository. It compares current workflows with their GitLab counterparts, updates files when differences are detected, and organizes them in user-specific folders (e.g., repo -> username -> workflow.json). Backups are triggered manually or weekly, with a success notification sent via email.

Operational Process
- **Manual Backup Trigger**: Initiates the backup process on demand.
- **Scheduled Weekly Backup**: Automatically triggers the backup every week.
- **Fetch N8N Workflows**: Retrieves all workflows from n8n using the API (getAll:workflow).
- **Prepare Backup Metadata**: Generates metadata, including user details for folder organization.
- **Process Each Workflow**: Handles each workflow individually for processing.
- **Format Workflow for GitLab**: Structures workflows with proper versioning for GitLab compatibility.
- **Rate Limit Control**: Manages API rate limits to ensure smooth operation.
- **Create to GitLab Repository**: Saves workflows to GitLab; creates a new file if it doesn't exist (sketched below).
- **Check Backup Status**: Verifies whether the file exists; if it does, proceeds to update; if not, loops back.
- **Update Backup Summary**: Updates the existing file in GitLab with the latest version.
- **Log Backup Results**: Records the outcome of the backup process.
- **Send Email**: Sends a confirmation email: "Hello, The scheduled backup of all n8n workflows has been completed successfully. All workflows have been committed to the GitLab repository without any errors. Regards, n8n Automation Bot"

Implementation Guide
1. Import this solution into your n8n instance.
2. Configure GitLab API credentials and specify the target repository.
3. Set up n8n API access to enable workflow retrieval.
4. Customize the Prepare Backup Metadata node to map users to folders as needed.
5. Test the process using the Manual Backup Trigger to confirm GitLab integration.
6. Schedule weekly backups via the Scheduled Weekly Backup node (recommended for Fridays).

Requirements
- GitLab API credentials with write access
- n8n API access for workflow retrieval
- A configured GitLab repository

Customization Options
- Adjust the Prepare Backup Metadata node to include additional user fields.
- Modify the Rate Limit Control node to accommodate varying API limits.
- Tailor the Send Email node to include custom notification details.
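For reference, the create-or-update behavior can be sketched against the GitLab repository files API as below (POST creates a file, PUT updates it). The project ID, branch, token, and field names like `username`/`workflowName` are placeholders:

```javascript
// Create a workflow backup file in GitLab, falling back to an update
// if the file already exists.
const projectId = '12345'; // placeholder
const filePath = encodeURIComponent(`${$json.username}/${$json.workflowName}.json`);
const url = `https://gitlab.com/api/v4/projects/${projectId}/repository/files/${filePath}`;

const request = (method) => this.helpers.httpRequest({
  method,
  url,
  headers: { 'PRIVATE-TOKEN': '<gitlab-token>' },
  body: {
    branch: 'main',
    content: JSON.stringify($json.workflow, null, 2),
    commit_message: `Backup ${$json.workflowName}`,
  },
  json: true,
});

try {
  await request('POST'); // create the file
} catch (error) {
  await request('PUT'); // file already exists, update it instead
}

return [{ json: { backedUp: `${$json.username}/${$json.workflowName}.json` } }];
```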
by Agent Circle
This n8n template demonstrates how to use AI to generate custom images from scratch: fully automated, prompt-driven, and ready to deploy at scale.

Use cases are many: marketing visuals, character art, digital posters, storyboards, or even daily image generation for your personal purposes.

How It Works
- The flow is triggered by a chat message in n8n or via Telegram.
- The default image size is 1080 x 1920 pixels. To use a different size, update the values in the "Fields - Set Values" node before triggering the workflow.
- The input is parsed into a clean, structured prompt using a multi-step transformation process.
- Our AI Agent sends the final prompt to Google Gemini's image model for generation (you can also integrate with OpenAI or other chat models).
- The raw image data created by the AI Agent is run through several Code nodes that make it usable for previewing and downloading (one such conversion is sketched below).
- An HTTP node then fetches the result so you can preview the image.
- You can send it back to the chat in n8n or Telegram, or save it locally to your disk.

How To Use
1. Download the workflow package.
2. Import the package into your n8n interface.
3. Set up the credentials in the following nodes for tool access and usability: "Telegram Trigger"; "AI Agent - Create Image From Prompt"; "Telegram Response" or "Save Image To Disk" (based on your wish).
4. Activate the "Telegram Response" OR "Save Image To Disk" node to specify where you want to save your image later.
5. Open the chat interface (via n8n or Telegram).
6. Type your image prompt or detailed description and send it.
7. Wait for the process to run and finish in a few seconds.
8. Check the result in your desired saving location.

Requirements
- Google Gemini account with image generation access.
- Telegram bot access and chat setup (optional).
- Connection to local storage (optional).

How To Customize
- The default image size is 1080 x 1920 pixels and the default image model is "flux". You can customize both of these values in the "Fields – Set Values" node. Supported image model options include: "flux", "kontext", "turbo", and "gptimage".
- In the "AI Agent – Create Image From Prompt" node, you can also change the AI chat model. By default, it uses Google Gemini, but you can easily replace it with OpenAI ChatGPT, Microsoft AI Copilot, or any other compatible provider.

Need Help?
Join our community on different platforms for support, inspiration and tips from others.
- Website: https://www.agentcircle.ai/
- Etsy: https://www.etsy.com/shop/AgentCircle
- Gumroad: http://agentcircle.gumroad.com/
- Discord Global: https://discord.gg/d8SkCzKwnP
- FB Page Global: https://www.facebook.com/agentcircle/
- FB Group Global: https://www.facebook.com/groups/aiagentcircle/
- X: https://x.com/agent_circle
- YouTube: https://www.youtube.com/@agentcircle
- LinkedIn: https://www.linkedin.com/company/agentcircle
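As an illustration of what those Code nodes might do, the sketch below converts base64 image data into n8n binary data for previewing or saving. The `imageBase64` field name is an assumption, and the availability of `this.helpers.prepareBinaryData` in the Code node can vary by n8n version:

```javascript
// Turn base64 image output into n8n binary data.
const base64 = $json.imageBase64.replace(/^data:image\/\w+;base64,/, '');
const buffer = Buffer.from(base64, 'base64');

return [{
  json: { size: buffer.length },
  binary: {
    data: await this.helpers.prepareBinaryData(buffer, 'generated-image.png'),
  },
}];
```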
by Richard Uren
Shopify GraphQL cursor loop

Many Shopify GraphQL queries can return a cursor you can loop over, but the n8n GraphQL node has no native ability to fetch pages. This simple three-node workflow shows how to set up a cursor to fetch all items in a collection; the query below shows the shape involved.

Note: The pageSize in the "Shopify, products" node is set to 5 to illustrate how querying by cursor works. In production, set this to a much larger value. Also, update the Endpoint in the GraphQL node to reflect your Shopify store.
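As a hedged example, a cursor-paginated products query looks like this (field names follow the standard Shopify Admin GraphQL schema):

```graphql
# Fetch one page of products, starting after an optional cursor.
query ($cursor: String) {
  products(first: 5, after: $cursor) {
    edges {
      cursor
      node {
        id
        title
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```

On each loop iteration, pass `pageInfo.endCursor` back in as the `$cursor` variable and stop when `hasNextPage` is false.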
by Eric
This is a specific use case. The ElevenLabs guide for Cal.com bookings is comprehensive, but I was having trouble with the booking API request, so I built a simple workflow to validate the request and handle the booking creation.

Who's this for?
You have an ElevenLabs voice agent (or other external service) booking meetings in your Cal.com account, and you want more control over the book_meeting tool called by the voice agent.

How's it work?
- A request is received by the webhook trigger node, sent from the ElevenLabs voice agent or another source.
- The request body contains contact info for the user with whom a meeting will be booked in Cal.com.
- The workflow validates the input data for the fields required in Cal.com (sketched below).
- If validation fails, a 400 Bad Request response is returned.
- If valid, the meeting is booked via the Cal.com API.

How do I use this?
Create a custom tool in the ElevenLabs agent setup, and connect it to the webhook trigger in this workflow. Add authorization for security. Instruct your voice agent to call this tool after it has collected the required information from the user.

Expected input structure
Note: Modify this according to your needs, but be sure to reflect your changes in all following nodes. Requirements here depend on the required fields in your Cal.com event type. If you have multiple event types in Cal.com with varying required fields, you'll need to handle that in this workflow and provide appropriate instructions in your voice agent prompt.

```json
"body": {
  "attendee_name": "Some Guy",
  "start": "2025-07-07T13:30:00Z",
  "attendee_phone": "+12125551234",
  "attendee_timezone": "America/New_York",
  "eventTypeId": 123456,
  "attendee_email": "someguy@example.com",
  "attendee_company": "Example Inc",
  "notes": "Discovery call to find synergies."
}
```

Modifications
Note: ElevenLabs doesn't handle webhook response headers or bodies, and only recognizes the response code. In other words, if the workflow responds with 400 Bad Request, that's the only info the voice agent gets back; it doesn't get any details, e.g. "User email still needed".

You can modify the structure of the expected webhook request body, and then you should reflect that structure change in all following nodes in the workflow. I.e. if you change attendee_name to attendeeFirstName and attendeeLastName, then you need to make this change in the following nodes that use these properties. You can also require, or make optional, other user data for the Cal.com event type, which would reduce or increase the data the voice agent must collect from the user. You can modify the authorization of this webhook to meet your security needs. ElevenLabs has some limitations you should be mindful of, but it also offers a secret feature which proves useful.

An improvement to this workflow could include a GET request to a CRM or other db to get info on the user interacting with the voice agent. This could reduce some of the data collection needed from the voice agent, like if you already have the user's email address, for example. I believe you can also get the user's phone number if the voice agent is set up on a dial-in interface, so the agent wouldn't need to ask for it. This all depends on your use case. A savvy step might be prompting the voice agent to get an email, and using the email in this workflow to pull enrichment data from Apollo.io or similar ;-)
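A minimal sketch of the validation step, assuming the input structure above, might look like this in an n8n Code node (adjust the required fields to match your Cal.com event type):

```javascript
// Check the webhook body for the fields the Cal.com event type requires.
const required = [
  'attendee_name',
  'start',
  'attendee_timezone',
  'eventTypeId',
  'attendee_email',
];

const body = $json.body ?? {};
const missing = required.filter((f) => body[f] === undefined || body[f] === '');

return [{
  json: {
    valid: missing.length === 0, // route on this in an IF node
    missing, // useful for logging, even though ElevenLabs only sees the status code
    ...body,
  },
}];
```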
by Jonathan
This workflow checks a Google Calendar at 8am on the first of each month to get anything that has been marked as a Holiday or Illness. It then merges the count for each person (sketched below) and sends an email with the list.

To use this workflow, you will need to set the credentials for the Google Calendar node and the Send Email node. You will also need to select the calendar ID and fill out the information in the Send Email node.

This workflow searches for events that contain "Holiday" or "Illness" in the summary. If you want to change this, you can modify it in the Switch node.
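As a rough sketch, the merge-and-count step could look like the Code node below. It assumes event summaries follow a "Name - Holiday" convention, which is an assumption for illustration rather than a requirement of the workflow:

```javascript
// Tally Holiday and Illness events per person from event summaries.
const counts = {};
for (const item of $input.all()) {
  const summary = item.json.summary ?? '';
  const type = /holiday/i.test(summary) ? 'holiday'
    : /illness/i.test(summary) ? 'illness' : null;
  if (!type) continue;
  const person = summary.split(/[-:]/)[0].trim();
  counts[person] = counts[person] ?? { holiday: 0, illness: 0 };
  counts[person][type] += 1;
}

return Object.entries(counts).map(([person, c]) => ({
  json: { person, ...c },
}));
```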
by Baptiste Fort
Still reminding people about their tasks manually every morning? Let's be honest: who wants to start the day chasing teammates about what they need to do? What if Slack could do it for you, automatically at 9 a.m. every day, without missing anything and without you lifting a finger?

In this tutorial, you'll build a simple automation with n8n that checks Airtable for active tasks and sends reminders in Slack, daily.

Here's the flow you'll build:
Schedule Trigger → Search Records (Airtable) → Send Message (Slack)

STEP 1: Set up your Airtable base
1. Create a new base called Tasks
2. Add a table (for example: Projects, To-Do, or anything relevant)
3. Add the following fields:

| Field | Type | Example |
| -------- | ----------------- | --------------------------- |
| Title | Text | Finalize quote for Client A |
| Assignee | Text | Baptiste Fort |
| Email | Email | claire@email.com |
| Status | Single select | In Progress / Done |
| Due Date | Date (dd/mm/yyyy) | 05/07/2025 |

4. Add a few sample tasks with the status In Progress so you can test your workflow later.

STEP 2: Create the trigger in n8n
In n8n, add a Schedule Trigger node and set it to run every day at 9:00 a.m.:
- Trigger interval: Days
- Days Between Triggers: 1
- Trigger at hour: 9
- Trigger at minute: 0

This is the node that kicks off the workflow every morning.

STEP 3: Search for active tasks in Airtable
This step is all about connecting n8n to your Airtable base and pulling the tasks that are still marked as "In Progress".

1. Add the Airtable node
In your n8n workflow, add a node called: Airtable → Search Records. You can find it by typing "airtable" in the node search.

2. Create your Airtable Personal Access Token
If you haven't already created your Airtable token, here's how:
🔗 Go to: https://airtable.com/create/tokens
Then:
- Name your token something like TASKS
- Under Scopes, check: ✅ data.records:read
- Under Access, select only the base you want to use (e.g. "Tasks")
- Click "Save token" and copy the personal token

3. Set up the Airtable credentials in n8n
In the Airtable node:
- Click on the Credentials field
- Select: Airtable Personal Access Token
- Click Create New
- Paste your token
- Give it a name like: My Airtable Token
- Click Save

4. Configure the node
Now fill in the parameters:
- Base: Tasks
- Table: your table (e.g. Projects or To-Do, depending on what you called it)
- Operation: Search
- Filter By Formula: {Status} = "In Progress"
- Return All: ✅ Yes (make sure it's enabled)
- Output Format: Simple

5. Test the node
Click "Execute Node". You should now see all tasks with Status = "In Progress" show up in the output (on the right-hand side of your screen).

STEP 4: Send each task to Slack
Now that we've fetched all the active tasks from Airtable, let's send them to Slack, one by one, using a loop.

Add the Slack node
Drag a new node into your n8n workflow and select: Slack → Message. Name it something like Send Slack Message. You can find it quickly by typing "Slack" into the node search bar.

Connect your Slack account
If you haven't already connected your Slack credentials:
- Go to n8n → Credentials
- Select Slack API
- Click Create new
- Paste your Slack Bot Token (from your Slack App OAuth settings)
- Give it a clear name like Slack Bot n8n
- Choose the workspace and save

Then, in the Slack node, choose this credential from the dropdown.

Configure the message
Set these parameters:
- Operation: Send
- Send Message To: Channel
- Channel: your Slack channel (e.g. #tous-n8n)
- Message Type: Simple Text Message

Message template
Paste the following inside the Message Text field:

```
New task for {{ $json["Assignee"] }}: {{ $json["Title"] }}
👉 Deadline: {{ $json["Due Date"] }}
```

Example output:
New task for Jeremy: Follow up with supplier X
👉 Deadline: 2025-07-04

Test it
Click Execute Node to verify the message is correctly sent in Slack. If the formatting works, you're ready to run it on schedule 🚀
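If you later want to narrow the reminders, Airtable's formula language can express richer filters in the same Filter By Formula field. For example, this hedged variant (built from standard Airtable functions, assuming the field names above) would only return in-progress tasks due within the next three days:

```
AND({Status} = "In Progress", IS_BEFORE({Due Date}, DATEADD(TODAY(), 3, 'days')))
```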