by Arnaud MARIE
Replicate Line Items on New Deal in HubSpot Workflow

Use Case
This workflow solves the problem of manually copying line items from one deal to another in HubSpot, reducing manual work and minimizing errors.

What this workflow does
- **Triggers** upon receiving a webhook with deal IDs.
- **Retrieves** the IDs of the won and created deals.
- **Fetches** line items associated with the won deal.
- **Extracts** product SKUs from the retrieved line items.
- **Fetches** product details based on SKUs.
- **Creates** new line items for the created deal and associates them.
- **Sends** a Slack notification with success details.

Set up steps
1. Create a HubSpot deal workflow.
   1.1 Set up your trigger (e.g., when deal stage = Won).
   1.2 Add step: Create Record (deal).
   1.3 Add step: Send webhook. The webhook should be a GET to your n8n trigger. Set two query parameters:
      - deal_id_won: the Record ID of the deal triggering the HubSpot workflow.
      - deal_id_create: the Record ID of the deal created above (click Insert Data -> The created object).
2. Set up your HubSpot app token in HubSpot -> Settings -> Integration -> Private Apps.
3. Set up your HubSpot token integration using the predefined model.
4. Set up your Slack connection.
5. Add an error workflow to monitor errors.
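As a minimal sketch of the SKU-extraction step, an n8n Code node along these lines could collect SKUs from the line items returned for the won deal (the `hs_sku` property name is an assumption and may differ in your portal):

```javascript
// n8n Code node ("Run Once for All Items") — illustrative sketch only.
// Assumes each incoming item is a HubSpot line item whose properties
// include an SKU field (assumed here to be `hs_sku`).
const skus = [];

for (const item of items) {
  const sku = item.json.properties?.hs_sku;
  if (sku) {
    skus.push(sku);
  }
}

// Return one item per SKU so the next node can fetch product details.
return skus.map((sku) => ({ json: { sku } }));
```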
by n8n Team
The workflow is an automated process designed for incident management and tracking, integrating Splunk alerts with a Jira ticketing system using n8n. The initial step is a Webhook Trigger, set up to receive POST requests with data from Splunk to initiate the workflow. Once the workflow is triggered, the "Set Host Name" node cleans up the hostname received from Splunk, ensuring that it is alphanumeric for consistency and security purposes. The "Search Ticket" node then interacts with Jira through a Jira Query Language (JQL) request to locate any existing issues that match the sanitized hostname. The workflow branches at the "IF Ticket Not Exists" node, which checks for the presence of a key indicating a matching issue. On the false path (an issue exists), the "Add Ticket Comment" node appends a new comment to the existing Jira issue, encapsulating details from the Splunk alert such as the timestamp and the alert description; on the true path, the workflow creates a new Jira issue.
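To make the sanitization and search steps concrete, here is a hedged Code-node sketch; the payload field (`body.hostname`), project key, and JQL clauses are illustrative assumptions:

```javascript
// Sketch of the "Set Host Name" cleanup — assumes the Splunk payload
// carries the host in `body.hostname`; adjust to your alert format.
const raw = $json.body?.hostname ?? '';

// Keep only alphanumeric characters, per the rule described above.
const hostname = raw.replace(/[^a-zA-Z0-9]/g, '');

// The JQL used by the "Search Ticket" node could then look like this
// (the "SUP" project key and summary matching are assumptions):
const jql = `project = SUP AND summary ~ "${hostname}" AND statusCategory != Done`;

return [{ json: { hostname, jql } }];
```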
by Eric
This is a specific use case. The ElevenLabs guide for Cal.com bookings is comprehensive, but I was having trouble with the booking API request, so I built a simple workflow to validate the request and handle the booking creation.

Who's this for?
You have an ElevenLabs voice agent (or other external service) booking meetings in your Cal.com account, and you want more control over the book_meeting tool called by the voice agent.

How's it work?
- Request is received by the webhook trigger node.
  - Request sent from ElevenLabs voice agent, or other source.
  - Request body contains contact info for the user with whom a meeting will be booked in Cal.com.
- Workflow validates input data for required fields in Cal.com.
  - If validation fails, a 400 Bad Request response is returned.
  - If valid, the meeting is booked via the Cal.com API.

How do I use this?
- Create a custom tool in the ElevenLabs agent setup, and connect it to the webhook trigger in this workflow. Add authorization for security.
- Instruct your voice agent to call this tool after it has collected the required information from the user.

Expected input structure
Note: Modify this according to your needs, but be sure to reflect your changes in all following nodes. Requirements here depend on required fields in your Cal.com event type. If you have multiple event types in Cal.com with varying required fields, you'll need to handle this in this workflow and provide appropriate instructions in your *voice agent prompt*.

```json
"body": {
  "attendee_name": "Some Guy",
  "start": "2025-07-07T13:30:00Z",
  "attendee_phone": "+12125551234",
  "attendee_timezone": "America/New_York",
  "eventTypeId": 123456,
  "attendee_email": "someguy@example.com",
  "attendee_company": "Example Inc",
  "notes": "Discovery call to find synergies."
}
```

Modifications
Note: ElevenLabs doesn't handle webhook response headers or body, and only recognizes the response code. In other words, if the workflow responds with 400 Bad Request, that's the only info the voice agent gets back; it doesn't get any details, e.g. "User email still needed".
- You can modify the structure of the expected webhook request body, and then you should reflect that structure change in all following nodes in the workflow. I.e., if you change attendee_name to attendeeFirstName and attendeeLastName, then you need to make this change in the following nodes that use these properties.
- You can also require, or make optional, other user data for the Cal.com event type, which would reduce or increase the data the voice agent must collect from the user.
- You can modify the authorization of this webhook to meet your security needs. ElevenLabs has some limitations you should be mindful of, but it also offers a secret feature which proves useful.
- An improvement to this workflow could include a GET request to a CRM or other DB to get info on the user interacting with the voice agent. This could reduce some of the data collection needed from the voice agent, e.g. if you already have the user's email address. I believe you can also get the user's phone number if the voice agent is set up on a dial-in interface, so the agent wouldn't need to ask for it. This all depends on your use case. A savvy step might be prompting the voice agent to get an email, and using the email in this workflow to pull enrichment data from Apollo.io or similar ;-)
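To make the validation step concrete, here is a hedged Code-node sketch that checks required fields before the booking call; which fields are actually required depends on your Cal.com event type, so treat this list as an assumption:

```javascript
// Sketch of the validation step — field names follow the expected
// input structure above; adjust the required list to your event type.
const required = [
  'attendee_name',
  'attendee_email',
  'attendee_timezone',
  'start',
  'eventTypeId',
];

const body = $json.body ?? {};
const missing = required.filter((field) => !body[field]);

if (missing.length > 0) {
  // Downstream, an IF node can route this to a "Respond to Webhook"
  // node that returns 400 Bad Request.
  return [{ json: { valid: false, missing } }];
}

return [{ json: { valid: true, ...body } }];
```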
by Samir Saci
Context
Hey! I'm Samir, a Supply Chain Data Scientist from Paris who spent six years in China studying and working while struggling to learn Mandarin. I know the challenges of mastering a complex language like Chinese, and my greatest support was flash cards. Therefore, I designed this workflow to support fellow Mandarin learners by automating flashcard creation using n8n, so they can focus more on learning and less on manual data entry.

For business inquiries, you can add me here.

Who is this template for?
This workflow template is designed for language learners and educators who want to automate the creation of flashcards for Mandarin (or any other language) using the Google Translate API, an AI agent for phonetic transcription and an illustrative sentence, and a free image retrieval API.

Why?
If you use the open-source application Anki, this workflow will help you automatically generate personalized study materials.

How?
Let us imagine you want to learn how to say the word Contract in Mandarin. The workflow will automatically:
- Translate the word into Simplified Mandarin (Mandarin: 合同)
- Provide the phonetic transcription (Pinyin: Hétóng)
- Generate an example sentence (Example: 我们签订了一份合同。)
- Download an illustrative picture (for example, a picture of a contract signature)

All these fields are automatically recorded in a Google Sheet, making it easy to import into Anki and generate flashcards instantly.

What do I need to start?
This workflow can be used with the free tier plans of the services used. It does not require any advanced programming skills.

Prerequisites
- A Google Drive account with a folder including a Google Sheet
- API credentials: Google Drive API, Google Sheets API and Google Translate API activated with OAuth2 credentials
- A free API key from pexels.com
- A Google Sheet with the expected columns

Next
Follow the sticky notes to set up the parameters inside each node and get ready to power up your learning. I have detailed the steps in a short tutorial: Check My Tutorial

Notes
- This workflow can be used for any language. In the AI Agent prompt, you just need to replace the word pinyin with phonetic transcription.
- You can adapt the trigger to operate the workflow the way you want. These operations can be performed in batch or triggered by Telegram, email, or webhook.
- If you want to learn more about how I used Anki flash cards to learn Mandarin: Blog Article about Anki Flash Cards

This workflow was created with n8n 1.82.1. Submitted: March 17th, 2025
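As an illustration of the image-retrieval step, here is a hedged sketch of querying the Pexels search API for an illustrative picture; the query term, `per_page` value, and the use of `fetch` in a Code node (available in recent n8n versions — otherwise use an HTTP Request node) are assumptions:

```javascript
// Hedged sketch: fetch one illustrative photo from Pexels for the word.
// Requires a free Pexels API key; endpoint shape follows the public docs.
const word = 'contract'; // e.g., the English source word from the sheet

const response = await fetch(
  `https://api.pexels.com/v1/search?query=${encodeURIComponent(word)}&per_page=1`,
  { headers: { Authorization: 'YOUR_PEXELS_API_KEY' } },
);

const data = await response.json();
const photoUrl = data.photos?.[0]?.src?.medium ?? null;

return [{ json: { word, photoUrl } }];
```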
by David w/ SimpleGrow
This n8n workflow tracks user engagement in a specific WhatsApp group by capturing incoming messages via a Whapi webhook. It first filters messages to ensure they come from the correct group, then identifies the message type: text, emoji reaction, voice, or image. The workflow searches for the user in an Airtable database using their WhatsApp ID and increments their message count by one. It updates the Airtable record with the new count and the date of the last interaction. This automated process helps measure user activity and supports engagement initiatives like weekly raffles or rewards. The system is flexible and can be expanded to include more message types or additional actions. Overall, it provides a seamless way to encourage and track user participation in your WhatsApp community.
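For orientation, the count-increment logic between the Airtable search and update nodes could look like this hedged sketch; the field names ("Message Count", "Last Interaction") are assumptions and must match your base:

```javascript
// Sketch of the increment step — input is the record found by WhatsApp ID.
const record = items[0].json;

const currentCount = Number(record.fields?.['Message Count'] ?? 0);

// Output shape for an Airtable "update" node: record ID plus new values.
return [{
  json: {
    id: record.id,
    fields: {
      'Message Count': currentCount + 1,
      'Last Interaction': new Date().toISOString().slice(0, 10),
    },
  },
}];
```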
by Davide
This workflow is designed to intelligently route user queries to the most suitable large language model (LLM) based on the type of request received in a chat environment. It uses structured classification and model selection to optimize both performance and cost-efficiency in AI-driven conversations, dynamically routing requests to specialized AI models based on content type.

Benefits
- **Smart Model Routing**: Reduces costs by using lighter models for general tasks and reserving heavier models for complex needs.
- **Scalability**: Easily expandable by adding more request types or LLMs.
- **Maintainability**: Clear logic separation between classification, model routing, and execution.
- **Personalization**: Can be integrated with session IDs for per-user memory, enabling personalized conversations.
- **Speed Optimization**: Fast models like GPT-4.1 mini or Gemini Flash are chosen for tasks where speed is a priority.

How It Works
1. Input handling: The workflow starts with the "When chat message received" node, which triggers the process when a chat message is received. The input includes the chat message (chatInput) and a session ID (sessionId).
2. Request classification: The "Request Type" node uses an OpenAI model (gpt-4.1-mini) to classify the incoming request into one of four categories:
   - general: for general queries
   - reasoning: for reasoning-based questions
   - coding: for code-related requests
   - google: for queries requiring Google tools
   The classification is structured using the "Structured Output Parser" node, which enforces a consistent output format.
3. Model selection: The "Model Selector" node routes the request to one of four AI models based on the classification:
   - Opus 4 (Claude 4 Sonnet): used for coding requests
   - Gemini Thinking Pro: used for reasoning requests
   - GPT-4.1 mini: used for general requests
   - Perplexity: used for search (Google-related) requests
4. AI processing: The selected model processes the request via the "AI Agent" node, which includes intermediate steps for complex tasks. The "Simple Memory" node retains session context using the provided sessionId, enabling multi-turn conversations.
5. Output: The final response is generated by the chosen model and returned to the user.

Set Up Steps
1. Configure the trigger: Ensure the "When chat message received" node is set up with the correct webhook ID to receive chat inputs.
2. Define the classification logic: Adjust the prompt in the "Request Type" node to refine classification accuracy, and verify that the output schema in the "Structured Output Parser" node matches the expected categories (general, reasoning, coding, google).
3. Connect the AI models: Link each model node (Opus 4, Gemini Thinking Pro, GPT-4.1 mini, Perplexity) to the "Model Selector" node, and ensure credentials (API keys) for each model are correctly configured in their respective nodes.
4. Set up memory: Configure the "Simple Memory" node to use the sessionId from the input for context retention.
5. Test the workflow: Send test inputs to verify classification and model routing, and check intermediate outputs (e.g., request_type) to ensure correct model selection.
6. Activate the workflow: Toggle the workflow to "Active" in n8n after testing.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
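For reference, the structured classification could be enforced with a schema like the following hedged sketch; the exact schema syntax accepted by the Structured Output Parser node may differ from this plain JSON Schema form:

```javascript
// Sketch of a JSON Schema constraining the classifier's output to the
// four routing categories consumed by the Model Selector.
const classificationSchema = {
  type: 'object',
  properties: {
    request_type: {
      type: 'string',
      enum: ['general', 'reasoning', 'coding', 'google'],
      description: 'Category used by the Model Selector to pick an LLM',
    },
  },
  required: ['request_type'],
};
```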
by Jonathan
This workflow is part of an MSP collection, which is publicly available on GitHub. It archives or unarchives a Clockify project, depending on a Syncro status. Note that Syncro should be set up with a webhook via 'Notification Set for Ticket - Status was changed'. It doesn't handle merging of tickets, as Syncro doesn't support a 'Notification Set' for merged tickets, so you should change a ticket to 'Resolved' first before merging it.

Prerequisites
- A Clockify account and credentials

Nodes
- Webhook node triggers the workflow.
- IF node filters projects that don't have the status 'Resolved'.
- Clockify nodes get all projects that do (or don't) have the status 'Resolved', based on the IF route.
- HTTP Request nodes unarchive unresolved projects and archive resolved projects, respectively.
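For orientation, archiving a project through the Clockify REST API looks roughly like the sketch below; the workspace/project IDs are placeholders, and you should confirm the endpoint against the current Clockify API docs:

```javascript
// Hedged sketch of the archive call made by the HTTP Request node.
// Clockify's v1 API updates a project with a PUT; `archived: true` archives it.
const workspaceId = 'YOUR_WORKSPACE_ID'; // placeholder
const projectId = 'YOUR_PROJECT_ID';     // placeholder

const response = await fetch(
  `https://api.clockify.me/api/v1/workspaces/${workspaceId}/projects/${projectId}`,
  {
    method: 'PUT',
    headers: {
      'X-Api-Key': 'YOUR_CLOCKIFY_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ archived: true }), // false to unarchive
  },
);

return [{ json: await response.json() }];
```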
by Jonathan
This workflow will take a timer entry from Clockify and submit it to a matching ticket in Syncro. It saves the time entry ID from Clockify and the time entry ID from Syncro into a Google Sheet. Then, it will check if a match already exists from a previous update and will update the same time entry if the description or time is changed in Clockify. There is a Set node with the names and Syncro IDs of technicians. If you have multiple technicians with the same name, this won't work for you. Likewise, if the name in Clockify doesn't exactly match what you put in the Set node, it won't work. You also need to set up a webhook in Clockify set to trigger on "Time entry updated (anyone)" and pointed at your workflow. Configured this way, you can start and stop time entries at will, and it won't do anything until you change the description.

> This workflow is part of an MSP collection; the original can be found here: https://github.com/bionemesis/n8nsyncro
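A hedged sketch of the name-to-ID lookup the Set node effectively performs, expressed as Code-node logic; the names and Syncro IDs below are placeholders:

```javascript
// Map of Clockify user names to Syncro technician IDs. Names must
// exactly match Clockify, as noted above.
const technicians = {
  'Jane Doe': 101,  // placeholder IDs
  'John Smith': 102,
};

const clockifyUserName = $json.user?.name ?? '';
const syncroTechId = technicians[clockifyUserName];

if (!syncroTechId) {
  throw new Error(`No Syncro ID configured for "${clockifyUserName}"`);
}

return [{ json: { syncroTechId } }];
```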
by n8n Team
Who this template is for
This template is for everyone who needs to work with XML data a lot and wants to convert it to JSON instead.

Use case
Many products still work with XML files as their main language. Unfortunately, not every piece of software still supports XML, as many have switched to more modern storage formats such as JSON. This workflow is designed to handle the conversion of XML data to JSON format via a webhook call, with error handling and Slack notifications integrated into the process.

How this workflow works
1. Triggering the workflow: This workflow initiates upon receiving an HTTP POST request at the webhook endpoint specified in the "POST" node. The endpoint can be accessed externally by sending a POST request to its URL.
2. Data routing and processing: Upon receiving the POST request, the Switch node routes the workflow's path based on conditions determined by the content type of the incoming data or any encountered errors. The Extract From File and Edit Fields (Set) nodes manage XML input processing, adapting their actions according to the data's content type.
3. XML to JSON conversion: The XML data extracted from the input is passed through the "XML" node, which performs the conversion process, transforming it into JSON format.
4. Response handling: If the XML-to-JSON conversion is successful, a success response is sent back with a status of "OK" and the converted JSON data. If there are any errors during the conversion, an error response is sent back with a status of "error" and an error message.
5. Error handling: In case of an error during processing, the workflow sends a notification to a Slack channel designated for error reporting.

Set up steps
1. Set up your own webhook URL in the Webhook node. While building or testing a workflow, use a test webhook URL. When your workflow is ready, switch to using the production webhook URL.
2. Set credentials for Slack.
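For context, n8n's XML node is built on xml2js, so a Code-node equivalent of the conversion step could look like this hedged sketch; whether `require` is available in the Code node depends on your n8n sandbox settings:

```javascript
// Hedged sketch of the XML-to-JSON conversion with the same success/error
// response shapes described above.
const { parseStringPromise } = require('xml2js');

const xml = $json.body; // raw XML string from the webhook

try {
  const json = await parseStringPromise(xml, { explicitArray: false });
  return [{ json: { status: 'OK', data: json } }];
} catch (error) {
  // Mirrors the workflow's error response path.
  return [{ json: { status: 'error', message: error.message } }];
}
```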
by Charles
Modern AI systems are powerful but pose privacy risks when handling sensitive data. Organizations need AI capabilities while ensuring:
- Sensitive data never leaves secure environments
- Compliance with regulations (GDPR, HIPAA, PCI, SOX)
- Real-time decision making about data sensitivity
- Comprehensive audit trails for regulatory review

The Concept: Intelligent Data Classification + Smart Routing
The goal of this concept is to build the foundations of the safe and compliant use of LLMs in agentic workflows by automatically detecting sensitive data, applying sanitization rules, and intelligently routing requests through secure processing channels. This workflow analyzes the user's chat or webhook input and attempts to detect PII using the Enhanced PII Pattern Detector. If PII is detected, the workflow processes that input via a series of compliance, auditing, and security steps which log and sanitize the request before any LLM is pinged.

Why Multi-Tier Routing?
Traditional systems use binary decisions (sensitive/not sensitive). Our 3-tier approach provides:
- Granular Security: Critical PII gets maximum protection
- Performance Optimization: Clean data gets full cloud capabilities
- Cost Efficiency: Expensive local processing only when needed
- User Experience: Maintains conversational flow across security levels

Why Context-Aware Detection?
Regex patterns alone miss contextual sensitivity. Our approach:
- Catches Intent: A "bank account" discussion is sensitive even without account numbers
- Reduces False Negatives: Medical discussions stay secure even without explicit medical IDs
- Proactive Protection: Identifies sensitive contexts before PII is shared
- Compliance Alignment: Matches how regulations actually define sensitive data

Why Risk Scoring vs Binary Classification?
Binary PII detection creates artificial boundaries. Risk scoring provides:
- Nuanced Decisions: Multiple low-risk patterns might aggregate to high risk
- Adaptive Thresholds: Organizations can adjust sensitivity based on their needs
- Better UX: Users aren't unnecessarily restricted for low-risk scenarios
- Audit Transparency: Clear reasoning for every routing decision

Why Comprehensive Monitoring?
Privacy systems require trust and verification:
- Compliance Proof: Audit trails demonstrate regulatory compliance
- Performance Optimization: Identify bottlenecks and improve efficiency
- Security Validation: Ensure no sensitive data leakage occurs
- Operational Insights: Understand usage patterns and system health

How to Install
All you will need for this workflow are credentials for your LLM providers, such as Ollama, OpenRouter, OpenAI, or Anthropic. This workflow is customizable and allows you to define the best LLM and storage/memory solutions for your specific use case.
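To illustrate the risk-scoring idea described above, here is a hedged Code-node sketch; the patterns, weights, and tier thresholds are illustrative assumptions, not the workflow's actual detector:

```javascript
// Sketch of pattern-based risk scoring with a context-aware cue.
const text = $json.chatInput ?? '';

const patterns = [
  { name: 'ssn', regex: /\b\d{3}-\d{2}-\d{4}\b/, weight: 50 },
  { name: 'credit_card', regex: /\b(?:\d[ -]?){13,16}\b/, weight: 40 },
  { name: 'email', regex: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/, weight: 15 },
  // Contextual sensitivity: topic is risky even without explicit IDs.
  { name: 'financial_context', regex: /\b(bank account|routing number)\b/i, weight: 20 },
];

const matches = patterns.filter((p) => p.regex.test(text));
const riskScore = matches.reduce((sum, p) => sum + p.weight, 0);

// Aggregate score drives the 3-tier routing decision (thresholds assumed).
const tier = riskScore >= 50 ? 'local' : riskScore >= 20 ? 'sanitized' : 'cloud';

return [{ json: { riskScore, tier, matched: matches.map((p) => p.name) } }];
```

Note how two low-weight matches (e.g., an email plus a financial-context phrase) aggregate past the "sanitized" threshold, which is exactly the nuance binary classification misses.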
by Sk developer
Automation Flow: Image to Image Using GPT Sora
This flow automates the process of generating images from a prompt and a reference image via the Sora GPT Image API from RapidAPI. The generated images are stored in Google Drive, and details are logged in Google Sheets.

Nodes Overview
1. On Form Submission
   - **Type**: n8n-nodes-base.formTrigger
   - **Description**: Triggers when a user submits the form containing the prompt and image URL. It ensures the form fields are filled in and ready for processing.
   - Form fields:
     - Prompt: a text description of the desired image
     - Image URL: the URL of the reference image to be used
     - Webhook ID: unique identifier for the form submission
2. HTTP Request to Sora GPT Image API
   - **Type**: n8n-nodes-base.httpRequest
   - **Description**: Sends the prompt and image URL to the Sora GPT Image API to generate a new image based on the provided inputs.
   - API endpoint: Sora GPT Image API (via RapidAPI)
   - Method: POST
   - Body parameters:
     - Prompt: user-provided text
     - Image URL: the reference image URL
     - Width & Height: image size is set to 1024x1024
3. Code (Base64 Conversion)
   - **Type**: n8n-nodes-base.code
   - **Description**: Processes the base64-encoded image data returned from the API, decoding and formatting the image for upload to Google Drive.
   - Output: converts the base64 string into a binary JPEG file
4. Upload Image to Google Drive
   - **Type**: n8n-nodes-base.googleDrive
   - **Description**: Uploads the generated image to Google Drive, storing it in a designated folder.
   - Authentication: Google Service Account
   - File name: set dynamically from the previous node
5. Log Details to Google Sheets
   - **Type**: n8n-nodes-base.googleSheets
   - **Description**: Logs the prompt, generated image, and generation date into a Google Sheets document for tracking and auditing purposes.
   - Columns mapped:
     - Prompt: the user's input text
     - Image: the name of the generated image file
     - Generated Date: date and time of image generation

Flow Summary
1. User submits form: triggered when the form with the prompt and image URL is submitted.
2. Image generation: the data is sent to the Sora GPT Image API on RapidAPI to generate the image.
3. Image processing: the generated image (base64 format) is decoded and saved as a file (see the sketch after this section).
4. Google Drive upload: the image is uploaded to Google Drive for storage.
5. Google Sheets logging: all relevant details (prompt, image, date) are saved in Google Sheets.

Benefits
- **Automated image creation**: Quickly generate images using AI from a simple prompt and reference image via RapidAPI.
- **Efficient workflow**: The entire process, from form submission to image generation and storage, is automated, saving time and reducing manual work.
- **Centralized storage**: Generated images are stored in Google Drive, ensuring easy access and organization.
- **Audit trail**: The details of each generated image are logged in Google Sheets, making it easy to track, review, and manage past creations.
- **Scalable and reusable**: Can be adapted to multiple use cases, such as creative design, marketing materials, or social media content generation.

Problems Solved
- **Manual image editing**: Eliminates the need for manual image manipulation and creation by generating images automatically from user inputs.
- **Disorganized file storage**: With automatic uploads to Google Drive, images are stored in a centralized, organized manner.
- **Lack of record-keeping**: Logging image generation details in Google Sheets keeps a record of past creations, improving tracking and management.
- **Time-consuming processes**: The automation drastically reduces the time spent on manual tasks, allowing users to focus on other aspects of their work or creative processes.

This flow simplifies the creation of AI-generated images from user inputs, leveraging the power of the Sora GPT Image API via RapidAPI, making it a powerful tool for creative, design, and marketing purposes.
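For the base64-to-binary step, a hedged sketch of what the "Code (Base64 Conversion)" node could do is shown below; the response field name (`result`) is an assumption and should be adjusted to the actual API payload:

```javascript
// Decode the base64 image string from the API into an n8n binary file.
const base64Image = $json.result; // assumed field name — check the API response

const buffer = Buffer.from(base64Image, 'base64');

// prepareBinaryData attaches the buffer as a named binary property that
// the Google Drive node can upload directly.
return [{
  json: { fileName: 'generated-image.jpg' },
  binary: {
    data: await this.helpers.prepareBinaryData(buffer, 'generated-image.jpg', 'image/jpeg'),
  },
}];
```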
by Encoresky
This workflow automates the process of handling conversation transcriptions and distributing key information across your organization. Here's what it does:
1. Trigger: The workflow is initiated via a webhook that receives a transcription (e.g., from a call or meeting).
2. Summarization & extraction: Using AI, the transcription is summarized, and key information is extracted, such as action items, departments involved, and client details.
3. Department notifications: The relevant summarized information is automatically routed to specific departments via email based on content classification.
4. CRM sync: The summarized version is saved to the associated contact or deal in HubSpot for future reference and visibility.
5. Multi-channel alerts: The summary is also sent via WhatsApp and Slack to keep internal teams instantly informed, regardless of platform.

Use case: Ideal for sales, customer service, or operations teams who manage client conversations and want to ensure seamless cross-departmental communication, documentation, and follow-up.

Apps used:
- Webhook (trigger)
- OpenAI (or other AI/NLP for summarization)
- HubSpot
- Email
- Slack
- WhatsApp (via Twilio or third-party provider)
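As a hedged illustration of what the summarization and extraction step could hand to the routing logic, here is a sketch of a structured output; every field name and value is an illustrative assumption, not the workflow's actual schema:

```javascript
// Sketch of the extracted structure that downstream nodes (email routing,
// HubSpot sync, Slack/WhatsApp alerts) could consume.
return [{
  json: {
    summary: 'Client requested an updated quote and a follow-up call next week.',
    actionItems: ['Send revised quote', 'Schedule follow-up call'],
    departments: ['sales'], // drives the department email routing
    client: { name: 'Acme Corp', contactEmail: 'jane@acme.example' }, // hypothetical
  },
}];
```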