by Raphael De Carvalho Florencio
## What this template does
Transforms provider documentation (URLs) into an auditable, enforceable multicloud security control baseline. It:
- **Fetches and sanitizes HTML**
- **Uses AI to extract security requirements** (strict 3-line TXT blocks)
- **Composes enforceable controls** (strict 7-line TXT blocks with true-equivalence consolidation)
- **Builds the final baseline** (TXT or JSON, see **Outputs**) with a `Technology:` header
- **Returns a downloadable artifact** via webhook and can append/create the file in Google Drive

## Why it's useful
Eliminates manual copy-paste and produces a consistent, portable baseline ready for review, audit, or enforcement tooling — ideal for rapidly generating or refreshing baselines across cloud providers and services.

## Multicloud support
The workflow is multicloud by design. Provide the target cloud in the request and run the same pipeline for:
- **AWS**, **Azure**, **GCP** (out of the box)
- Extensible to other providers/services by adjusting prompts and routing logic

## How it works (high level)
1. `POST /create` (Basic Auth) with `{ cloudProvider, technology, urls[] }`
2. Input validation → generate UUID → resolve Google Drive folder (search-or-create)
3. Download & sanitize each URL
4. AI pipeline: Extractor → Composer → Baseline Builder → (optional) Baseline Auditor
5. Append/create file in Drive and return a downloadable artifact (TXT/JSON) via webhook

## Request (webhook)
- **Method:** POST
- **URL:** `https://<your-n8n>/webhook/create`
- **Auth:** Basic Auth
- **Headers:** `Content-Type: application/json`

### Example input (Postman/CLI)
```json
{
  "cloudProvider": "aws",
  "technology": "Amazon S3",
  "urls": [
    "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html",
    "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/S3/",
    "https://repost.aws/knowledge-center/secure-s3-resources"
  ]
}
```

### Field reference
- `cloudProvider` (string, required) — case-insensitive. Supported: `aws`, `azure`, `gcp`.
- `technology` (string, required) — e.g., "Amazon S3", "Azure Storage", "Google Cloud Storage".
- `urls` (string[], required) — 1–20 http(s) URLs (official/reputable docs).

Optional (Google Drive destination):
- `gdriveTargetId` (string) — Google Drive folderId used for append/create.
- `gdrivePath` (string) — Path like "DefySec/Baselines" (folders are created if missing).
- `gdriveTargetName` (string) — Folder name to find/create under root.

Optional (Assistant overrides):
- `assistantExtractorId`, `assistantComposerId`, `assistantBaselineId`, `assistantAuditorId` (strings)

### Resolution precedence
- **Drive:** `gdriveTargetId` → `gdrivePath` → `gdriveTargetName` → default folder.
- **Assistants:** explicit IDs above → dynamic resolution by name (expects `1_DefySec_Extractor`, `2_DefySec_Control_Composer`, `3_DefySec_Baseline_Builder`, `4_DefySec_Baseline_Auditor`).

## Validation
- Rejects empty `urls` or non-http(s) schemes; normalizes `cloudProvider` to `aws|azure|gcp`.
- Sanitizes fetched HTML (removes scripts/styles/headers) before AI steps.

## Outputs
- **Primary:** downloadable **TXT** file `controls_<technology>_<timestamp>.txt` (via webhook).
- **Composer outcomes:** if no groups to consolidate → `NO_CONTROLS_TO_BE_CONSOLIDATED`; if nothing valid remains → `NO_CONTROLS_FOUND`.
- **JSON path:** when the Builder stage is configured for **JSON-only** output (strict schema), the workflow returns a `.json` artifact and the Auditor validates it (see next section).
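Before the AI pipeline runs, the validation step enforces the contract above. Here is a minimal sketch of that logic, assuming the field names from the request contract; the function name and error messages are illustrative, not the workflow's actual code:

```typescript
const SUPPORTED = ["aws", "azure", "gcp"] as const;
type CloudProvider = (typeof SUPPORTED)[number];

interface CreateRequest {
  cloudProvider: string;
  technology: string;
  urls: string[];
}

// Normalizes cloudProvider and rejects empty or non-http(s) URL lists,
// mirroring the Validation rules described above.
function validateRequest(body: CreateRequest): CreateRequest & { cloudProvider: CloudProvider } {
  const cloudProvider = body.cloudProvider.toLowerCase();
  if (!SUPPORTED.includes(cloudProvider as CloudProvider)) {
    throw new Error("cloudProvider must be one of aws|azure|gcp");
  }
  if (!Array.isArray(body.urls) || body.urls.length < 1 || body.urls.length > 20) {
    throw new Error("urls must contain 1-20 entries");
  }
  for (const url of body.urls) {
    if (!/^https?:\/\//i.test(url)) {
      throw new Error(`non-http(s) URL rejected: ${url}`);
    }
  }
  return { ...body, cloudProvider: cloudProvider as CloudProvider };
}
```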
## Techniques used (from the built-in assistants)
- **Provider-aware extraction with strict TXT contract (3 lines):** Extractor limits itself to the declared provider/technology, outputs only Description/Reference/SecurityObjective, and applies a **reflexive quality check** before emitting.
- **Normalization & strict header parsing:** Composer normalizes whitespace/fences, requires the CloudProvider/Technology header, and ignores anything outside the exact 3-line block shape.
- **True-equivalence grouping & consolidation:** Composer groups **only** when intent, enforcement locus/mechanism, scope, and mode/setting all match—otherwise items remain distinct.
- **7-line enforceable control format:** Composer renders each (consolidated or unique) control in **exactly seven labeled lines** to keep results auditable and automatable.
- **Builder with JSON-only schema & technology inference:** Builder parses 7-line blocks, infers technology, consolidates true equivalents again if needed, and returns **pure JSON** matching a canonical schema (with counters in `meta`).
- **Self-evaluation loop (Auditor):** Auditor unwraps transport, validates **schema & content**, checks provider terminology/scope/automation, and returns either `GOOD_ENOUGH` or a JSON instruction set for the Builder to fix and re-emit—enabling reflective improvement.
- **Reference prioritization:** Across stages, official provider documentation is preferred in References (AWS/Azure/GCP).

## Customization & extensions
- **Prompt-reflective techniques:** keep (or extend) the Auditor loop to add more review passes and quality gates.
- **Compliance assistants:** add assistants to analyze/label controls for **HIPAA, PCI DSS, SOX** (and others), emitting mappings, gaps, and remediation notes.
- **Implementation context:** feed internal implementation docs, runbooks, or **Architecture Decision Records (ADRs)**; use these as grounding to generate or refine controls (works with local/self-hosted LLMs, too).
- **Local/self-hosted LLMs:** swap OpenAI nodes for your on-prem LLM endpoint while keeping the pipeline.
- **Provider-specific outputs:** extend the final stage to export Policy-as-Code or IaC snippets (Rego/Sentinel, CloudFormation Guard, Bicep/ARM, Terraform validations).

## Assistant configuration & prompts
Full assistant configurations and prompts (Extractor, Composer, Baseline Builder, Baseline Auditor) are available here:
https://github.com/followdrabbit/n8nlabs/tree/main/Lab03%20-%20Multicloud%20AI%20Security%20Control%20Baseline%20Builder/Assistants

## Security & privacy
- No hardcoded secrets in HTTP nodes; use n8n's Credential Manager.
- Drive operations are optional and folder-scoped.
- For sensitive environments, switch to a local LLM and provide only sanitized/approved inputs.

## Quick test (curl)
```bash
curl -X POST "https://<your-n8n>/webhook/create" \
  -u "<user>:<pass>" \
  -H "Content-Type: application/json" \
  -d '{
    "cloudProvider":"aws",
    "technology":"Amazon S3",
    "urls":[
      "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html"
    ]
  }' \
  -OJ
```
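To make the Extractor/Composer contract concrete, here is a hypothetical example of the CloudProvider/Technology header followed by one 3-line requirement block. The header and label names come from the template description above; the values are invented for illustration:

```txt
CloudProvider: AWS
Technology: Amazon S3
Description: Block public access at the account and bucket level.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
SecurityObjective: Prevent unintended public exposure of stored objects.
```

Because the Composer ignores anything outside this exact shape, downstream parsing of the 7-line controls stays deterministic.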
by SIENNA
# Automated AWS S3 / Azure / Google to Local MinIO Object Backup with Scheduling

## What this workflow does
This workflow performs automated, periodic backups of objects from an AWS S3 bucket, an Azure Container, or a Google Storage Space to a MinIO S3 bucket running locally or on a dedicated container/VM/server. It also works if the MinIO bucket is running on a remote cloud provider's infrastructure; you just need to change the URL and keys.

## Who's this intended for?
Storage administrators, cloud architects, or DevOps engineers who need a simple and scalable solution for retrieving data from AWS, Azure, or GCP.

## How it works
This workflow uses the official AWS S3 API (or the Azure Blob one) to list and download objects from a specific bucket, then sends them to MinIO using its version of the S3 API (see the sketch at the end of this description).

## Requirements
None, just a source bucket on your cloud storage provider and a destination one on MinIO. You'll also need to get MinIO running.

Using Proxmox VE? Create a MinIO LXC container: https://community-scripts.github.io/ProxmoxVE/scripts?id=minio

Need an automated backup from another cloud storage provider? Check out our templates: we've done it with AWS, Azure, and GCP, and we even have a version for FTP/SFTP servers! For a dedicated source cloud storage provider, please contact us!

These workflows can be integrated into bigger ones and modified to best suit your needs! You can, for example, replace the MinIO node with another S3 bucket from another cloud storage provider (Backblaze, Wasabi, Scaleway, OVH, ...).
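For readers curious about what the list/download/upload steps do under the hood, here is a minimal sketch of the same copy using the AWS SDK for JavaScript v3 directly. Bucket names, the MinIO endpoint, and credentials are placeholders, and pagination/scheduling are omitted; the n8n nodes handle the equivalent calls for you:

```typescript
import {
  S3Client,
  ListObjectsV2Command,
  GetObjectCommand,
  PutObjectCommand,
} from "@aws-sdk/client-s3";

// Source: a regular AWS S3 client using default credential resolution.
const source = new S3Client({ region: "eu-west-1" });

// Destination: MinIO speaks the S3 API but needs a custom endpoint
// and path-style addressing.
const minio = new S3Client({
  endpoint: "http://minio.local:9000",   // your MinIO URL
  region: "us-east-1",                   // MinIO accepts any region value
  forcePathStyle: true,
  credentials: { accessKeyId: "<minio-key>", secretAccessKey: "<minio-secret>" },
});

async function backup(srcBucket: string, dstBucket: string): Promise<void> {
  // First page only; a production run should paginate with ContinuationToken.
  const listing = await source.send(new ListObjectsV2Command({ Bucket: srcBucket }));
  for (const obj of listing.Contents ?? []) {
    const { Body } = await source.send(
      new GetObjectCommand({ Bucket: srcBucket, Key: obj.Key! }),
    );
    // Copy each object into the MinIO bucket under the same key.
    await minio.send(new PutObjectCommand({
      Bucket: dstBucket,
      Key: obj.Key!,
      Body: await Body!.transformToByteArray(),
    }));
  }
}
```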
by Trung Tran
# Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below:

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

## Who's it for
This workflow is ideal for:
- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring **AI-driven workflows** integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate **image content pipelines** with AWS S3 storage.

## How it works / What it does
1. **Trigger:** The workflow starts manually when you click "Execute Workflow".
2. **Edit Fields:** You can provide input fields such as image description, resolution, or naming convention.
3. **Create AWS S3 Bucket:** Automatically creates a new S3 bucket if it doesn't exist.
4. **Create a Folder:** Inside the bucket, a folder is created to organize generated images.
5. **Prompt Generation Agent:** An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. **Generate an Image:** The refined prompt is used to generate an image using AI.
7. **Upload File to S3:** The generated image is uploaded to the AWS S3 bucket for secure storage (a sketch of the two underlying API calls appears at the end of this description).

This workflow showcases how to combine AI + cloud storage seamlessly in an automated pipeline.

## How to set up
1. Import the workflow into n8n.
2. Configure the following credentials:
   - AWS S3 (Access Key, Secret Key, Region).
   - OpenAI API Key (for Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

## Requirements
- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

## How to customize the workflow
- **Custom Buckets:** Replace the auto-create step with an existing S3 bucket.
- **Image Variations:** Generate multiple image variations per prompt by looping the image generation step.
- **File Naming:** Adjust file naming conventions (e.g., timestamp, user input).
- **Metadata:** Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative Storage:** Swap AWS S3 with **Google Cloud Storage, Azure Blob, or Dropbox**.
- **Trigger Options:** Replace the manual trigger with **Webhook, Form Submission, or Scheduler** for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
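A minimal sketch of the generate-then-upload step, assuming the DALL·E 3 image model and a pre-existing bucket; the bucket name and object key are placeholders, and in the actual workflow the OpenAI and S3 nodes make the equivalent calls for you:

```typescript
import OpenAI from "openai";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const s3 = new S3Client({ region: "us-east-1" });

async function generateAndStore(prompt: string): Promise<void> {
  // Ask for base64 bytes instead of a temporary URL so we can upload directly.
  const image = await openai.images.generate({
    model: "dall-e-3",
    prompt,
    size: "1024x1024",
    response_format: "b64_json",
  });
  const b64 = image.data?.[0]?.b64_json;
  if (!b64) throw new Error("no image returned");

  // Store the PNG in S3 under a timestamped key.
  await s3.send(new PutObjectCommand({
    Bucket: "<your-bucket>",
    Key: `generated-images/skyline-${Date.now()}.png`,
    Body: Buffer.from(b64, "base64"),
    ContentType: "image/png",
  }));
}
```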
by Mirza Ajmal
# Description
This powerful workflow automates the evaluation of new digital tools, websites, or platforms, assessing their potential impact on your business. By leveraging Telegram for user input, Apify for deep content extraction, advanced AI for contextual analysis, and Google Sheets for personalized data integration and record-keeping, this tool delivers clear, actionable verdicts that help you determine whether a tool is worth adopting or exploring further.

## Key Features and Workflow
- **User-Friendly Input:** Submit URLs of tools or websites directly through Telegram for quick and easy evaluation requests.
- **Dynamic Content Extraction:** The workflow retrieves detailed content from the submitted URLs using the Apify web crawler, capturing rich data for analysis.
- **AI-Powered Cleaning & Analysis:** Sophisticated AI models filter out noise, distill meaningful insights, and contextualize findings based on your business profile and goals stored in Google Sheets.
- **Personalized Business Context:** Integration with Google Sheets brings in your company's specialization, current focus, and strategic objectives to tailor the analysis specifically to your needs.
- **Structured Analysis Output:** Receive a thorough, structured report including concise summaries, key considerations, business impact, benefits, risks, actionable insights, and an easy-to-understand final verdict on the tool's relevance.
- **Decision Support:** The tool estimates effort, time to value, urgency, and confidence levels, enabling informed prioritization and strategic decision-making.
- **Seamless Communication:** Results are sent back via Telegram, ensuring you get timely and direct feedback without needing to leave your messaging app.
- **Record Keeping & Tracking:** All analyses and decisions are logged automatically into Google Sheets, creating a searchable knowledge base for ongoing reference and reporting.

## Setup Instructions for Key Nodes
- **Telegram Trigger Node:** Configure your Telegram bot API credentials here. Link the bot to your Telegram account to receive messages for URL submissions.
- **URL Extraction Node:** No credentials needed. This node extracts URLs from incoming messages for processing (a minimal sketch appears at the end of this description).
- **Apify Web Crawler Node:** Go to Apify's website, sign up for an account if you don't have one, and get your API token from your profile's API tokens section. Then paste this token into the Apify node's API Key field in n8n.
- **AI Cleaning and Analysis Nodes:** Configure OpenRouter or compatible AI service API keys for content processing. Customize prompts or models if desired to align the analysis style.
- **Google Sheets Nodes:** Connect using your Google account and provide access to the specified Google Sheet. Ensure sheets for Company Details and Analysis Results exist with the proper columns as per this workflow.
- **Telegram Reply Node:** Use the Telegram bot API credentials to send analysis summaries and verdicts back to users.

## Access and Edit the Google Sheet
The Google Sheet used by this workflow is linked in the template. Please make a copy of the sheet to your own Google Drive before connecting it with this workflow. This allows you to customize the sheets, update company information, and manage analysis results securely without affecting the original template.

## Extendibility
Beyond manual URL submissions, you can enhance this workflow by scheduling automated daily checks of new product launches from platforms like Product Hunt.
The system can proactively analyze emerging tools and deliver timely updates via Telegram, email, or other channels, helping you stay ahead of innovation effortlessly.
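As a companion to the URL Extraction node described above, here is a minimal sketch assuming Telegram's Bot API message shape (`message.text`); the function name is hypothetical:

```typescript
// Pulls every http(s) URL out of an incoming Telegram message so the
// Apify crawler node can process each one.
function extractUrls(messageText: string): string[] {
  const urlPattern = /https?:\/\/[^\s<>"']+/g;
  return messageText.match(urlPattern) ?? [];
}

// Example: a submitted message containing one tool URL.
console.log(extractUrls("Check this out: https://example-tool.com/pricing"));
// -> ["https://example-tool.com/pricing"]
```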
by System Admin
This node subscribes the formatted contact data to a specific KlickTipp list. This node transforms the form data from Gravity Forms into the appropriate format required for the KlickTipp API. Applie...
by Sascha
Having a seamless flow of customer data between your online store and your marketing platform is essential. By keeping your systems synchronized, you can ensure that your marketing campaigns are accurately targeted and effective. The integration between Shopify, a leading e-commerce platform, and Mautic, an open-source marketing automation system, is not available out-of-the-box. However, with an n8n workflow you can bridge this gap.

This template will help you:
- enhance accuracy in marketing lists by ensuring that subscription changes in Shopify are instantly updated in Mautic
- improve compliance with data protection laws by respecting users' subscription preferences across platforms
- achieve integration without the need for additional plugins or software, minimizing complexity and potential points of failure

This template demonstrates the following concepts in n8n:
- working with Shopify in n8n
- control flow with the IF node
- using Webhooks
- validating Webhooks with the Crypto node (see the sketch below)
- using the GraphQL node to call the Shopify Admin API

The template consists of two parts:
1. Sync Email Subscriptions from Shopify to Mautic
2. Sync Email Subscriptions from Mautic to Shopify

How to get started?
- Create a custom app in Shopify and get the credentials needed to connect n8n to Shopify. This is needed for the Shopify Trigger.
- Create Shopify Access Token API credentials in n8n for the Shopify Trigger node.
- Create Header Auth credentials: use X-Shopify-Access-Token as the name and the Access Token from the Shopify app you created as the value. The Header Auth is necessary for the GraphQL nodes.
- Enable the Mautic API under Configuration/API Settings. After the settings are saved, you will have an additional entry in your settings menu to create API credentials for n8n.
- Create Mautic credentials in n8n.

Please make sure to read the notes in the template. For a detailed explanation please check the corresponding video: https://youtu.be/x63rrh_yJzI
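For context on what the Crypto node validates: Shopify signs the raw body of each webhook with your app's shared secret and sends the signature in the `X-Shopify-Hmac-Sha256` header. A minimal sketch of that check (the secret is a placeholder for your own credential):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recomputes the HMAC over the raw (unparsed) request body and compares
// it to the header Shopify sent. Any modification of the payload in
// transit makes the digests differ.
function isValidShopifyWebhook(
  rawBody: string,
  headerHmac: string,
  appSecret: string, // <SHOPIFY_APP_SECRET>
): boolean {
  const digest = createHmac("sha256", appSecret).update(rawBody, "utf8").digest("base64");
  const a = Buffer.from(digest);
  const b = Buffer.from(headerHmac);
  // timingSafeEqual avoids leaking information through comparison timing.
  return a.length === b.length && timingSafeEqual(a, b);
}
```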
by Miquel Colomer
Do you want to avoid bounces in your email marketing campaigns? This workflow verifies emails using the uProc.io email verifier. You need to add your credentials (Email and API Key - the real ones, located in the Integration section) to n8n. The "Create Email Item" node can be replaced by any other supported service with an email value, like Mailchimp, Calendly, MySQL, or Typeform. The "uProc" node returns a status per checked email (deliverable, undeliverable, spamtrap, softbounce, ...). The "If" node checks whether the "deliverable" status exists. If it is not present, you can mark the email as invalid to discard bounces. If the "deliverable" status is present, you can use the email in your email marketing campaigns. If you need detailed indicators for any email, you can use the tool "Communication" > "Check Email Exists (Extended)" to get advanced information.
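A minimal sketch of the If-node branching described above. The status values come from the template text; the exact field name (`status`) on the uProc node's output is an assumption:

```typescript
type VerificationStatus = "deliverable" | "undeliverable" | "spamtrap" | "softbounce";

// Only "deliverable" addresses are safe to use in a campaign; everything
// else is discarded to avoid bounces.
function keepForCampaign(result: { status: VerificationStatus }): boolean {
  return result.status === "deliverable";
}
```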
by Miquel Colomer
Do you want to discover company-related information to enrich a signup process? This workflow enriches any company by name using the uProc Get Company by Name tool. This tool combines Google Maps and email research on the internet to return results. You get no results if the company has no presence on Google Maps.

You need to add your credentials (Email and API Key - the real ones, located in the Integration section) to n8n. You can replace the "Create Company Item" node with any other supported service returning company names and countries, like Hubspot, Google Sheets, MySQL, or Typeform.

You can set up the uProc node with several parameters:
- country: the country name you want to use.
- name: the name of the company you need to locate.

Every "uProc" node returns the following fields per located company:
- name: the company's given name.
- email: the company's given email.
- cif: the company's CIF number.
- address: the company's formatted address.
- city: the city location of the company.
- state: the province location of the company.
- county: the state location of the company.
- country: the country location of the company.
- zipcode: the zipcode of the company.
- phone: the phone number of the company.
- website: the website of the company.
- latitude: the latitude of the company.
- longitude: the longitude of the company.

Next, you can save results to a CRM or Google Sheets, and prepare the returned email or phone number to launch an email or telemarketing campaign.
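For clarity, the returned record written out as a TypeScript interface. Field names follow the list above; typing latitude/longitude as numbers (rather than strings) is an assumption about the uProc response:

```typescript
interface UprocCompany {
  name: string;      // company's given name
  email: string;     // company's given email
  cif: string;       // CIF (tax ID) number
  address: string;   // formatted address
  city: string;
  state: string;     // province
  county: string;
  country: string;
  zipcode: string;
  phone: string;
  website: string;
  latitude: number;
  longitude: number;
}
```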
by Hans Blaauw
This flow is supported by a Chrome plugin created with Cursor AI. The idea was to create a Chrome plugin and a backend service in N8N to do chart analytics with OpenAI. It's a good sample of how to submit a screenshot from the browser to N8N.

Who is it for?
N8N developers who want to learn about using a Chrome plugin, an N8N webhook and OpenAI.

What opportunity does it present?
This sample opens up a whole range of N8N-connected Chrome extensions that can analyze screenshots by using OpenAI.

What does this workflow do?
The workflow contains:
- a Webhook trigger
- an OpenAI node with GPT-4O-MINI and Analyze Image selected
- a response node to send back the text that was created after analyzing the screenshot

All this is needed to talk to the Chrome extension, which is created with Cursor AI. The idea is to visit the tradingview.com crypto charts, click the Chrome plugin, and get back analytics about the shown chart in understandable language, driven by the N8N flow. With the new image analytics capabilities of OpenAI this opens up a world of opportunities.

Requirements/setup
- OpenAI API key
- Cursor AI installed
- The Chrome extension (download link in the template)
- The N8N JSON code (download link in the template)

How to customize it to your needs?
Both the Chrome extension and the N8N flow can be adapted for use on other websites. You can consider:
- analyzing a financial screen and asking questions about the data shown
- analyzing other charts
- extending the N8N workflow with other AI nodes

With AI and image analytics the sky is the limit, and in some cases it saves you from creating complex API integrations.
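A minimal sketch (Manifest V3 background script) of how such a Chrome extension can capture the visible tab and submit it to the N8N webhook. The webhook path `/webhook/analyze-chart` and the response field `text` are hypothetical; adjust them to your own workflow:

```typescript
// Runs when the user clicks the extension's toolbar button.
chrome.action.onClicked.addListener(async (tab) => {
  // captureVisibleTab returns a base64 data URL of the current viewport.
  const dataUrl = await chrome.tabs.captureVisibleTab({ format: "png" });

  // Send the screenshot (plus page URL for context) to the N8N webhook.
  const res = await fetch("https://<your-n8n>/webhook/analyze-chart", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: dataUrl, url: tab.url }),
  });

  // The workflow's response node returns the OpenAI analysis as text.
  const { text } = await res.json();
  console.log(text);
});
```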
by Angel Menendez
# CallForge - AI-Powered Product Insights Processor from Sales Calls

Automate product feedback extraction from AI-analyzed sales calls and store structured insights in Notion for data-driven product decisions.

## 🎯 Who is This For?
This workflow is designed for:
- ✅ Product managers tracking customer feedback and feature requests.
- ✅ Engineering teams identifying usability issues and AI/ML-related mentions.
- ✅ Customer success teams monitoring product pain points from real sales conversations.

It streamlines product intelligence gathering, ensuring customer insights are structured, categorized, and easily accessible in Notion for better decision-making.

## 🔍 What Problem Does This Workflow Solve?
Product teams often struggle to capture, categorize, and act on valuable feedback from sales calls. With CallForge, you can:
✔ Automatically extract and categorize product feedback from AI-analyzed sales calls.
✔ Track AI/ML-related mentions to gauge customer demand for AI-driven features.
✔ Identify feature requests and pain points for product development prioritization.
✔ Store structured feedback in Notion, reducing manual tracking and increasing visibility across teams.

This workflow eliminates manual feedback tracking, allowing product teams to focus on innovation and customer needs.

## 📌 Key Features & Workflow Steps
### 🎙️ AI-Powered Product Feedback Processing
This workflow processes AI-generated sales call insights and organizes them in Notion databases:
1. Triggers when AI sales call data is received.
2. Detects product-related feedback (feature requests, bug reports, usability issues).
3. Extracts key product insights, categorizing feedback based on customer needs.
4. Identifies AI/ML-related mentions, tracking customer interest in AI-driven solutions.
5. Aggregates feedback and categorizes it by sentiment (positive, neutral, negative).
6. Logs insights in Notion, making them accessible for product planning discussions.

### 📊 Notion Database Integration
- **Product Feedback** → Logs feature requests, usability issues, and bug reports.
- **AI Use Cases** → Tracks AI-related discussions and customer interest in machine learning solutions.

## 🛠 How to Set Up This Workflow
1. **Prepare Your AI Call Analysis Data**
   - Ensure AI-generated sales call insights are available.
   - Compatible with Gong, Fireflies.ai, Otter.ai, and other AI transcription tools.
2. **Connect Your Notion Database**
   - Set up Notion databases for:
     - 🔹 Product Feedback (logs feature requests and bug reports).
     - 🔹 AI Use Cases (tracks AI/ML mentions and customer demand).
3. **Configure n8n API Integrations**
   - Connect your **Notion API key** in n8n under "Notion API Credentials."
   - Set up **webhook triggers** to receive AI-generated sales insights.
   - **Test the workflow** using a sample AI sales call analysis.

## 🔧 How to Customize This Workflow
💡 **Modify Notion Data Structure** – Adjust fields to align with your product team's workflow.
💡 **Refine AI Data Processing Rules** – Customize how feature requests and pain points are categorized.
💡 **Integrate with Slack or Email** – Notify teams when recurring product issues emerge.
💡 **Expand with Project Management Tools** – Sync insights with Jira, Trello, or Asana to create product tickets automatically.

## ⚙️ Key Nodes Used in This Workflow
🔹 **If Nodes** – Detect if product feedback, AI mentions, or feature requests exist in AI data (a hypothetical routing sketch follows the next section).
🔹 **Notion Nodes** – Create and update structured feedback entries in Notion.
🔹 **Split Out & Aggregate Nodes** – Process multiple insights and consolidate AI-generated data.
🔹 **Wait Nodes** – Ensure smooth sequencing of API calls and database updates.
## 🚀 Why Use This Workflow?
✔ Eliminates manual sales call review for product teams.
✔ Provides structured, AI-driven insights for feature planning and prioritization.
✔ Tracks AI/ML mentions to assess demand for AI-powered solutions.
✔ Improves product development strategies by leveraging real customer insights.
✔ Scalable for teams using n8n Cloud or self-hosted deployments.

This workflow empowers product teams by transforming sales call data into actionable intelligence, optimizing feature planning, bug tracking, and AI/ML strategy. 🚀
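To illustrate the If-node routing, here is a hypothetical shape for the incoming AI call-analysis payload and the branching it drives. Every field and database name below is illustrative, not the actual Gong/Fireflies schema or your Notion setup:

```typescript
interface CallInsights {
  productFeedback?: { text: string; sentiment: "positive" | "neutral" | "negative" }[];
  aiMentions?: string[];       // AI/ML-related snippets from the call
  featureRequests?: string[];
}

// Mirrors the If nodes: each populated branch sends entries to the
// corresponding Notion database.
function route(insights: CallInsights): string[] {
  const targets = new Set<string>();
  if (insights.productFeedback?.length) targets.add("Product Feedback");
  if (insights.featureRequests?.length) targets.add("Product Feedback");
  if (insights.aiMentions?.length) targets.add("AI Use Cases");
  return [...targets]; // Notion databases that receive new entries
}
```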
by Angel Menendez
# CallForge - AI-Powered Marketing Insights Extraction from Sales Calls

Automate marketing intelligence gathering from AI-analyzed sales calls and store insights in Notion.

## 🎯 Who is This For?
This workflow is designed for:
- ✅ Marketing teams looking to extract trends and insights from sales conversations.
- ✅ Product managers who need direct customer feedback from sales calls.
- ✅ Revenue operations (RevOps) teams optimizing AI-driven call analysis.

It streamlines AI-powered marketing intelligence, identifying customer pain points, competitor mentions, and recurring trends—all automatically stored in Notion.

## 🔍 What Problem Does This Workflow Solve?
Manually reviewing sales call transcripts for marketing insights is time-consuming and inconsistent. With CallForge, you can:
✔ Extract key marketing insights from AI-analyzed sales calls.
✔ Track recurring discussion topics across multiple conversations.
✔ Generate actionable marketing recommendations for strategy and content.
✔ Store structured insights in Notion for seamless access.

This automation eliminates manual work and ensures marketing teams get data-driven insights from real customer conversations.

## 📌 Key Features & Workflow Steps
### 🎙️ AI-Driven Marketing Insights Processing
This workflow processes AI-generated sales call insights and organizes them in Notion databases:
1. Triggers when AI sales call data is received.
2. Identifies marketing-related data (trends, customer pain points, competitor mentions).
3. Extracts key marketing insights, categorizing product discussions and recurring topics.
4. Logs trends across multiple calls, ensuring marketing teams spot recurring themes.
5. Processes actionable insights, capturing marketing strategy recommendations.
6. Stores all findings in Notion, enabling structured, searchable insights.

### 📊 Notion Database Integration
- **Marketing Insights** → Logs key trends and product mentions from sales calls.
- **Recurring Topics** → Tracks frequently discussed themes across calls.
- **Actionable Recommendations** → Stores AI-generated recommendations for marketing teams.

## 🛠 How to Set Up This Workflow
1. **Prepare Your AI Call Analysis Data**
   - Ensure AI-generated sales call insights are available.
   - Compatible with Gong, Fireflies.ai, Otter.ai, and other AI transcription tools.
2. **Connect Your Notion Database**
   - Set up Notion databases for:
     - 🔹 Marketing Insights (logs trends and product mentions)
     - 🔹 Recurring Topics (tracks frequently discussed customer concerns)
     - 🔹 Actionable Recommendations (stores marketing strategy insights)
3. **Configure n8n API Integrations**
   - Connect your **Notion API key** in n8n under "Notion API Credentials."
   - Set up **webhook triggers** to receive AI-generated sales insights.
   - **Test the workflow** using a sample AI sales call analysis.

## 🔧 How to Customize This Workflow
💡 **Modify Notion Data Structure** – Adjust fields to match marketing strategy needs.
💡 **Refine AI Data Processing Rules** – Customize what insights are extracted and logged.
💡 **Integrate with Slack or Email** – Notify teams when key marketing trends emerge.
💡 **Expand CRM Integration** – Sync insights with HubSpot, Salesforce, or Pipedrive.
## CallForge Workflow Series
1. CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
2. CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
3. CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
4. CallForge - 04 - AI Workflow for Gong.io Sales Calls
5. CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
6. CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
7. CallForge - 07 - AI Marketing Data Processing with Gong & Notion
8. CallForge - 08 - AI Product Insights from Sales Calls with Notion

## ⚙️ Key Nodes Used in This Workflow
🔹 **If Nodes** – Detect if marketing insights, recurring topics, or recommendations exist in AI data.
🔹 **Notion Nodes** – Create and update entries in Notion databases.
🔹 **Split Out & Aggregate Nodes** – Process multiple insights and consolidate AI-generated data.
🔹 **Wait Nodes** – Ensure smooth sequencing of API calls and database updates.

## 🚀 Why Use This Workflow?
✔ Eliminates manual sales call review for marketing teams.
✔ Provides structured, AI-driven insights for marketing and product strategy.
✔ Tracks competitor mentions and customer pain points automatically.
✔ Improves content marketing and campaign planning with real customer insights.
✔ Scalable for teams using n8n Cloud or self-hosted deployments.

This workflow empowers marketing teams by transforming sales call data into actionable intelligence, streamlining strategy, content planning, and competitor analysis. 🚀
by tanaypant
This workflow is the third of three. You can find the other workflows here:
- Incident Response Workflow - Part 1
- Incident Response Workflow - Part 2
- Incident Response Workflow - Part 3

We have the following nodes in the workflow:
- Webhook node: This trigger node listens for the event when the Resolve button is clicked.
- PagerDuty node: This node changes the status of the incident report from Acknowledged to Resolved in PagerDuty (see the sketch below).
- Jira Software node: This node moves the incident issue to Done.
- Mattermost node: This node publishes a message in the auxiliary channel mentioning that the incident has been marked as resolved in PagerDuty and Jira.
- Mattermost node: This node publishes a message in the specified Incidents channel that the incident has been resolved by the on-call team.
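For context, a sketch of the REST call the PagerDuty node makes under the hood when resolving an incident (PagerDuty REST API v2); the token, email, and incident ID are placeholders:

```typescript
// Marks a PagerDuty incident as resolved.
async function resolveIncident(incidentId: string): Promise<void> {
  const res = await fetch(`https://api.pagerduty.com/incidents/${incidentId}`, {
    method: "PUT",
    headers: {
      "Authorization": "Token token=<PAGERDUTY_API_TOKEN>",
      "Content-Type": "application/json",
      "From": "oncall@example.com", // PagerDuty requires the requester's email
    },
    body: JSON.stringify({
      incident: { type: "incident_reference", status: "resolved" },
    }),
  });
  if (!res.ok) throw new Error(`PagerDuty returned ${res.status}`);
}
```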