by Aditya Gaur
**Who is this template for?**

This template is designed for developers, DevOps engineers, and automation enthusiasts who want to streamline their GitLab merge request process using n8n, a low-code workflow automation tool. It eliminates manual intervention by automating the merging of GitLab branches through API calls.

**How it works**

1. **Trigger the workflow**: The workflow can be triggered by a webhook, a scheduled event, or a GitLab event (e.g., a new merge request is created or approved).
2. **Fetch merge request details**: n8n makes an API call to GitLab to retrieve merge request details.
3. **Check merge conditions**: The workflow validates whether the merge request meets predefined conditions (e.g., approvals met, CI/CD pipelines passed).
4. **Perform the merge**: If all conditions are met, n8n sends a request to the GitLab API to merge the branch automatically.

**Setup Steps**

1. **Prerequisites**
   - An n8n instance (self-hosted or Cloud)
   - A GitLab personal access token with API access
   - A GitLab repository with merge requests enabled
2. **Create the n8n workflow**
   - Set up a trigger: choose a trigger node (Webhook, Cron, or GitLab Trigger).
   - Fetch merge request details: add an HTTP Request node to call the GET /merge_requests/:id GitLab API endpoint.
   - Validate conditions: check that the merge request has the necessary approvals and that CI/CD pipelines have passed.
   - Merge the request: use an HTTP Request node to call the PUT /merge_requests/:id/merge API endpoint (see the sketch below).
3. **Test the workflow**
   - Create a test merge request.
   - Check whether the workflow triggers and merges automatically.
   - Debug using n8n logs if needed.
4. **Deploy and monitor**
   - Deploy the workflow in production.
   - Use n8n's monitoring features to track execution.

This template enables seamless GitLab merge automation, improving efficiency and reducing manual work!

Note: Never hard-code API tokens or secrets in your HTTP requests.
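For reference, here is a minimal sketch of the two GitLab API calls and the condition check, written as a standalone Node.js ES module. The host, project ID, merge request IID, and condition fields are placeholders to adapt; note that GitLab's full v4 paths include the project segment.

```javascript
// Minimal sketch (assumed placeholders: host, project ID 123, MR IID 45).
// The token comes from the environment -- never hard-code it in the request.
const base = "https://gitlab.example.com/api/v4/projects/123/merge_requests/45";
const headers = { "PRIVATE-TOKEN": process.env.GITLAB_TOKEN };

// Fetch merge request details
const mr = await (await fetch(base, { headers })).json();

// Example conditions: MR still open and head pipeline succeeded
if (mr.state === "opened" && mr.head_pipeline?.status === "success") {
  // Perform the merge
  const res = await fetch(`${base}/merge`, { method: "PUT", headers });
  console.log(res.status === 200 ? "merged" : `merge failed (${res.status})`);
} else {
  console.log("Conditions not met; skipping merge.");
}
```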
by Yang
**Who is this for?**

This workflow is for digital marketers, small business owners, lead generation agencies, and VAs who need a scalable way to find and store local business leads using AI. It's especially useful for teams that want to enrich leads with real-time news insights and save the structured data to Airtable.

**What problem is this workflow solving?**

Manually researching local businesses and staying up to date with relevant news is time-consuming and inefficient. This automation eliminates that burden by using Dumpling AI chat agents to generate leads and context, GPT-4o to summarize, and Airtable to store everything in one place.

**What this workflow does**

This AI workflow listens for a manual trigger in n8n and executes the following steps:

1. Extracts local business leads using a Local Business Agent from Dumpling AI.
2. Pulls current news related to the business type or location using a News Agent from Dumpling AI.
3. Uses GPT-4o to combine both responses into a human-readable summary.
4. Extracts structured lead data like name, category, and city.
5. Saves the summary and lead data into Airtable for easy follow-up.

**Setup**

1. **Create AI agents in Dumpling AI**
   - Sign in at Dumpling AI and create two separate agents:
     - Local Business Agent: designed to respond with structured lists of businesses by location and category.
     - News Agent: designed to fetch relevant recent news and summaries about a specific industry or region.
   - After setting up each agent, copy the Agent Key from Dumpling AI. These keys are required in the headers of your HTTP Request nodes in n8n.
2. **Manual trigger**
   - This workflow begins with a manual trigger inside n8n, which is the When chat message is received node. This makes it easy to test and reuse, especially during setup.
3. **Get local business data from Dumpling AI**
   - The first HTTP Request node sends a prompt like "List 5 top real estate companies in Atlanta with full address and services." Include your Local Business Agent Key in the x-agent-key header (see the request sketch at the end of this section). The response returns a structured list of business leads.
4. **Get news context from Dumpling AI**
   - The second HTTP Request node sends a prompt such as "Give me the latest news related to the real estate market in Atlanta." Use your News Agent Key in the header. This fetches a brief set of recent news summaries relevant to the businesses being researched.
5. **Use GPT-4o to merge and summarize**
   - The GPT node combines the list of businesses and news into one coherent summary. You can modify the prompt to output in paragraph format, bullet points, or structured notes.
6. **Save leads to Airtable**
   - The Airtable node sends all structured fields into your selected base and table. Be sure to connect your Airtable account and confirm the columns match exactly.

**How to customize this workflow**

- Replace the prompt inside the HTTP node to focus on different types of businesses or cities.
- Expand the GPT output to include additional lead info like websites, phone numbers, or emails if the agent includes them.
- Add a webhook trigger to allow this flow to be run via a chatbot, external app, or button.
- Link to HubSpot or another CRM to sync the leads automatically.
- Duplicate the process to run for multiple industries in parallel.

**Final Notes**

- You must create and configure your Dumpling AI agents before running this workflow.
- The Agent Keys from Dumpling AI are required in both HTTP Request nodes.
- This flow is modular and flexible, ready for deeper CRM integrations.
- The manual trigger is great for testing, but you can add a Webhook node to automate it.
This workflow helps you launch an intelligent lead gen process that combines location-targeted business discovery, AI-generated insights, and structured CRM-friendly output, all powered by Dumpling AI and OpenAI.
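As a reference for steps 3 and 4 above, here is a sketch of the HTTP request the workflow sends. The endpoint URL and request body shape are assumptions to verify against your Dumpling AI dashboard; only the x-agent-key header and the prompt come from the template description.

```javascript
// Hypothetical agent endpoint -- replace with the URL from your Dumpling AI dashboard.
const res = await fetch("https://app.dumplingai.com/api/v1/agents/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-agent-key": process.env.DUMPLING_LOCAL_BUSINESS_AGENT_KEY, // your Agent Key
  },
  body: JSON.stringify({
    // Prompt from step 3; swap in your own category and city
    message: "List 5 top real estate companies in Atlanta with full address and services.",
  }),
});

console.log(await res.json()); // structured list of business leads
```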
by Yaron Been
**Workflow Overview**

This cutting-edge n8n automation is a sophisticated market research and intelligence gathering tool designed to transform web content discovery into actionable insights. By intelligently combining web crawling, AI-powered filtering, and smart summarization, this workflow:

- **Discovers relevant content**: automatically crawls target websites, identifies trending topics, and extracts comprehensive article details.
- **Intelligently filters content**: applies custom keyword matching, filters for the most relevant articles, and ensures high-quality information capture.
- **Summarizes with AI**: generates concise, meaningful summaries, extracts key insights, and provides quick, digestible information.
- **Delivers seamlessly**: sends summaries directly to Slack, enabling instant team communication and rapid information sharing.

**Key Benefits**

- 🤖 **Full Automation**: Continuous market intelligence
- 💡 **Smart Filtering**: Precision content discovery
- 📊 **AI-Powered Insights**: Intelligent summarization
- 🚀 **Instant Delivery**: Real-time team updates

**Workflow Architecture**

🔹 **Stage 1: Content Discovery**
- **Scheduled Trigger**: Daily market research
- **FireCrawl Integration**: Web content crawling
- **Comprehensive Site Scanning**: Extracts article metadata, captures full article content, and identifies key information sources

🔹 **Stage 2: Intelligent Filtering**
- **Keyword-Based Matching** (a filtering sketch appears at the end of this description)
- **Relevance Assessment**
- **Custom Domain Optimization**: AI and technology focus; startup and innovation tracking

🔹 **Stage 3: AI Summarization**
- **OpenAI GPT Integration**
- **Contextual Understanding**
- **Concise Insight Generation**: 3-point summary format capturing essential information

🔹 **Stage 4: Team Notification**
- **Slack Integration**
- **Instant Information Sharing**
- **Formatted Insight Delivery**

**Potential Use Cases**

- **Market Research Teams**: Trend tracking
- **Innovation Departments**: Technology monitoring
- **Startup Ecosystems**: Competitive intelligence
- **Product Management**: Industry insights
- **Strategic Planning**: Rapid information gathering

**Setup Requirements**

- **FireCrawl API**: web crawling credentials and configured crawling parameters
- **OpenAI API**: GPT model access, summarization configuration, and API key management
- **Slack Workspace**: a channel for insights delivery, appropriate app permissions, and webhook configuration
- **n8n Installation**: cloud or self-hosted instance, workflow configuration, and API credential management

**Future Enhancement Suggestions**

- 🤖 Multi-source crawling
- 📊 Advanced sentiment analysis
- 🔔 Customizable alert mechanisms
- 🌐 Expanded topic tracking
- 🧠 Machine learning refinement

**Technical Considerations**

- Implement robust error handling
- Use exponential backoff for API calls
- Maintain flexible crawling strategies
- Ensure compliance with website terms of service

**Ethical Guidelines**

- Respect content creator rights
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

**Workflow Visualization**

[Daily Trigger] ⬇️ [Web Crawling] ⬇️ [Content Filtering] ⬇️ [AI Summarization] ⬇️ [Slack Delivery]

**Connect With Me**

Ready to revolutionize your market research?

- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your information gathering with intelligent, automated workflows!

#AIResearch #MarketIntelligence #AutomatedInsights #TechTrends #WebCrawling #AIMarketing #InnovationTracking #BusinessIntelligence #DataAutomation #TechNews
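To make Stage 2 concrete, a Code-node-style snippet for keyword-based matching could look like the sketch below. The field names (title, content) and the keyword list are illustrative assumptions, not the template's exact configuration.

```javascript
// Keep only crawled articles that mention at least one target keyword.
const keywords = ["ai", "startup", "machine learning", "automation"]; // example list

return items.filter((item) => {
  const text = `${item.json.title ?? ""} ${item.json.content ?? ""}`.toLowerCase();
  return keywords.some((kw) => text.includes(kw));
});
```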
by Lukas Kunhardt
**Who is this for?**

This template is for any website owner, digital agency, or compliance officer operating within the European Union. It's designed for users who need to comply with the upcoming European Accessibility Act (EAA) but may not have deep technical or legal expertise.

**Disclaimer**

This workflow uses an npm package called "cheerio" to work with the specified URL's HTML code. Installing external packages is only possible on self-hosted n8n instances (a short usage sketch appears at the end of this description).

**What problem is this workflow solving? / Use Case**

Starting June 28, 2025, the European Accessibility Act (EAA) mandates that most websites offering products or services in the EU must be accessible and publish a formal Accessibility Statement. Manually creating this legal document is complex, requiring both a technical site analysis and knowledge of specific legal requirements. This workflow automates the generation of a compliant first draft, saving significant time and effort.

**What this workflow does**

After you input your details (like website URL and API key) in a central configuration node, this workflow automatically:

1. Scans your live website for accessibility issues using the powerful WAVE API.
2. Processes the scan results to identify the main problem areas.
3. Instructs a Google Gemini AI agent with a specialized legal prompt based on the European Accessibility Act.
4. Generates a formal Accessibility Statement in your desired language.
5. Saves the statement as an .html file and sends it to you as an email attachment.

**Setup**

This workflow is designed for a quick setup:

1. **Configure all variables**: Click the 'CHANGE THESE: dependencies' node. This is your central control panel. Fill in all the values, including your WAVE API key, the URL to analyze, company details, and desired output language.
2. **Set up credentials**: You will need to connect your Google accounts for the workflow to run.
   - Gemini: Click the 'gemini 2.5 pro' node, click the gear icon (⚙️) next to the "Credential" field, and connect your Google Gemini API credentials.
   - Gmail: Click the 'Send report by email' node and connect your Gmail account to allow sending the final report.
3. **Activate & execute**: Make sure the workflow is active in the top-right corner, then click 'Execute Workflow' to run your first analysis.

**How to customize this workflow to your needs**

This template is a great starting point for any EU country. Here's how to adapt it:

- **Localize for your country (important!)**: The generated statement contains a placeholder for the "Enforcement Procedure". You *must* edit the prompt in the **'Accessibility Statement Generator'** node to replace this placeholder with the name and link to your specific country's official enforcement body.
- **Change the AI**: Swap the Google Gemini node for any other AI model, like OpenAI or Anthropic Claude, by replacing the node and connecting it to the agent.
- **Change the trigger**: Replace the **'When clicking 'Execute workflow''** node with a Form Trigger or Webhook Trigger to run this workflow based on external inputs, for example, to offer this analysis as a service to your clients.
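For context on the cheerio dependency, here is a minimal sketch of how a Code node might parse the fetched HTML. It assumes a self-hosted instance with NODE_FUNCTION_ALLOW_EXTERNAL=cheerio set, and the alt-text check is an illustrative example rather than the template's exact logic (the actual accessibility scan is done by the WAVE API).

```javascript
// Requires self-hosted n8n with NODE_FUNCTION_ALLOW_EXTERNAL=cheerio
const cheerio = require("cheerio");

const html = items[0].json.html; // HTML fetched earlier in the workflow
const $ = cheerio.load(html);

// Illustrative check: list images that are missing alt text
const missingAlt = $("img:not([alt])")
  .map((_, el) => $(el).attr("src"))
  .get();

return [{ json: { imageCount: $("img").length, missingAlt } }];
```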
by Agent Studio
**Automatically store Retell transcripts in Google Sheets/Airtable/Notion from webhook**

**Overview**

This workflow stores the results of a Retell voice call (transcript, analysis, etc.) once it has ended and been analyzed. It listens for call_analyzed webhook events from Retell and stores the data in Airtable, Google Sheets, and Notion (choose based on your stack). Useful for anyone building Retell agents who wants to keep a detailed history of analyzed calls in structured tools.

**Who is it for**

For builders of Retell's Voice Agents who want to store call history and essential analytic data.

**Prerequisites**

- Have a Retell AI account
- Create a Retell agent
- Associate a phone number with your Retell agent
- Set up one of the following:
  - An Airtable base and table (example: "Transcripts")
  - A Google Sheet with a "Transcripts" tab
  - A Notion database with columns to match the transcript fields
- Templates: Airtable, Google Sheets, Notion

**How it works**

1. Receives a webhook POST request from Retell when a call has been analyzed.
2. Filters out any event that is not call_analyzed (Retell sends webhooks for call_started, call_ended, and call_analyzed).
3. Extracts useful fields like (see the sketch at the end of this description):
   - Call ID, start/end time, duration, total cost
   - Transcript, summary, sentiment
4. Stores this data in your preferred tool: Airtable, Google Sheets, or Notion.

**How to use it**

1. Copy the webhook URL (e.g., https://your-instance.app.n8n.cloud/webhook/poc-retell-analysis) and paste it in your Retell agent under "Webhook settings", then "Agent Level Webhook URL".
2. Make sure your Airtable, Google Sheets, or Notion databases are correctly configured to receive the fields.
3. After each call, once Retell finishes the analysis, this workflow will automatically log the results.

**Extension**

If you use any "Post-Call Analysis" fields, you can add columns to your Airtable, Google Sheet, or Notion database, then fetch the data from the call.call_analysis.custom_analysis_data object.

**Additional Notes**

- Phone numbers are extracted depending on the call direction (from_number or to_number).
- Cost is converted from cents to dollars before saving.
- Dates are converted from timestamps to local ISO strings.
- You can remove any of the outputs (Airtable, Google Sheets, Notion) if you're only using one.

👉 Reach out to us if you're interested in analysing your Retell Agent conversations.
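For illustration, the filtering and field conversions described above might look like this in a Code node. The payload paths follow Retell's call_analyzed event as described in this template, but treat the exact field names as assumptions to verify against your own webhook payloads.

```javascript
// Sketch: normalize a Retell call_analyzed webhook payload.
const { event, call } = items[0].json.body;

if (event !== "call_analyzed") return []; // drop call_started / call_ended events

return [{
  json: {
    callId: call.call_id,
    startedAt: new Date(call.start_timestamp).toISOString(),
    endedAt: new Date(call.end_timestamp).toISOString(),
    durationSec: (call.end_timestamp - call.start_timestamp) / 1000,
    costUsd: (call.call_cost?.combined_cost ?? 0) / 100, // cents -> dollars
    phone: call.direction === "inbound" ? call.from_number : call.to_number,
    transcript: call.transcript,
    summary: call.call_analysis?.call_summary,
    sentiment: call.call_analysis?.user_sentiment,
  },
}];
```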
by Niranjan G
**Who is this for?**

NVD (National Vulnerability Database) data is essential for security analysts, vulnerability managers, and DevSecOps professionals who need to perform both CVE lookups and monitor historical change logs. This workflow helps streamline those efforts by providing structured outputs for audit, triage, or compliance tracking purposes.

📝 Note: While this example uses Google Sheets as the destination, you can easily modify the final destination node (e.g., send to Slack, email, a database, etc.) based on your specific automation needs.

**What problem is this solving?**

Security teams often manually look up CVE data and track changes across multiple tools. This process is inefficient and error-prone. This workflow automates the CVE lookup and historical change tracking by logging enriched vulnerability data into Google Sheets in real time.

**What this workflow does**

This workflow is designed for CVE API lookup and change history tracking. In many vulnerability automation pipelines, it is essential to determine not only the metadata of a CVE but also how it has evolved over time. Whether the operational need is enrichment, risk scoring, or remediation validation, this workflow is particularly handy for surfacing both current and historical CVE data.

This template performs the following actions:

1. Accepts incoming webhook requests containing a CVE ID
2. Queries the NVD CVE Lookup API to fetch vulnerability metadata
3. Queries the NVD CVE History API to retrieve all historical changes
4. Flattens both datasets into a sheet-compatible structure
5. Appends vulnerability metadata to one sheet and change history to another within the same Google Spreadsheet

**Setup**

🔑 **Request an NVD API key**

To request an NVD API key, provide your organization name, a valid email address, and your organization type at NVD API Key Request. You must scroll to the end of the Terms of Use Agreement and check "I agree to the Terms of Use" to obtain an API key. After submission, you will receive a single-use hyperlink via email to activate and view your API key. If not activated within seven days, a new request must be submitted.

📊 **API rate limits**

Without an API key, you're limited to 5 requests per 30-second window. With an API key, you're allowed up to 50 requests in the same period. To prevent request throttling, it's recommended to introduce slight delays between consecutive API calls in production setups.

1. Clone or import this workflow into your n8n instance.
2. Set up the following credentials:
   - Google Sheets OAuth2
   - NVD API key (via HTTP Header Auth)
3. The workflow logs data to a Google Sheet titled NVD Database, with Sheet 1 named CVE Lookup and Sheet 2 named CVE History.
4. Trigger each workflow using the respective webhook URL, appending ?cveId=CVE-XXXX-XXXX as a query parameter.

🔍 **Example webhook request (CVE change history)**

You can test this workflow with the following example:

GET https://your-domain.com/webhook/cve-history?cveId=CVE-2023-34362

**How to customize this workflow**

- Use the Edit Fields node (optional) to centralize configuration like sheet name or query input
- Extend the CVE flattening logic to include more nested metadata if needed
- Integrate notification systems (e.g., Slack or email) by branching from the processing nodes
- Modify webhook paths for better endpoint organization

🔐 **Production security tips**

Use HTTP Header Auth on the webhook for secure access.

⚠️ This template uses webhooks and NVD API access with authentication headers.
This template uses two flows:

- **Webhook 1: NVD CVE Lookup** — Look up CVE vulnerability metadata from NVD and sync it to a Google Sheet
- **Webhook 2: NVD CVE Change History** — Track change history for CVEs via NVD and log each update

Each flow:

1. Hits NVD's respective endpoint
2. Uses a custom JS Code node to flatten the nested JSON (see the sketch below)
3. Syncs data to dedicated Google Sheet tabs

🧩 4 nodes: Webhook → API Call → Parse → Sheet Sync

Make sure both flows are activated and webhooks are exposed for external access. Based on your needs, ensure you have a secure setup—whether hosted internally or in a cloud environment—when running n8n in production.
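As a sketch of the flattening step, the Code node for the lookup flow might look like this. The response paths follow the NVD CVE API 2.0 schema, but the selected columns are an illustrative subset rather than the template's exact output.

```javascript
// Sketch: flatten a nested NVD CVE lookup response into a sheet-friendly row.
const cve = items[0].json.vulnerabilities?.[0]?.cve;
if (!cve) return [{ json: { error: "CVE not found in NVD response" } }];

const cvss = cve.metrics?.cvssMetricV31?.[0]?.cvssData;

return [{
  json: {
    id: cve.id,
    published: cve.published,
    lastModified: cve.lastModified,
    description: cve.descriptions?.find((d) => d.lang === "en")?.value,
    baseScore: cvss?.baseScore,
    severity: cvss?.baseSeverity,
  },
}];
```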
by Mike Russell
Boost engagement on your Discord server by automatically sharing new YouTube videos along with AI-generated summaries of their content. This workflow is ideal for content creators and community managers looking to provide value and spark interest through summarized content, making it easier for community members to decide if a video is of interest to them. Watch this video tutorial to learn more about the template.

**How it works**

- **RSS Feed Trigger**: Monitors your YouTube channel for new uploads using the RSS feed.
- **Video Captions Retrieval**: Fetches video captions using the YouTube API to get detailed content data.
- **AI Summary Generation**: Uses an AI model to generate concise summaries from the video captions, highlighting key points.
- **Discord Notification**: Posts video announcements along with their AI-generated summaries to a specified Discord channel using a webhook.

**Set up steps**

1. **Configure the YouTube RSS feed**: Set up the RSS feed node to detect new video uploads. Add your YouTube channel ID to the URL in the first node: https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID.
2. **Connect your OpenAI account**: To enable AI summary generation, connect your OpenAI account in n8n.
3. **Set up the Discord webhook**: Create a webhook in your Discord server and configure it in the Discord node (a sketch of the webhook call appears at the end of this description).
4. **Design the message**: Format the Discord message as you like to include the video title, link, and the AI-generated summary.

This template empowers you to maintain a highly engaging Discord community, ensuring members receive not only regular updates but also valuable insights into each video's content without needing to watch immediately.
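Under the hood, the Discord step is a single POST to your webhook URL. Here is a minimal sketch; the video and summary values stand in for data produced by the RSS and OpenAI nodes, and the message format is just one option.

```javascript
// Sketch: post a video announcement with its AI summary to a Discord webhook.
const webhookUrl = process.env.DISCORD_WEBHOOK_URL; // from your server's webhook settings

const video = { title: "My New Video", link: "https://youtu.be/XXXXXXXXXXX" }; // from the RSS node
const summary = "Key points: 1) ... 2) ... 3) ..."; // from the OpenAI node

await fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    content: `📺 **${video.title}**\n${video.link}\n\n${summary}`,
  }),
});
```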
by Vitali
**Template Description**

This n8n workflow template allows you to create a masked email address using the Fastmail API, triggered by a webhook. This is especially useful for generating disposable email addresses for privacy-conscious users or for testing purposes.

**Workflow Details**

1. **Webhook Trigger**: The workflow is initiated by sending a POST request to a specific webhook. You can include state and description in your request body to customize the masked email's state and description.
2. **Session Retrieval**: The workflow makes an HTTP request to the Fastmail API to retrieve session information. It uses this data to authenticate further requests.
3. **Create Masked Email**: Using the retrieved session data, the workflow sends a POST request to Fastmail's JMAP API to create a masked email. It uses the provided state and description from the webhook payload (see the sketch at the end of this description).
4. **Prepare Output**: Once the masked email is successfully created, the workflow extracts the email address and attaches the description for further processing.
5. **Respond to Webhook**: Finally, the workflow responds to the original POST request with the newly created masked email and its description.

**Requirements**

- **Fastmail API access**: You will need valid API credentials for Fastmail configured with HTTP Header Authentication.
- **Authorization setup**: Optionally set up authorization if your webhook is exposed to the internet to prevent misuse.
- **Custom webhook request**: Use a tool like curl or create a shortcut on macOS/iOS to send the POST request to the webhook with the necessary JSON payload, like so:

```bash
curl -X POST -H 'Content-Type: application/json' \
  https://your-n8n-instance/webhook/87f9abd1-2c9b-4d1f-8c7f-2261f4698c3c \
  -d '{"state": "pending", "description": "my mega fancy masked email"}'
```

This template simplifies the process of integrating masked email functionality into your projects or workflows and can be extended for various use cases.

Feel free to use the companion shortcut I've also created. Please update the authorization header in the shortcut if needed. https://www.icloud.com/shortcuts/ac249b50eab34c04acd9fb522f9f7068
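For reference, the session retrieval and masked-email creation steps map to two JMAP requests. The sketch below shows their shape per Fastmail's MaskedEmail documentation; verify the capability string and response paths against your own session response before relying on them.

```javascript
// Sketch: create a Fastmail masked email via JMAP (token via HTTP header auth).
const auth = { Authorization: `Bearer ${process.env.FASTMAIL_API_TOKEN}` };

// 1. Retrieve session info (accountId and API URL)
const session = await (
  await fetch("https://api.fastmail.com/jmap/session", { headers: auth })
).json();
const accountId = session.primaryAccounts["https://www.fastmail.com/dev/maskedemail"];

// 2. Create the masked email with the state/description from the webhook payload
const res = await fetch(session.apiUrl, {
  method: "POST",
  headers: { ...auth, "Content-Type": "application/json" },
  body: JSON.stringify({
    using: ["urn:ietf:params:jmap:core", "https://www.fastmail.com/dev/maskedemail"],
    methodCalls: [[
      "MaskedEmail/set",
      { accountId, create: { new: { state: "pending", description: "my mega fancy masked email" } } },
      "0",
    ]],
  }),
});

const result = await res.json();
console.log(result.methodResponses[0][1].created?.new?.email); // the new masked address
```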
by TechDennis
**Edit an existing image with OpenAI ImageGen1 via API request**

Transform your creative pipeline by letting n8n call OpenAI ImageGen1's edit-image endpoint, automatically replacing or augmenting parts of any image you supply and returning a brand-new version in seconds. Designers, marketers, and product teams can eliminate repetitive manual edits and test more variations, faster.

**Who is this for?**

- Content creators who need quick, on-brand image tweaks
- Marketers running A/B visual tests at scale
- Developers exploring the new ImageGen1 API inside low-code automations

**Use case / problem solved**

Opening design software to mask, fill, or swap objects is slow and error-prone. This workflow feeds an input image plus a prompt to OpenAI ImageGen1, receives the edited output, and passes it on to any service you like—perfect for bulk-editing product shots, social visuals, or UI mocks.

**What this workflow does**

1. Read or receive the source image (Webhook → Binary Data).
2. Call OpenAI ImageGen1 with an HTTP Request node, sending the image and edit prompt (see the request sketch at the end of this description).
3. Parse the JSON response to capture the returned image URL.
4. Download & hand off the edited file (e.g., upload to S3, post to Slack, or store in Drive).

**Setup**

1. Add your OpenAI API key in the API KEY node.
2. Follow the notes on the workflow for more information.
3. (Optional) Point the final node to your preferred storage or chat tool.

> 📝 A sticky note in the workflow summarizes these steps and links to the OpenAI documentation.

**How to customize this workflow**

- **Trigger alternatives**: Replace the chat trigger with Google Drive, Airtable, etc.
- **Chained edits**: Loop the output back for successive prompts.
- **Conditional flows**: Add an If node to branch actions by image size or category.

With renamed nodes, color-coded sticky notes, and a concise setup guide, you'll be editing images via OpenAI ImageGen1 in under five minutes—no code, maximum creativity.
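As a reference for step 2, the HTTP Request node targets OpenAI's image-edit endpoint with a multipart form. The sketch below assumes the model name gpt-image-1 for "ImageGen1" (check the OpenAI docs linked in the workflow's sticky note) and a local input.png; depending on the model, the response carries either a URL or base64 data.

```javascript
// Sketch: edit an image via OpenAI's /v1/images/edits endpoint (Node 18+ ES module).
import fs from "node:fs";

const form = new FormData();
form.append("model", "gpt-image-1"); // assumed model name for "ImageGen1"
form.append("prompt", "Replace the background with a sunny beach");
form.append(
  "image",
  new Blob([fs.readFileSync("input.png")], { type: "image/png" }),
  "input.png",
);

const res = await fetch("https://api.openai.com/v1/images/edits", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: form,
});

const data = await res.json();
console.log(data.data?.[0]); // { url: ... } or { b64_json: ... } depending on the model
```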
by Robert Breen
**n8n Workflow: OpenAI DALL·E 2 Image Generation & Google Drive Upload**

**Description**

This n8n workflow automates the process of generating multiple AI-created images from a single prompt using OpenAI's DALL·E 2, then uploads the results directly to a Google Drive folder. It includes a loop to produce several image variations for the same prompt, making it ideal for creative projects, marketing materials, or content experimentation.

**Step-by-Step Setup Instructions**

**1. Prepare your API keys**

- **OpenAI API key**
  - Sign up or log in at https://platform.openai.com/
  - Go to API Keys and create a new one.
  - Copy and store this securely — you'll need it in n8n.
- **Google Drive API**
  - Go to https://console.cloud.google.com/
  - Create a project and enable the Google Drive API.
  - Create OAuth 2.0 credentials and set the redirect URI to your n8n OAuth redirect (found in your n8n Google Drive node setup).
  - Connect your Google account when adding credentials in n8n.

**2. Workflow nodes overview**

1. Manual Trigger – Starts the workflow manually.
2. Set Image Prompt – Stores the prompt text and base file name (e.g., "Make an image of an attractive woman standing in New York City").
3. Duplicate Rows (Code node) – Creates multiple "runs" of the same prompt for variation.
4. Loop Over Items – Processes each variation one at a time.
5. Generate an image (OpenAI DALL·E 2) – Sends the prompt to OpenAI and retrieves an image.
6. Upload to Google Drive – Saves each generated image to your chosen Google Drive folder.

**3. Building the workflow in n8n**

**Step 1 — Manual Trigger**

Add a Manual Trigger node to start the workflow manually when testing.

**Step 2 — Set Image Prompt**

Add a Set node with two fields:

- Prompt → The image description text.
- Name → The base name for the saved file.

Example:

| Field | Value |
|--------|---------------------------------------------------------------|
| Prompt | Make an image of an attractive woman standing in New York City |
| Name | woman-nyc |

**Step 3 — Duplicate Rows (Code node)**

Use this JavaScript to create three copies of the prompt (run 1, run 2, run 3):

```javascript
const original = items[0].json;

return [
  { json: { ...original, run: 1 } },
  { json: { ...original, run: 2 } },
  { json: { ...original, run: 3 } },
];
```

**Step 4 — Loop Over Items**

Insert a Split in Batches node and set the batch size to 1. This ensures each prompt variation runs through the image generation process individually. Connect this node so it runs after the Duplicate Rows node.

**Step 5 — Generate Image**

Add the OpenAI Image Generation node and configure it as follows:

- **Model**: dall-e-2
- **Prompt**: `={{ $json.Prompt }}`
- Leave other options at their defaults unless you want to specify image size or style.
- Connect your OpenAI API credentials created in Step 1.

This node will send the current prompt in the batch to OpenAI's DALL·E 2 model and return an AI-generated image.

**Step 6 — Upload to Google Drive**

Add a Google Drive node and configure it to store the generated image:

- **File Name**: `={{ $('Set Image Prompt').item.json.Name }} - {{ $('Duplicate Rows').item.json.run }}`
- **Folder ID**: Select the target Google Drive folder where images should be saved.
- Connect your Google Drive OAuth2 API credentials.

The node will upload each generated image to your chosen Google Drive location, with a unique filename for each variation.

**Running the Workflow**

Execute the workflow manually. The process will:

1. Loop through each prompt variation.
2. Generate an image using OpenAI DALL·E 2.
3. Upload the image to Google Drive with a unique name.

You will find all generated images in the selected Google Drive folder.
**Customization Tips**

- Change the number of variations by editing the Duplicate Rows code.
- Adjust the prompt dynamically from other data sources like Google Sheets, webhooks, or forms.
- Schedule the workflow to run at specific times or trigger it via an API call.

Created by Robert A. – Ynteractive
Website: https://ynteractive.com
Email: robert@ynteractive.com
by OneClick IT Consultancy P Limited
**Automate Customer Feedback Analysis with Google Sheets, WhatsApp, and Email**

**Introduction: Drowning in Data, Starving for Insight?**

Imagine this: Your team launches a new feature. Feedback starts pouring in: emails, support tickets, social media mentions, and survey responses. You know gold is buried in there, but manually reading, tagging, and summarising hundreds, maybe thousands, of comments? It takes days, maybe weeks. By the time you have a clear picture, the moment might have passed. Sounds exhausting, right?

What if you could have an AI assistant tirelessly working 24/7, instantly analysing every piece of feedback the moment it arrives? This isn't science fiction anymore. AI-powered automation can transform this slow, manual chore into a real-time insight engine, giving you the pulse of your customer base almost instantly. Let's explore how.

**What's the Goal? Understanding the Workflow Objective**

The core challenge is transforming raw, unstructured customer feedback into actionable intelligence quickly and efficiently.

The Problem:

- **Manual overload**: Sifting through vast amounts of feedback manually is incredibly time-consuming and prone to human error or bias.
- **Delayed insights**: The lag between receiving feedback and understanding it means missed opportunities and slow responses to critical issues.
- **Inconsistent analysis**: Different team members might interpret or categorize feedback differently, leading to unreliable trend spotting.

The AI Solution:

- **Automated data collection**: Connects directly to feedback sources (surveys, social media, review sites, helpdesks).
- **AI-powered analysis**: Uses Large Language Models (LLMs) like GPT-4 or Claude to analyze sentiment, extract key topics, and summarize comments.
- **Intelligent categorization**: Automatically tags feedback based on predefined or dynamically identified themes (e.g., "bug report," "feature request," "pricing issue").
- **Real-time reporting**: Pushes structured insights into dashboards or databases, or triggers notifications for immediate awareness.

Outcome: You move from reactive problem-solving based on stale data to proactive, strategic decisions driven by a near real-time understanding of customer sentiment and needs.

**Why Does It Matter? Achieving 100X Productivity and Efficiency**

Look, automating feedback isn't just about saving time; it's about scaling your ability to listen and respond smarter, not harder. When you leverage AI, the gains aren't incremental - they're exponential. Here's why this is a game changer:

- **Blazing speed**: Analyse feedback 100x faster (or more!) than manual methods. Insights appear in minutes or hours, not days or weeks.
- **Unhuman scalability**: Process virtually unlimited volumes of feedback without needing to scale your human team proportionally. AI doesn't get tired or bored.
- **Consistent accuracy**: AI applies analysis rules consistently, reducing human bias and ensuring reliable categorisation and sentiment scoring over time.
- **Proactive trend spotting**: Identify emerging issues or popular requests much earlier by analysing aggregated data automatically. Spot patterns humans might miss.
- **Free up your team**: Let your talented team focus on acting on insights – improving products, fixing issues, engaging customers – instead of drowning in data entry.

**How It Works: AI Automation Step by Step**

Getting this set up is more straightforward than you might think, especially with tools like n8n acting as the central hub.
**1. Automated Feedback Triggering**

- **CRM/Website Event node**: trigger feedback requests after:
  - Purchases (eCommerce)
  - Support ticket resolution
  - Feature usage (SaaS)
- **Time-Based node**
  - Schedule recurring NPS surveys
  - Customer health check-ups
- **Chat App node (WhatsApp/Telegram/Messenger)**
  - Send conversational feedback prompts: "How was your recent experience with [specific interaction]?"

**2. Multi-Channel Feedback Collection**

- **Email node (SendGrid/Mailchimp)**
  - Send personalized feedback requests
  - Embed 1-5 rating widgets
- **SMS node (Twilio)**
  - Short mobile surveys: "Reply 1-5: How satisfied with your purchase?"
- **Webhook node**
  - Capture in-app feedback
  - Process chatbot responses
- **Social Media node**
  - Monitor Twitter/X and Instagram mentions
  - Analyze comments for unsolicited feedback

**3. AI-Powered Real-Time Analysis**

- **OpenAI/ChatGPT node (sentiment analysis)**
  - Prompt: "Analyze sentiment (positive/neutral/negative) and key themes from: [customer feedback]"
  - Output fields: sentiment score (1-5), urgency flag (high/medium/low), key topics (billing, support, product, etc.)
  - A sketch of this call appears after this node list.
- **Translation node (optional)**
  - Convert multilingual feedback into a consistent language

**4. Instant AI Response System**

- **Conditional node (routing logic)**
  - Positive feedback → send thank-you + referral ask
  - Neutral feedback → follow-up question for details
  - Negative feedback → escalate to the human team
- **AI Response Generator node**
  - Prompt: "Create a personalized response to [feedback type] about [topic] with sentiment [score]"
  - Adjust tone (professional/friendly/empathetic)
- **Escalation node**
  - Route critical issues to the support team with full context

**5. Automated Insights & Alerts**

- **Dashboard node**
  - Real-time sentiment tracking
  - Emerging issue detection
- **Alert node (Slack/Teams/Email)**
  - Notify teams of negative trends: "3+ complaints about checkout flow in the past hour!"
- **Report node**
  - Auto-generate weekly/monthly summaries: "Top 5 customer pain points this week"
- **Product board integration**
  - Auto-create feature requests
  - Prioritize based on feedback volume
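To make the sentiment-analysis node concrete, here is a sketch of the underlying OpenAI chat completions call using the prompt above. The model name, JSON response format, and field names mirror the description but are illustrative rather than a fixed implementation.

```javascript
// Sketch: classify one piece of customer feedback into structured fields.
const feedback = items[0].json.feedback; // raw text from any collection channel

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o",
    response_format: { type: "json_object" }, // force parseable JSON output
    messages: [{
      role: "user",
      content:
        `Analyze sentiment (positive/neutral/negative) and key themes from: ${feedback}. ` +
        `Reply as JSON with fields: sentiment_score (1-5), urgency (high/medium/low), topics (string array).`,
    }],
  }),
});

const data = await res.json();
return [{ json: JSON.parse(data.choices[0].message.content) }];
```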
**Tools of the Trade: AI & Automation Tech Stack**

You don't need a massive, complex tech stack. Focus on a few core, powerful tools:

- **n8n**: The workflow automation platform. This is the 'glue' that connects everything and orchestrates the process without needing deep coding knowledge. Honestly, it's incredibly versatile.
- **OpenAI (GPT-4/GPT-4o)**: State-of-the-art LLM for high-quality text analysis, summarization, and classification. Great for complex understanding.
- **Anthropic (Claude 3 Sonnet/Opus)**: Another top-tier LLM, known for strong performance in analysis and handling large contexts. Often a great alternative or complement to GPT models.
- **Feedback source APIs**: Connectors for where your feedback lives (e.g., Typeform, SurveyMonkey, Twitter API, Zendesk API, Google Play/App Store review APIs).
- **Data storage/destination**: Where the processed insights go (e.g., Google Sheets, Airtable, Notion, PostgreSQL database, BigQuery).
- **(Optional) Visualization tool**: Tools like Metabase, Grafana, Looker Studio, or Power BI to create dashboards from your structured feedback data.

**What's the Cost? Estimated Budget**

Let's talk investment. You're mainly looking at:

- **Setup costs**: Primarily your time (or a consultant's) to design and build the initial workflow in n8n. Depending on complexity, this could range from a few hours to a few days. No major software licenses are usually needed upfront if using self-hosted n8n or starting with free/low-tier cloud plans.
- **AI API calls**: You pay per usage to OpenAI/Anthropic. Costs depend heavily on volume but can start from $20-$50/month for moderate usage and scale up. Newer models are getting more cost-effective.
- **n8n hosting**: Free if self-hosted (requires a server), or tiered cloud pricing starting around $20/month.
- **Feedback source APIs**: Some platforms might have API access costs or rate limits on free tiers.

Total estimated monthly cost: For many businesses, ongoing costs can range from $50 to $500+ per month, highly dependent on feedback volume and AI model choice.

The return on investment (ROI) is typically rapid. Consider the hours saved from manual analysis, the value of faster issue resolution, preventing churn, and the benefits of making product decisions based on real-time data. It often pays for itself very quickly.

**Who Benefits? Target Users and Industries**

This automated feedback loop isn't niche; it's valuable across many sectors and roles.

Top industries:

- **SaaS (Software as a Service)**: Understanding user friction, feature requests, and bug reports.
- **E-commerce & retail**: Analyzing product reviews, post-purchase surveys, and support chats.
- **Hospitality & travel**: Processing guest reviews and survey feedback.
- **Mobile apps**: Monitoring app store reviews and in-app feedback.
- **Financial services**: Gauging customer satisfaction with services and identifying pain points.

Key roles:

- **Product managers**: Prioritizing features, understanding user needs, tracking launch reception.
- **Customer Experience (CX) / Success managers**: Monitoring customer health, identifying churn risks, and improving support processes.
- **Marketing teams**: Understanding brand perception, campaign feedback, and voice of the customer.
- **Support leads**: Identifying recurring issues, measuring support quality, spotting training needs.

This approach works for businesses of all sizes, from startups wanting to stay lean and agile to large enterprises needing to manage massive feedback volumes.

**How to Use This Workflow**

Importing a workflow in n8n is a straightforward process that allows you to use pre-built or shared workflows to save time. Below is a step-by-step guide to importing a workflow in n8n, based on the official documentation and community resources.

**Steps to import a workflow in n8n**

1. **Obtain the workflow JSON**
   - **Source the workflow**: Workflows are typically shared as JSON files or code snippets. You might receive them from:
     - The n8n community (e.g., the n8n.io workflows page)
     - A colleague or tutorial (e.g., a .json file or copied JSON code)
     - An export from another n8n instance
   - **Format**: Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or as text copied to your clipboard.
2. **Access the n8n workflow editor**
   - **Log in to n8n**: Open your n8n instance (via n8n Cloud or your self-hosted instance) and navigate to the Workflows tab in the n8n dashboard.
   - **Open a new workflow**: Click Add Workflow to create a blank workflow, or open an existing workflow if you want to merge the imported workflow.
3. **Import the workflow**
   - **Option 1 — Import via JSON code (clipboard)**:
     - In the n8n editor, click the three dots (⋯) in the top-right corner to open the menu.
     - Select Import from Clipboard.
     - Paste the JSON code of the workflow into the provided text box.
     - Click Import to load the workflow into the editor.
   - **Option 2 — Import via JSON file**:
     - In the n8n editor, click the three dots (⋯) in the top-right corner.
     - Select Import from File.
     - Choose the .json file from your computer.
     - Click Open to import the workflow.
Note: If the workflow includes nodes for apps requiring credentials (e.g., Google Sheets), you’ll need to configure those credentials separately after importing.
by Mauricio Perera
**n8n Workflow: Calculate the Centroid of a Set of Vectors**

**Overview**

This workflow receives an array of vectors in JSON format, validates that all vectors have the same dimensions, and computes the centroid. It is designed to be reusable across different projects.

**Workflow Structure**

Nodes and their functions:

1. **Receive Vectors (Webhook)**: Accepts a GET request containing an array of vectors in the vectors parameter.
   - Expected input: a vectors parameter in JSON format.
   - Example request: /webhook/centroid?vectors=[[2,3,4],[4,5,6],[6,7,8]]
   - Output: passes the received data to the next node.
2. **Extract & Parse Vectors (Set node)**: Converts the input string into a proper JSON array for processing and ensures vectors is a valid array. If the parameter is missing, it may generate an error.
   - Expected output example: `{ "vectors": [[2,3,4],[4,5,6],[6,7,8]] }`
3. **Validate & Compute Centroid (Code node)**: Validates vector dimensions and calculates the centroid.
   - Validation: ensures all vectors have the same number of dimensions.
   - Computation: averages each dimension to determine the centroid.
   - If validation fails, it returns an error message indicating inconsistent dimensions.
   - Successful output example: `{ "centroid": [4,5,6] }`
   - Error output example: `{ "error": "Vectors have inconsistent dimensions." }`
4. **Return Centroid Response (Respond to Webhook node)**: Sends the final response back to the client. If the computation is successful, it returns the centroid; if an error occurs, it returns a descriptive error message.
   - Example response: `{ "centroid": [4, 5, 6] }`

**Inputs**

A JSON array of vectors, where each vector is an array of numerical values.

Example input:

```json
{
  "vectors": [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
  ]
}
```

**Setup Guide**

1. Create a new workflow in n8n.
2. Add a Webhook node (Receive Vectors) to receive JSON input.
3. Add a Set node (Extract & Parse Vectors) to extract and convert the data.
4. Add a Code node (Validate & Compute Centroid) to validate dimensions and compute the centroid.
5. Add a Respond to Webhook node (Return Centroid Response) to return the result.

**Function Node Script Example**

```javascript
const input = items[0].json;
const vectors = input.vectors;

if (!Array.isArray(vectors) || vectors.length === 0) {
  return [{ json: { error: "Invalid input: Expected an array of vectors." } }];
}

const dimension = vectors[0].length;
if (!vectors.every(v => v.length === dimension)) {
  return [{ json: { error: "Vectors have inconsistent dimensions." } }];
}

const centroid = new Array(dimension).fill(0);
vectors.forEach(vector => {
  vector.forEach((val, index) => {
    centroid[index] += val;
  });
});

for (let i = 0; i < dimension; i++) {
  centroid[i] /= vectors.length;
}

return [{ json: { centroid } }];
```

**Testing**

Use a tool like Postman or the n8n UI to send sample inputs and verify the responses (see the sketch below). Modify the input vectors to test different scenarios.

This workflow provides a simple yet flexible solution for vector centroid computation, ensuring validation and reliability.
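For a quick test without Postman, you can call the webhook directly from a small Node.js script; the host is a placeholder for your n8n instance.

```javascript
// Sketch: exercise the centroid webhook with three 3-dimensional vectors.
const vectors = JSON.stringify([[2, 3, 4], [4, 5, 6], [6, 7, 8]]);
const url = `https://your-n8n-instance/webhook/centroid?vectors=${encodeURIComponent(vectors)}`;

const res = await fetch(url);
console.log(await res.json()); // expected: { "centroid": [4, 5, 6] }
```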