by InfraNodus
Optimize Your Top Performing Website Content with Google Analytics, Firecrawl, and InfraNodus

This template helps you:
- **extract** the top performing pages from your website using Google Analytics
- **scrape** the content of the pages using the Firecrawl API (HTTP node provided)
- build a *knowledge graph* for all these pages with the *topics* and *gaps* identified using InfraNodus
- understand the main concepts and topical clusters in your top-performing content, so you can create more of it, while also identifying the content gaps: structural holes between the topics that you can use to generate new content ideas
- have access to a knowledge graph visualization of your top performing content to explore it using the interactive network interface

How it works

This template uses InfraNodus to visualize and analyze your top performing content. It extracts the top pages from the Google Analytics data for the website you choose and scrapes their text content using the high-quality Firecrawl API. Then it ingests every page into an InfraNodus graph you specify. The graph can be used to explore the content visually. The insights from the graph, such as the main topics and the gaps between them, are shown to you at the end of the workflow. You can use these insights to:
- understand what kind of content you should focus on creating to get the highest number of *views* and to establish *topical authority* in your area, which is good for *SEO* and *LLM optimization*, by focusing on the topics identified in the top content
- discover the content gaps: topics that are not yet connected, which you could link with new content ideas and publish. This caters to your audience's interests but connects your existing ideas in a new way, so you deliver content that is relevant but also novel.

Here's a step-by-step description:

*Note:* you can replace the PDF to Text converter node with a better quality *PDF converter* from ConvertAPI, which respects the original file layout and doesn't split the text into small chunks.

1. Trigger the workflow
2. Extract a list of top (25, 50) pages from your Google Analytics account (you'll need to connect it via the Google Cloud API)
3. Fix the extracted data and add a correct URL prefix to each page (if your Analytics property has relative paths only)
4. Loop through each page extracted
5. Extract the text content of every page using the high-quality Firecrawl API
6. Ingest the text content into the InfraNodus graph that you specify
7. Once all the pages are ingested into the InfraNodus graph, access the AI insights endpoint in InfraNodus and get the information about the main topics and gaps
8. Display this information to the user

How to use

You need an InfraNodus API account and key to use this workflow.
- Create an InfraNodus account
- Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

Requirements

- An InfraNodus account and API key
- Optional: a Google Analytics account for your property (alternatively, you can modify this workflow to provide a list of the most popular pages)
- Optional: Google Cloud API access (to access the data from your Google Analytics account; follow the n8n instructions)
- Optional: a Firecrawl API key for better quality web page scraping (otherwise, use the standard HTTP to Text node from n8n)

Customizing this workflow

You can customize this workflow by using a list of the URL pages you want to analyze from a Google Sheet.
Alternatively, you can use the Google SERP node to extract top search results for a query and get the main topics for them. A sketch of the Firecrawl scraping call used in step 5 is shown below. For support and feedback, please contact us at https://support.noduslabs.com. To learn more about InfraNodus: https://infranodus.com
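For reference, here is a minimal sketch of the request the Firecrawl HTTP node makes, assuming Firecrawl's v1 scrape endpoint and a markdown output format; the target URL and environment variable name are placeholders:

```javascript
// Minimal sketch of the Firecrawl scrape request (v1 endpoint assumed).
const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`, // placeholder env var
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com/top-page", // one of the top pages from Google Analytics
    formats: ["markdown"],               // clean text for the InfraNodus graph
  }),
});
const { data } = await res.json();
console.log(data.markdown); // text content to ingest into InfraNodus
```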
by Solido AI
How it works:
This system receives expenses via a webhook POST. It validates the data, stores it in Google Sheets, and, daily at 8 PM, generates and sends financial summaries. Automatic categorization simplifies the organization of expenses (an example payload is sketched below).

Set up steps:
Setup involves creating the Google Sheet, configuring the webhook, and defining the categorization rules. The process is quick and intuitive, taking about 10-15 minutes for the system to be ready to receive your expenses.
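A minimal sketch of what an expense POST to the webhook might look like; the field names and webhook path are assumptions and should be matched to your Google Sheet columns:

```javascript
// Hypothetical expense submission to the n8n webhook.
await fetch("https://your-n8n-instance/webhook/expenses", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    description: "Grocery shopping", // used by the automatic categorization
    amount: 42.5,
    date: "2024-06-01",
  }),
});
```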
by Ranjan Dailata
Who this is for?

The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate. This workflow is tailored for:
- **Marketing Professionals**: seeking to generate compelling company narratives for campaigns.
- **Sales Teams**: aiming to understand potential clients through summarized company insights.
- **Content Creators**: looking to craft stories or articles based on company data.
- **Recruiters**: interested in obtaining concise overviews of companies for talent acquisition strategies.

What problem is this workflow solving?

Manually gathering and summarizing company information from LinkedIn can be time-consuming and inconsistent. This workflow automates the process, ensuring:
- **Efficiency**: quick extraction and summarization of company data.
- **Consistency**: standardized summaries for uniformity across use cases.
- **Scalability**: ability to process multiple companies without additional manual effort.

What this workflow does

The workflow performs the following steps:
- **Input Acquisition**: receives a company's name or LinkedIn URL as input.
- **Data Extraction**: utilizes Bright Data to scrape the company's LinkedIn profile (a sketch of this call is shown below).
- **Information Parsing**: processes the extracted HTML content to retrieve relevant company details.
- **Summarization**: employs Google Gemini to generate a concise company story.
- **Output Delivery**: sends the summarized content to a specified webhook or email address.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the Webhook endpoint of your choice.

How to customize this workflow to your needs

- **Input Variations**: modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- **Data Points**: adjust the HTML Data Extractor node to retrieve additional details like employee count, industry, or headquarters location.
- **Summarization Style**: customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- **Output Destinations**: configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.
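For orientation, a minimal sketch of the scraping request made through Bright Data, assuming the Web Unlocker /request endpoint; the zone name, target URL, and environment variable are placeholders:

```javascript
// Hedged sketch of a Web Unlocker scrape (endpoint and zone name assumed).
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`, // Web Unlocker token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                                   // your zone name
    url: "https://www.linkedin.com/company/example-company", // target profile
    format: "raw",                                           // raw HTML for parsing
  }),
});
const html = await res.text(); // passed on to the HTML Data Extractor step
```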
by KPendic
How it works

This workflow exports all your Cloudflare domains to a Google Sheet to give you a high-level overview of all of your settings. This can help with easy debugging, searching, or similar needs. The flow uses simple paging nodes to iterate over all your domains, because this list could be huge (a sketch of the paging logic is shown below). For each host we merge DNS records & settings and transform them into columns for all our domains.

Requirements

For storing and processing the data in this flow you will need:
- Cloudflare.com API key/token for retrieving your data (https://dash.cloudflare.com/:account/api-tokens) (needs full access)
- Google Spreadsheet auth connected in your n8n Credentials
- Google Spreadsheet template: you can copy my sheet as a starting point; start by copying it to your account. Match the Sheet ID in the 'Export' node to your newly created sheet.

Official Cloudflare API documentation

For full details and specifications please use the API documentation at: https://developers.cloudflare.com/api/

Potential API timeouts

If you encounter Cloudflare API timeouts, I would suggest putting a simple sleep/wait node somewhere in the loop, for a couple of seconds, and it should resolve the timeouts.

Google Sheet

I've used the simple Google Sheets conditional formatting feature to visually distinguish the on|off toggles I was interested in, so I can easily get a high-level overview when debugging some of the settings on my hosts. Please use your own logic or change it completely.
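A minimal sketch of the paging loop over the zones list, assuming a Cloudflare API token with read access; the environment variable name is a placeholder:

```javascript
// Page through all Cloudflare zones (50 per page).
const zones = [];
let page = 1, totalPages = 1;
while (page <= totalPages) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones?page=${page}&per_page=50`,
    { headers: { Authorization: `Bearer ${process.env.CF_API_TOKEN}` } }
  );
  const body = await res.json();
  zones.push(...body.result);                // accumulate zone objects
  totalPages = body.result_info.total_pages; // reported by the API
  page++;
}
console.log(`Fetched ${zones.length} zones`);
```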
by David Ashby
🛠️ Clearbit Tool MCP Server

Complete MCP server exposing all Clearbit Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Clearbit Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Clearbit Tool node with full error handling

📋 Available Operations (3 total)

Every possible Clearbit Tool operation is included:

🔧 Company (2 operations)
• Autocomplete a company
• Enrich a company

👥 Person (1 operation)
• Enrich a person

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Clearbit Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration (see the sketch below)
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Clearbit Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
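As an illustration, one common way to point Claude Desktop at a URL-based MCP server is to bridge it with the mcp-remote package in claude_desktop_config.json; the server name and URL below are placeholders, not values from this template:

```json
{
  "mcpServers": {
    "clearbit-n8n": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance/mcp/<webhook-path>"]
    }
  }
}
```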
by Mark Shcherbakov
Video Guide

I prepared a detailed guide explaining how to build an AI-powered meeting assistant that provides real-time transcription and insights during virtual meetings. YouTube Link

Who is this for?

This workflow is ideal for business professionals, project managers, and team leaders who require effective transcription of meetings for improved documentation and note-taking. It's particularly beneficial for those who conduct frequent virtual meetings across various platforms like Zoom and Google Meet.

What problem does this workflow solve?

Transcribing meetings manually can be tedious and prone to error. This workflow automates the transcription process in real time, ensuring that key discussions and decisions are accurately captured and easily accessible for later review, thus enhancing productivity and clarity in communications.

What this workflow does

The workflow employs an AI-powered assistant to join virtual meetings and capture discussions through real-time transcription. Key functionalities include:
- Automatic joining of meetings on platforms like Zoom, Google Meet, and others, with the ability to provide real-time transcription.
- Integration with transcription APIs (e.g., AssemblyAI) to deliver seamless and accurate capture of dialogue.
- Structuring and storing transcriptions efficiently in a database for easy retrieval and analysis.

Real-Time Transcription: The assistant captures audio during meetings and transcribes it in real time, allowing participants to focus on discussions.
Keyword Recognition: Key phrases can trigger specific actions, such as noting important points or prompting the assistant.
Structured Data Management: The assistant maintains a database of transcriptions linked to meeting details for organized storage and quick access later.

Setup

Preparation
1. Create a Recall.ai API key
2. Set up a Supabase account and table:

```sql
create table public.data (
  id uuid not null default gen_random_uuid (),
  date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
  input jsonb null,
  output jsonb null,
  constraint data_pkey primary key (id)
) tablespace pg_default;
```

3. Create an OpenAI API key

Development
1. Bot Creation: Use a node to create the bot that will join meetings. Provide the meeting URL and set transcription options within the API request (a sketch is shown below).
2. Authentication: Configure authentication settings via a Bearer token for interacting with your transcription service.
3. Webhook Setup: Create a webhook to receive real-time transcription updates, ensuring timely data capture during meetings.
4. Join Meeting: Set the bot to join the specified meeting and actively listen to capture conversations.
5. Transcription Handling: Combine transcription fragments into cohesive sentences and manage dialog arrays for coherence.
6. Trigger Actions on Keywords: Set up keyword recognition that can initiate requests to the OpenAI API for additional interactions based on captured dialogue.
7. Output and Summary Generation: Produce insights and summary notes from the transcriptions that can be stored back into the database for future reference.
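For the bot-creation step, here is a hedged sketch of what the Recall.ai request might look like; the regional host, endpoint path, and body fields are assumptions, so verify them against the Recall.ai documentation:

```javascript
// Hypothetical bot-creation call (endpoint and fields assumed).
const res = await fetch("https://us-east-1.recall.ai/api/v1/bot/", {
  method: "POST",
  headers: {
    Authorization: `Token ${process.env.RECALL_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    meeting_url: "https://meet.google.com/abc-defg-hij", // meeting the bot joins
    bot_name: "AI Meeting Assistant",
    // real-time transcription options and the n8n webhook for updates
    // are configured here per Recall.ai's API reference
  }),
});
const bot = await res.json(); // contains the bot id used to track the session
```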
by Lucas Peyrin
How it works

This template provides a complete, ready-to-use web application for generating high-quality AI prompts. It features a user-friendly web form where you can describe your goal, and it leverages an AI model (Google Gemini) to create a structured, reusable prompt for you. The workflow is a full-stack application built entirely within n8n:

1. Frontend (The Form): A Form Trigger node creates a beautiful, public-facing web form. Here, a user describes the prompt they need and selects which structural components to include (like system instructions, examples, or input variables).
2. Backend (The AI Logic): A LangChain Chain node takes the user's request and constructs a "meta-prompt", a set of instructions for the AI on how to generate the final prompt. The Google Gemini node executes this meta-prompt, creating a well-structured output with clear sections and tags.
3. The Result (The Webpage): After generation, the user is automatically redirected to a new URL. This URL is handled by another Webhook node, which serves a custom-coded HTML page. This beautiful, dark-themed webpage displays the generated prompt and includes a one-click "Copy" button, making it easy to use the result immediately.

This template is a perfect example of how to build interactive web tools with n8n, combining a user interface, backend logic, and a dynamic web response in a single workflow.

Set up steps

Setup time: ~1-3 minutes

This workflow requires a Google AI credential to function.

1. Configure Google AI Credentials: This workflow uses a Google Gemini model. You will need a Google AI API key. In n8n, go to Credentials and click Add credential. Search for Google Gemini and enter your API key. Go back to the workflow, open the Gemini 2.5 Flash node, and select your newly created credential from the dropdown.
2. Activate the Workflow: Click the Active toggle in the top-right corner to turn the workflow on.
3. Access Your Prompt Maker: Open the Prompt Request (Form Trigger) node. Copy the Public URL provided. This is the link to your new web application! Open the link in your browser, fill out the form, and see the magic happen.

Note: This workflow uses environment variables like {{ $env.WEBHOOK_URL }} to build the redirect URL. These are typically set automatically by n8n and should work out-of-the-box on most standard n8n setups.
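For illustration, the redirect URL might be assembled with an n8n expression along these lines; the webhook path and query parameter are placeholders, not the template's exact values:

```
{{ $env.WEBHOOK_URL }}/webhook/prompt-result?id={{ $json.promptId }}
```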
by Akhil Varma Gadiraju
n8n Workflow: Sync Workflows with GitLab

How It Works

This workflow ensures that your self-hosted n8n workflows are version-controlled in a GitLab repository. It compares each current workflow from n8n with its stored counterpart in GitLab. If any differences are detected, the GitLab file is updated with the latest version.

Core Logic:
1. Retrieve Workflows - Fetch all workflows from the n8n REST API.
2. Compare with GitLab - For each workflow, fetch the corresponding file from GitLab and compare the JSON (a sketch of this comparison is shown below).
3. Update if Changed - If differences exist, commit the updated workflow to GitLab using its API.

Setup

Before using the workflow, ensure the following:

Prerequisites:
- n8n: Self-hosted instance with access to the /rest/workflows API.
- GitLab: A repository where workflows will be stored, and a Personal Access Token (PAT) with api and write_repository permissions.
- n8n Nodes Required:
  - HTTP Request (to call the n8n and GitLab APIs)
  - Code or Function nodes (for diffing and formatting)
  - Looping (SplitInBatches or similar)

Configuration:
Set environment variables or workflow credentials for:
- GITLAB_TOKEN
- GITLAB_REPO
- GITLAB_BRANCH (e.g., main)
- GITLAB_FILE_PATH_PREFIX (e.g., n8n-workflows/)

How to Use

1. Import the Workflow into your n8n instance.
2. Configure GitLab API Credentials: Set the GitLab PAT as a header in the HTTP Request node: Private-Token: {{ $env.GITLAB_TOKEN }}
3. Map Workflows to GitLab Paths: Use the workflow name or ID to create the file path. Example: n8n-workflows/workflow-name.json
4. Trigger the Workflow: Can be manually triggered, or scheduled to run at intervals (e.g., daily).
5. Review Commits in GitLab: Each updated workflow will be committed with a message like: "Update workflow: Sample Workflow"

Disclaimer

This workflow does not handle merge conflicts or manual edits made directly in GitLab. Always ensure proper coordination if multiple sources are modifying workflows. Only structural changes are tracked. Non-functional metadata (like timestamps or IDs) may trigger false positives unless filtered. Use at your own risk. Test in a safe environment before applying to production workflows.
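A minimal sketch of the diffing step for a Code node, which strips volatile metadata before comparing so that timestamps and version IDs don't trigger false-positive commits; the dropped field names and input property names are assumptions:

```javascript
// Compare the live n8n workflow with the copy stored in GitLab.
function normalize(workflow) {
  // Drop metadata that changes without a functional edit (assumed fields).
  const { updatedAt, createdAt, versionId, ...rest } = workflow;
  return JSON.stringify(rest);
}

const current = normalize($json.n8nWorkflow);   // fetched from /rest/workflows
const stored = normalize($json.gitlabWorkflow); // decoded from the GitLab file

// A downstream IF node commits to GitLab only when `changed` is true.
return [{ json: { changed: current !== stored } }];
```

A production version may also want to sort object keys recursively before stringifying, since a mere difference in key order would otherwise still register as a change.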
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

Who this is for?

The Chat Conversations with Bright Data MCP Search Engines & Google Gemini workflow is designed for users who need real-time, AI-enhanced conversations powered by live search engine results. This workflow is tailored for:
- Data Analysts - who want live, search-based data fused with AI reasoning.
- Marketing Researchers - seeking up-to-the-minute market or competitor insights via conversational AI.
- Product Managers - exploring user needs, market trends, and competitor analysis in real time.
- AI Developers - building dynamic applications that combine live search data with intelligent conversation agents.
- Growth Hackers - who need fast, conversational research tools for campaign ideation, outreach, or content creation.

What problem is this workflow solving?

Traditional chatbots and AI systems often rely on static, outdated data. This workflow enables AI agents to fetch live search engine data and converse intelligently about it, making interactions dynamic, accurate, and highly contextual. It addresses the major gaps of:
- Outdated Knowledge: regular chatbots lack up-to-date information from live web searches.
- Manual Search Fatigue: manually searching for information and interpreting it is time-consuming.
- Context Bridging: connecting search results into meaningful, conversational replies requires human-level reasoning.

What this workflow does

1. Accepts a user's conversational query input.
2. Triggers a search request to Bright Data's MCP Search Engines API (Google, Bing, etc.) based on the query.
3. Waits for the search task to complete.
4. Retrieves real-time search results.
5. Feeds the search results and the original question into Google Gemini.
6. Generates a human-like, contextually accurate AI response combining live information and conversational flow.
7. Outputs the response back into a chat app.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine. Also, do the "Account Setup" as mentioned in the @brightdata/mcp URL.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data Web Unlocker API token into the Environments textbox as API_TOKEN=<your-token> (a sketch of this credential is shown after this section).
7. Update the HTTP Request for Webhook Notification node for sending the Webhook notification for chat responses.

How to customize this workflow to your needs

- Change Search Engine: add or remove the Search Engine MCP tools based upon the Bright Data MCP Server updates.
- Expand Outputs: send AI chat responses to Slack, Discord, custom chat UIs, WhatsApp, or CRM systems.
- Store conversation logs in a database (PostgreSQL, MongoDB, etc.) for future audits or training.
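As a reference, the MCP Client (STDIO) credential typically boils down to three fields; the values below are placeholders matching the setup described above:

```
Command:      npx
Arguments:    @brightdata/mcp
Environments: API_TOKEN=<your-bright-data-web-unlocker-token>
```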
by Ranjan Dailata
Who this is for?

Indeed Data Scraper & Summarization with Airtable, Bright Data and Google Gemini is an automated workflow that extracts company profile information from Indeed using Bright Data Web Unlocker, transforms the data using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use. This workflow is tailored for:
- Recruiters and HR teams who want quick summaries of companies listed on Indeed.
- Market researchers and analysts needing structured insights into businesses.
- Founders, investors, and consultants scouting potential competitors, partners, or clients.
- No-code enthusiasts looking to automate data extraction and enrichment pipelines without manual scraping or parsing.

What problem is this workflow solving?

Manually gathering structured information about companies on Indeed is time-consuming and inconsistent. Pages vary in structure, and extracting clean, digestible summaries can require technical scraping expertise. This workflow automates:
- Extracting company data from Indeed reliably using Bright Data Web Unlocker.
- Cleaning and summarizing the extracted content using the Google Gemini LLM.
- Storing structured insights directly into Airtable for easy access and further workflows.
It eliminates manual research, saves hours, and produces AI-enhanced, easily searchable records.

What this workflow does

1. Triggers on-demand.
2. Pulls company page URLs from Airtable.
3. Scrapes content from each Indeed company profile using Bright Data Web Unlocker.
4. Sends the raw HTML to Google Gemini for extraction and summarization (a sketch of the prompt is shown below).
5. Sends the summarized data to other platforms via a Webhook notification mechanism.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials for Bright Data. Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
5. In n8n, configure the Airtable Personal Access Token account under Credentials.
6. Update the Webhook Notifier with the Webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:
- **Extend the scraper**: modify Bright Data targets to pull job listings, salaries, or employee reviews via the Airtable data source.
- **Customize the summary prompt**: ask Gemini to extract different attributes, such as hiring trends or practices.
- **Route the output to different destinations**: send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
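A hedged sketch of how a Code node could assemble the Gemini summarization prompt; the attribute list and the rawHtml property name are assumptions to adapt to your own schema:

```javascript
// Build the summarization prompt from the scraped Indeed page.
const rawHtml = $json.rawHtml; // output of the Bright Data Web Unlocker step
const prompt = `You are a research assistant. From the raw Indeed company page HTML below, extract and summarize:
- Company name, industry, and approximate size
- Overall rating and common themes in employee reviews
- A three-sentence plain-English summary

HTML:
${rawHtml}`;

return [{ json: { prompt } }];
```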
by Jimleuk
This n8n template demonstrates how to build your own Qdrant MCP server to extend its functionality beyond that of the official implementation. This n8n implementation exposes other cool API features from Qdrant such as the facet search, grouped search and recommendations APIs. With this, we can build an easily customisable and maintainable Qdrant MCP server for business intelligence. This MCP example is based off an official MCP reference implementation which can be found here - https://github.com/qdrant/mcp-server-qdrant

How it works

An MCP server trigger is used and connected to 5 custom workflow tools. We're using custom workflow tools as there are quite a few nodes required for each task. We use a mix of n8n-supported Qdrant nodes for simple operations such as inserting documents and similarity search, and the HTTP node to hit the Qdrant API directly for facet search, grouped search and recommendations (a sketch of a grouped-search call is shown below). We use "Edit Field" and "Aggregate" nodes to return suitable responses to the MCP client.

How to use

This Qdrant MCP server allows any compatible MCP client to manage a Qdrant collection by supporting select and create operations. You will need to have a collection available before you can use this server. Use the prerequisite manual steps to get started!
1. Connect your MCP client by following the n8n guidelines here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
2. Try the following queries in your MCP client:
   - "Can you help me list the available companies in the collection?"
   - "What do customers say about product deliveries from company X?"
   - "What do customers of company X and company Y say about product ease of use?"

Requirements

- Qdrant for the vector store. This can be a cloud-hosted instance or one you self-host internally.
- MCP client or agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow

Depending on what queries you'll receive, adjust the tool inputs to make it easier for the agent to set the right parameters. Not interested in reviews? The techniques shared in this template can be used for other types of collections. Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
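For reference, a minimal sketch of the grouped-search request the HTTP node can make directly against Qdrant, assuming a reviews collection with a company payload field; the host, collection and field names are placeholders:

```javascript
// Grouped search: one result group per distinct "company" payload value.
const queryEmbedding = [/* embedding vector of the user's question */];
const res = await fetch(
  "https://your-qdrant-host:6333/collections/reviews/points/search/groups",
  {
    method: "POST",
    headers: {
      "api-key": process.env.QDRANT_API_KEY, // omit if auth is disabled
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      vector: queryEmbedding, // same dimensionality as the collection
      group_by: "company",    // payload field to group hits by
      limit: 3,               // number of groups to return
      group_size: 5,          // hits per group
    }),
  }
);
const groups = (await res.json()).result.groups;
```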
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

Who this is for?

The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted to disk. This workflow is tailored for:
- Data Analysts: who require structured LinkedIn datasets for analytics and reporting.
- Marketing and Sales Teams: looking to enrich lead databases, track company updates, and identify market trends.
- Recruiters and Talent Acquisition Specialists: who want to automate candidate sourcing and company research.
- AI Developers: integrating real-time professional data into intelligent applications.
- Business Intelligence Teams: needing current and comprehensive LinkedIn data to drive strategic decisions.

What problem is this workflow solving?

Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow solves:
- Reliable web scraping using the Bright Data MCP Server LinkedIn tools.
- LinkedIn person and company web scraping with AI Agents set up with the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persisting the LinkedIn person and company info to disk.
- Performing a Webhook notification with the LinkedIn person and company info.

What this workflow does?

This n8n workflow performs the following steps:
1. Trigger: start manually.
2. Input URL(s): specify the LinkedIn person and company URLs.
3. Web Scraping (Bright Data): use Bright Data's MCP Server LinkedIn tools to extract the person and company data.
4. Data Transformation & Aggregation: use the Google Gemini LLM to handle the data transformation.
5. Store / Output: save results to disk and perform a Webhook notification (a sketch of this step is shown below).

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel.
6. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
7. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>.
8. Update the LinkedIn person and company URLs in the workflow.
9. Update the Webhook HTTP Request node with the Webhook endpoint of your choice.
10. Update the file name and path to persist on disk.

How to customize this workflow to your needs

- Different Inputs: instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Data Extraction: modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- Outputs: update the Webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
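A hedged sketch of a Code node that shapes the merged person and company output for the store/output step; the property names and file path are assumptions to align with your own nodes:

```javascript
// Prepare the webhook payload and the on-disk file name in one pass.
const person = $json.person;   // transformed LinkedIn person data (assumed field)
const company = $json.company; // transformed LinkedIn company data (assumed field)

const slug = company.name.replace(/\s+/g, "-").toLowerCase();

return [{
  json: {
    webhookPayload: { person, company, generatedAt: new Date().toISOString() },
    fileName: `/data/linkedin/${slug}.json`, // consumed by the write-to-disk node
  },
}];
```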