by Johan Denoyer
**How it works**
1) Extracts all company entries from Agile CRM.
2) Searches for each company name in the French INSEE OpenData database to retrieve its address and government ID (SIREN) (a lookup sketch appears after the setup steps).
3) Updates the entries with the data extracted from the French INSEE OpenData database.

The workflow also has a read-only feature to make sure an entry is not overwritten.

**Setup steps**
- Add your Agile CRM credentials.
- Add your INSEE OpenData credentials.
- Add two company custom fields in your Agile CRM (for the SIREN data and for read-only support).
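For orientation, here is a minimal sketch of what the INSEE lookup step does. The endpoint path, query syntax, response fields, and the INSEE_TOKEN environment variable are assumptions for illustration; adapt them to the INSEE OpenData API version and credentials you actually use.

```javascript
// Hypothetical INSEE Sirene lookup sketch (endpoint and fields are assumptions).
const companyName = "ACME SAS"; // value read from the Agile CRM company entry

const response = await fetch(
  "https://api.insee.fr/entreprises/sirene/V3/siren?q=denominationUniteLegale:" +
    encodeURIComponent(`"${companyName}"`),
  { headers: { Authorization: `Bearer ${process.env.INSEE_TOKEN}` } }
);
const data = await response.json();

// The SIREN and address found here are then mapped onto the two custom fields in Agile CRM.
console.log(JSON.stringify(data, null, 2));
```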
by Mark Shcherbakov
**Video Guide**
I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information. Youtube Link

**Who is this for?**
This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

**What problem does this workflow solve?**
Manually processing invoices can be time-consuming and error-prone. This automation eliminates the need for manual data entry by capturing invoice details directly from uploaded documents and storing structured data efficiently. This improves productivity and accuracy across financial operations.

**What this workflow does**
The workflow leverages n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse essential billing details, and store the extracted data in a structured format. The key functionalities include:
- Real-time detection of new invoices via Google Drive triggers.
- Automated HTTP requests to initiate parsing through LlamaCloud.
- Structured storage of invoice details and line items in a database for future reference.

- **Google Drive Integration:** Monitors a specific folder in Google Drive for new invoice uploads.
- **Parsing with LlamaParse:** Automatically sends invoices for parsing and processes results through webhooks.
- **Data Storage in Airtable:** Creates records for invoices and their associated line items, allowing for detailed tracking.

**Setup: N8N Workflow**
1. **Google Drive Trigger:** Set up a trigger to detect new files in a specified folder dedicated to invoices.
2. **File Upload to LlamaParse:** Create an HTTP request that sends the invoice file to LlamaParse for parsing, including the relevant header settings and webhook URL (a minimal request sketch follows this list).
3. **Webhook Processing:** Establish a webhook node to handle parsed results from LlamaParse, extracting the needed invoice details.
4. **Invoice Record Creation:** Create initial records for invoices in your database using the parsed details received from the webhook.
5. **Line Item Processing:** Transform string data into structured line item arrays and create individual records for each item linked to the main invoice.
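As a rough guide to step 2, the following sketch shows one way the upload request could look. The endpoint path, the webhook_url form field, the file name, and the LLAMA_CLOUD_API_KEY environment variable are assumptions; check the LlamaCloud documentation and your n8n webhook URL before reusing it.

```javascript
// Minimal sketch of the "File Upload to LlamaParse" HTTP request (Node 18+).
import { readFile } from "node:fs/promises";

const form = new FormData();
form.append("file", new Blob([await readFile("./invoice.pdf")]), "invoice.pdf");
form.append("webhook_url", "https://your-n8n-instance/webhook/llamaparse-result"); // hypothetical URL

const res = await fetch("https://api.cloud.llamaindex.ai/api/parsing/upload", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.LLAMA_CLOUD_API_KEY}` },
  body: form,
});
console.log(await res.json()); // returns a parsing job id; results arrive later on the webhook
```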
by Agniva Mahata
**How it Works**
- **Trigger:** The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").
- **Action Routing:** A "Switch" node directs the workflow based on the action query parameter received from the webhook. This determines which part of the book creation process is executed.
- **Data Retrieval:** The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.
- **AI Processing:**
  - **Book Details Generation:** If the action is "Generate Book Details," an AI Agent (powered by a Large Language Model (LLM) like Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
  - **Chapter Generation:** If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and the previous research. A structured output parser ensures the chapter data is in the correct format. The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
  - **Chapter Research Generation:** If the action is "Generate Chapter Research," another AI Agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
  - **Chapter Content Generation:** If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.
- **Airtable Updates:** In each of the AI processing steps, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

**Set Up Steps**

Airtable Setup (Estimated time: 10-15 minutes):
1. Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This will create the "Book" and "Chapter" tables with the necessary fields.
2. In the "Book" table, create three Airtable Automations:
   - Trigger: When a record matches conditions -> Action is Generate Book Details
   - Action: Run a script. Use the following script:

```javascript
let autoRoute = input.config();
await fetch(autoRoute.webhookUrl + "?recordId=" + autoRoute.recordId + "&action=" + autoRoute.action);
```

   - In the script action's configuration, add three "Input variables":
     - webhookUrl (map it to your n8n webhook URL, obtained in the next step)
     - recordId (map it to the Airtable record ID)
     - action (map it to Action)
   - Repeat this process to create two more automations in the "Book" table, identical except triggered when Action is Generate Chapters.
3. In the "Chapter" table, create two Airtable Automations:
   - Trigger: When a record matches conditions -> Action is Generate Chapter Research
   - Action: Run a script (use the same script as above, with the same input variables).
   - Create a second automation, identical except triggered when Action is Generate Chapter Content.
n8n Setup (Estimated time: 15-20 minutes):
1. Import the provided JSON workflow into n8n.
2. **Webhook Node:** Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: once you've tested and are ready to go live, switch to the "Production URL."
3. **Airtable Nodes:** Configure all Airtable nodes (there are eight). You'll need to connect your Airtable account using OAuth 2. Select the correct Base ("Book Agency [v1] Cobuild" or whatever you named it) and Table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
4. **LLM Nodes (Google Gemini & OpenAI):** Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
5. **Perplexity Nodes:** Connect your Perplexity AI API to the Perplexity nodes. You'll need an API key for it.
6. Activate the workflow.

Testing (Estimated time: 5-10 minutes):
1. Go to your Airtable "Book" table.
2. Create a new record.
3. Fill in the "Idea" field with a book concept.
4. Change the "Action" field to "Generate Book Details". The Airtable automation should trigger, sending a request to your n8n webhook.
5. Monitor the n8n execution log to see the workflow in action.
6. Check the Airtable record to see if the "Research" field is populated.
7. Repeat the testing for Generate Chapters, Generate Chapter Research, and Generate Chapter Content.
by Onur
**Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini**

Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

**Who is this for?**
This template is perfect for:
- **Content creators** who want to quickly produce podcasts from current events.
- **Students** looking for an efficient way to create audio content for projects or assignments.
- **Individuals** interested in generating their own podcasts without technical expertise.

**Setup Information**
1. Install n8n: If you haven't already, download and install n8n from n8n.io.
2. Import the Workflow: Copy the JSON code for this workflow and import it into your n8n instance.
3. Configure Credentials:
   - Gemini API: Set up your Gemini API credentials in the workflow's LLM nodes.
   - Hugging Face Token: Obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
4. Customize (Optional):
   - Filtering Criteria: Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences.
   - Output Options: Modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

**Prerequisites**
- An active n8n instance.
- Basic understanding of n8n workflows (no coding required).
- API credentials for Gemini and a Hugging Face account with an access token.

**What problem does it solve?**
This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

**What are the benefits?**
- **Time-saving:** Create podcasts in minutes, not hours.
- **Easy to use:** No coding or technical skills required.
- **Customizable:** Adapt the workflow to your specific needs and preferences.
- **Cost-effective:** Leverage free or low-cost services like Gemini and Hugging Face.

**How does it work?**
1. The workflow fetches news articles from the BBC website.
2. It filters articles based on their suitability for a podcast.
3. It extracts the full content of the selected articles.
4. It uses the Gemini LLM to create a podcast script.
5. It converts the script to speech using Hugging Face's text-to-speech model (a request sketch appears at the end of this section).
6. The final podcast audio is ready for use.

**Nodes in the Workflow**
- Fetch BBC News Page: Retrieves the main BBC News page.
- News Classifier: Categorizes news articles using the Gemini LLM.
- Fetch BBC News Detail: Extracts detailed content from suitable articles.
- Basic Podcast LLM Chain: Generates a podcast script using the Gemini LLM.
- HTTP Request: Converts the script to speech using Hugging Face.

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

**Additional Tips**
- Explore the n8n documentation and community resources for more advanced customization options.
- Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
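As a reference for the text-to-speech step, here is a rough sketch of the Hugging Face Inference API call the HTTP Request node performs. The model name, output file name, and HF_TOKEN environment variable are assumptions; use whichever TTS model you configured in the workflow.

```javascript
// Sketch of the text-to-speech request against the Hugging Face Inference API (Node 18+).
import { writeFile } from "node:fs/promises";

const res = await fetch(
  "https://api-inference.huggingface.co/models/facebook/mms-tts-eng", // hypothetical model choice
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: "Here is today's podcast script..." }),
  }
);

// The API returns raw audio bytes; save them as the podcast episode file.
await writeFile("podcast.flac", Buffer.from(await res.arrayBuffer()));
```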
by Jaruphat J.
**Overview**
This workflow automatically saves files received via the LINE Messaging API into Google Drive and logs the file details into a Google Sheet. It checks the file type against allowed types, organizes files into date-based folders and (optionally) file type–specific subfolders, and sends a reply message back to the LINE user with the file URL or an error message if the file type is not permitted.

**Who is this for?**
- Developers & IT Administrators: Looking to integrate LINE with Google Drive and Sheets for automated file management.
- Businesses & Marketing Teams: That want to automatically archive media files and documents received from users via LINE.
- Anyone Interested in No-Code Automation: Users who want to leverage n8n's capabilities without heavy coding.

**What Problem Does This Workflow Solve?**
- Automated File Organization: Files received from LINE are automatically checked for allowed file types, then stored in a structured folder hierarchy in Google Drive (by date and/or file type).
- Data Logging: Each file upload is recorded in a Google Sheet, providing an audit trail with file names, upload dates, URLs, and types.
- Instant Feedback: Users receive an immediate reply via LINE confirming the file upload, or an error message if the file type is not allowed.

**What This Workflow Does**
1. Receives Incoming Requests: A webhook node ("LINE Webhook Listener") listens for POST requests from LINE, capturing file upload events and associated metadata.
2. Configuration Loading: A Google Sheets node ("Get Config") reads configuration data (e.g., parent folder ID, allowed file types, folder organization settings, and credentials) from a pre-defined sheet.
3. Data Merging & Processing: The "Merge Event and Config Data" and "Process Event and Config Data" nodes merge and structure the event data with the configuration settings. A "Determine Folder Info" node calculates folder names based on the configuration: if Store by Date is enabled, it uses the current date (or a specified date) as the folder name; if Store by File Type is also enabled, it uses the file's type (e.g., image) for a subfolder.
4. Folder Search & Creation: The workflow searches for an existing date folder ("Search Date Folder"). If the date folder is not found, an IF node ("Check Existing Date Folder") routes to a "Create Date Folder" node. Similarly, for file type organization, the workflow uses a "Search FileType Folder" node (with appropriate conditions) to look for a subfolder, or creates it if not found. The "Set Date Folder ID" and "Set Image Folder ID" nodes capture and merge the resulting folder IDs. Finally, the "Config final ParentId" node sets the final target folder ID based on the configuration conditions:
   - Store by Date: TRUE, Store by File Type: TRUE: use the file type folder (inside the date folder).
   - Store by Date: TRUE, Store by File Type: FALSE: use the date folder.
   - Store by Date: FALSE, Store by File Type: TRUE: use the file type folder.
   - Store by Date: FALSE, Store by File Type: FALSE: use the Parent Folder ID from the configuration.
5. File Retrieval and Validation: An HTTP Request node ("Get File Binary Content") fetches the file's binary data from the LINE API. A Function node ("Validate File Type") checks whether the file's MIME type is included in the allowed list (e.g., "audio|image|video"). If not, it throws an error that is captured for the reply (a sketch of this check appears at the end of this section).
6. File Upload and Logging: The "Upload File to Google Drive" node uploads the validated binary file to the final target folder. After a successful upload, the "Log File Details to Google Sheet" node logs details such as file name, upload date, Google Drive URL, and file type into a designated Google Sheet.
7. User Feedback: The "Check Reply Enabled Flag" node checks whether the reply feature is enabled. Finally, the "Send LINE Reply Message" node sends a reply message back to the LINE user with either the file URL (if the upload was successful) or an error message (if the file type was not allowed).

**Setup Instructions**
1. Google Sheets Setup: Create a Google Sheet with two sheets:
   - config: Include columns for Parent Folder Path, Parent Folder ID, Store by Date (boolean), Store by File Type (boolean), Allow File Types (e.g., "audio|image|video"), CurrentDate, Reply Enabled, and CHANNEL ACCESS TOKEN.
   - fileList: Create headers for File Name, Date Uploaded, Google Drive URL, and File Type.
   For an example of the required format, check this Google Sheets template: Google Sheet Template
2. Google Drive Credentials: Set up and authorize your Google Drive credentials in n8n.
3. LINE Messaging API: Configure your LINE Developer Console webhook to point to the n8n Webhook URL ("Line Chat Bot" node). Ensure the proper Channel Access Token is stored in your Google Sheet.
4. n8n Workflow Import: Import the provided JSON file into your n8n instance. Verify node connections and update any credential references as needed.
5. Test the Workflow: Send a test message via LINE to confirm that files are properly validated, uploaded, and logged, and that reply messages are sent.

**How to Customize This Workflow**
- Allowed File Types: Adjust the "Allow File Types" field in your config sheet to control which file types are accepted.
- Folder Structure: Modify the logic in the "Determine Folder Info" and subsequent folder nodes to change how folders are structured (e.g., use different date formats or add additional categorization).
- Logging: Update the "Log File Details to Google Sheet" node if you wish to log additional file metadata.
- Reply Messages: Customize the reply text in the "Send LINE Reply Message" node to include more detailed information or instructions.
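For reference, here is a minimal sketch of the file-type check described in step 5, written as an n8n Code node. The field names mimeType and allowFileTypes are assumptions based on the description; map them to the fields your "Get Config" and download steps actually produce.

```javascript
// Rough equivalent of the "Validate File Type" check (Code node, "Run Once for Each Item").
const allowed = ($json.allowFileTypes || "audio|image|video").split("|");
const mimeType = $json.mimeType || "";

const isAllowed = allowed.some((type) => mimeType.startsWith(type + "/"));
if (!isAllowed) {
  // The thrown message is what the "Send LINE Reply Message" node reports back to the user.
  throw new Error(`File type "${mimeType}" is not allowed`);
}

return { json: { ...$json, validated: true } };
```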
by Jimleuk
Ever wanted to build your own RAG search over Youtube videos? Well, now you can! This n8n template shows how you can build a very capable Youtube search engine powered by Apify, Qdrant and your LLM of choice to quickly and efficiently browse over many videos for research.

I originally built this template to ask questions about the "n8n @ scale office-hours" livestream videos, but then extended it to include the latest videos on the official channel.

Check out a demo here: https://jimleuk.app.n8n.cloud/webhook/n8n_videos

**How it works**
- Stage 1 collects the Youtube video transcripts and pushes them into a vector database. For this, I've used Apify to scrape Youtube and Qdrant to store the embeddings. Transcripts are broken down into smaller chunks and carefully tagged with metadata to assist in later search and filtering.
- Stage 2 builds a web frontend for the user to query the vectorised transcripts. I'm using a webhook to serve a simple web app and an API to dynamically fetch the results.
- When searching for a video, I've opted to use Qdrant's search groups API which, in this use-case, performs better as it returns a wider range of video results (see the sketch at the end of this section).
- In the web frontend, when the user clicks on a result, the matching Youtube video plays in an embedded video player.

**How to use**
- Once credentials are all set, first run steps 1-3 to populate your vector store.
- Next, set the workflow to active to expose the web frontend.
- Visit the webhook URL in your browser to use it.
- If only for personal use, you may want to remove the rate-limiting mechanism in step 4.

**Requirements**
- **Apify** for Youtube channel and video scraping
- **Qdrant** for the vector store
- **OpenAI** for the LLM and embeddings

**Customising the template**
- Not interested in official n8n videos? Swap to a different channel; this template will work on many, as long as the videos are not private or set to prevent embeds.
- Technically any vector store should work, but it may not have the same grouping API. Use the simple vector store node and revert to basic searching instead.
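To illustrate why the search groups API helps here, the sketch below shows a grouped search request against Qdrant's HTTP API. The collection name, the group_by payload field, and the QDRANT_URL/QDRANT_API_KEY environment variables are assumptions; the query vector would come from your embeddings model.

```javascript
// Sketch of a Qdrant "search groups" request: one group per video rather than per chunk.
const queryEmbedding = [/* query vector produced by the embeddings model */];

const res = await fetch(
  `${process.env.QDRANT_URL}/collections/youtube_transcripts/points/search/groups`,
  {
    method: "POST",
    headers: {
      "api-key": process.env.QDRANT_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      vector: queryEmbedding,
      group_by: "metadata.video_id", // group results by source video
      limit: 5,                      // number of videos to return
      group_size: 3,                 // best-matching chunks kept per video
      with_payload: true,
    }),
  }
);
const { result } = await res.json();
console.log(result.groups);
```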
by Miquel Colomer
This n8n workflow template automates the process of finding LinkedIn profiles for a person based on their name and company. It scrapes Google search results via Bright Data, parses the results with GPT-4o-mini, and delivers a personalized follow-up email with insights and suggested outreach steps.

🚀 **What It Does**
- Accepts a user-submitted form with a person's full name and company.
- Performs a Google search using Bright Data to find LinkedIn profiles and company data.
- Uses GPT-4o-mini to parse the HTML results and identify matching profiles.
- Filters and selects the most relevant LinkedIn entry.
- Analyzes the data to generate a buyer persona and follow-up strategy.
- Sends a styled email with insights and outreach steps.

🛠️ **Step-by-Step Setup**
1. Deploy the form trigger to accept person data (name, position, company).
2. Build a Google search query from the user input.
3. Scrape search results using Bright Data.
4. Extract HTML content using the HTML node.
5. Use GPT-4o-mini to parse LinkedIn entries and company insights.
6. Filter for matches based on the user input (see the sketch at the end of this section).
7. Merge relevant data and generate personalized outreach content.
8. Send the email to a predefined address.
9. Show a final confirmation message to the user.

🧠 **How It Works: Workflow Overview**
- **Trigger:** When User Completes Form
- **Search:** Edit Url LinkedIn, Get LinkedIn Entry on Google, Extract Body and Title, Parse Google Results
- **Matching:** Extract Parsed Results, Filter, Limit, IF LinkedIn Profile is Found?
- **Fallback:** Form Not Found if no match
- **Company Lookup:** Edit Company Search, Get Company on Google, Parse Results, Split Out
- **Content Generation:** Merge, Create a Followup for Company and Person
- **Email Delivery:** Send Email, Form Email Sent

📨 **Final Output**
An HTML-styled email (using Tailwind CSS) with:
- Matched LinkedIn profile
- Company insights
- Persona-based outreach strategy

🔐 **Credentials Used**
- **BrightData account** for scraping Google search results
- **OpenAI account** for GPT-4o-mini-powered parsing and content generation
- **SMTP account** for sending follow-up emails

❓ **Questions?**
Template and node created by Miquel Colomer and n8nhackers. Need help customizing or deploying? Contact us for consulting and support.
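The matching step (Filter + Limit) boils down to comparing the parsed entries against the form input. A minimal sketch follows; the parsedResults shape ({ name, company, url }) is hypothetical and should mirror whatever your GPT-4o-mini parser actually returns.

```javascript
// Sketch of the matching logic behind the Filter and Limit nodes.
const query = { name: "Jane Doe", company: "Acme Corp" }; // values from the form trigger

const parsedResults = [
  { name: "Jane Doe", company: "Acme Corp", url: "https://www.linkedin.com/in/janedoe" },
]; // hypothetical output of the parsing step

const matches = parsedResults.filter(
  (entry) =>
    entry.name.toLowerCase().includes(query.name.toLowerCase()) &&
    entry.company.toLowerCase().includes(query.company.toLowerCase())
);

// Keep only the most relevant profile; if none is found, the fallback form is shown.
const bestMatch = matches[0] ?? null;
console.log(bestMatch);
```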
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

**What you'll find:**

1️⃣ **Naive Sequential Chaining**
The simplest but least efficient approach: connecting LLM nodes in a direct sequence. Easy to set up for beginners, but it becomes unwieldy and slow as your chain grows.

2️⃣ **Agent-Based Processing with Memory**
Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.

3️⃣ **Parallel Processing for Maximum Speed**
Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context (see the illustration at the end of this section).

**Setup Instructions:**
- API Credentials: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can modify the model in each Anthropic Chat Model node, or pick an entirely different LLM.
- For Cloud Users: If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
- Test Data: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
- Customization: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows!

Follow me on LinkedIn for more tips on AI automation and n8n workflows!
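The core idea behind section 3 can be shown with a few lines of JavaScript: each prompt is fired as its own request and all results are awaited together. In the template this is handled by the "LLM steps - parallel" HTTP Request node; the webhook path below is hypothetical.

```javascript
// Illustration of the parallel-processing idea: independent prompts, no shared context.
const prompts = [
  "Summarize the article in one paragraph.",
  "List three key takeaways.",
  "Suggest a title for a LinkedIn post about it.",
];

const responses = await Promise.all(
  prompts.map((prompt) =>
    fetch("https://your-n8n-instance/webhook/llm-step", { // hypothetical sub-workflow webhook
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    }).then((r) => r.json())
  )
);
// Results come back in the same order as the prompts, but each call ran independently.
console.log(responses);
```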
by Jimleuk
This n8n template demonstrates how to build your own Qdrant MCP server to extend its functionality beyond that of the official implementation. This n8n implementation exposes other cool API features from Qdrant such as the facet search, grouped search, and recommendations APIs. With this, we can build an easily customisable and maintainable Qdrant MCP server for business intelligence.

This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/qdrant/mcp-server-qdrant

**How it works**
- An MCP server trigger is used and connected to 5 custom workflow tools. We're using custom workflow tools because quite a few nodes are required for each task.
- We use a mix of n8n-supported Qdrant nodes for simple operations such as inserting documents and similarity search, and the HTTP node to hit the Qdrant API directly for facet search, grouped search, and recommendations (see the sketch at the end of this section).
- We use "Edit Field" and "Aggregate" nodes to return suitable responses to the MCP client.

**How to use**
- This Qdrant MCP server allows any compatible MCP client to manage a Qdrant collection by supporting select and create operations. You will need to have a collection available before you can use this server. Use the prerequisite manual steps to get started!
- Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- Try the following queries in your MCP client:
  - "Can you help me list the available companies in the collection?"
  - "What do customers say about product deliveries from company X?"
  - "What do customers of company X and company Y say about product ease of use?"

**Requirements**
- Qdrant for the vector store. This can be a cloud-hosted instance or one you self-host internally.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**
- Depending on what queries you'll receive, adjust the tool inputs to make it easier for the agent to set the right parameters.
- Not interested in reviews? The techniques shared in this template can be used for other types of collections.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
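As an example of hitting the Qdrant API directly from the HTTP node, the sketch below shows a recommendations request. The collection name, point IDs, payload filter, and QDRANT_URL/QDRANT_API_KEY environment variables are assumptions for illustration.

```javascript
// Sketch of the recommendations call one of the custom workflow tools could perform.
const res = await fetch(
  `${process.env.QDRANT_URL}/collections/customer_reviews/points/recommend`,
  {
    method: "POST",
    headers: {
      "api-key": process.env.QDRANT_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      positive: [42, 97], // point IDs the MCP client marked as relevant
      filter: { must: [{ key: "company", match: { value: "Company X" } }] },
      limit: 5,
      with_payload: true,
    }),
  }
);
const { result } = await res.json();
console.log(result); // aggregated and returned to the MCP client via Edit Field / Aggregate
```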
by n8n Team
This workflow sends a message to a Discord channel when a new row is added or an existing row is updated in a Google Sheet. The message includes all data rows from the Google Sheet.

**Prerequisites**
- Discord account and Discord credentials.
- Google account and Google credentials.

**How it works**
Using a Code node, we take the retrieved Google Sheet data and build a custom message that is sent to Discord (a minimal sketch is shown below). The message is sent to the Discord channel specified in the Discord node.

**Setup**
This workflow requires that you set up a Discord webhook and have an existing Google Sheet with data. See how to set up a Discord webhook here.
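A minimal sketch of that Code node follows. The column names (Name, Status) are hypothetical; replace them with your sheet's actual headers.

```javascript
// Code node ("Run Once for All Items"): turn the Google Sheet rows into one Discord message.
const rows = $input.all().map((item) => item.json);

const content =
  "**Sheet update**\n" +
  rows.map((row) => `• ${row.Name}: ${row.Status}`).join("\n");

// Discord caps a single message at 2000 characters.
return [{ json: { content: content.slice(0, 2000) } }];
```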
by Samir Saci
Tags: EU Legislation, Sustainability, Automation, Web Scraping, OpenAI, Google Sheets, Policy Monitoring, Climate

**Context**
Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium, and large companies. This workflow is part of our broader initiative to monitor and act on sustainability legislation in Europe.

> How do you know if new EU laws will impact your business's sustainability goals?

This n8n workflow automatically scrapes the EU Parliament's legislative portal to find and flag procedures related to environmental sustainability.

📬 For business inquiries, feel free to connect with me on LinkedIn

**Who is this template for?**
This workflow is useful for:
- **Sustainability consultants** monitoring legal frameworks
- **NGOs and researchers** tracking environmental regulations
- **Companies** aligning with **CSRD** or **EU Green Deal** objectives
- **Policy analysts** looking for automation tools

**What does it do?**
This n8n workflow:
- 🌐 Scrapes the EU Parliament legislative portal for yesterday's entries
- 🧠 Uses OpenAI to classify whether each procedure is related to sustainability
- 🗂️ Filters out irrelevant items
- 📊 Saves the results in a Google Sheet
- ✅ Creates a Google Task for each relevant file to review the legislation

**How it works**
1. Trigger manually or on a schedule
2. Scrape HTML blocks for scheduled debates
3. Parse each procedure to extract the Title, Committee, Rapporteur, and PDF link
4. Call GPT-4-turbo to check if the topic matches the sustainability criteria
5. Filter responses based on "yes" or "no" (see the sketch at the end of this section)
6. Store valid items in Google Sheets
7. Generate tasks in Google Tasks

The AI only flags procedures that directly impact the environment, circular economy, or pollution control.

**What do I need to get started?**
You'll need:
- A Google Sheet connected to your n8n instance
- An OpenAI account with GPT-4 access
- A Google Tasks list

**Follow the Guide!**
Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to monitor sustainability regulations in Europe.

🎥 Watch My Tutorial

**Notes**
- The AI filters are strict; you can customise the system prompt to match your needs.
- This is ideal for tracking legislative risk for climate regulations.
- This workflow was built using n8n version 1.85.4

Submitted: April 21, 2025
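For clarity on step 5, the filter can be as simple as keeping only the items the model answered "yes" for. The field name classification below is an assumption; point it at whatever field your OpenAI node writes the answer to.

```javascript
// Sketch of the "yes"/"no" filter as an n8n Code node ("Run Once for All Items").
return $input.all().filter((item) =>
  (item.json.classification || "").trim().toLowerCase().startsWith("yes")
);
```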
by Ranjan Dailata
**Who this is for?**
Indeed Data Scraper & Summarization with Airtable, Bright Data and Google Gemini is an automated workflow that extracts company profile information from Indeed using Bright Data Web Unlocker, transforms the data using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams who want quick summaries of companies listed on Indeed.
- Market researchers and analysts needing structured insights into businesses.
- Founders, investors, and consultants scouting potential competitors, partners, or clients.
- No-code enthusiasts looking to automate data extraction and enrichment pipelines without manual scraping or parsing.

**What problem is this workflow solving?**
Manually gathering structured information about companies on Indeed is time-consuming and inconsistent. Pages vary in structure, and extracting clean, digestible summaries can require technical scraping expertise. This workflow automates:
- Extracting company data from Indeed reliably using Bright Data Web Unlocker.
- Cleaning and summarizing the extracted content using the Google Gemini LLM.
- Storing structured insights directly into Airtable for easy access and further workflows.

It eliminates manual research, saves hours, and produces AI-enhanced, easily searchable records.

**What this workflow does**
1. Triggers on demand.
2. Pulls company page URLs from Airtable.
3. Scrapes content from each Indeed company profile using Bright Data Web Unlocker (see the request sketch at the end of this section).
4. Sends the raw HTML to Google Gemini for extraction and summarization.
5. Sends the summarized data to other platforms via a webhook notification mechanism.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials for Bright Data. The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. In n8n, configure the Airtable Personal Access Token account under Credentials.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
This workflow is built to be flexible, whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:
- **Extend the scraper:** Modify the Bright Data targets to pull job listings, salaries, or employee reviews via the Airtable data source.
- **Customize the summary prompt:** Ask Gemini to extract different attributes, such as hiring trends, practices, etc.
- **Route the output to different destinations:** Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
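As a rough guide to the scraping step, the sketch below shows what a Web Unlocker request could look like. The endpoint shape and body fields reflect Bright Data's request API as I understand it, and the zone name, target URL, and BRIGHTDATA_TOKEN environment variable are assumptions; verify the exact parameters against your zone configuration.

```javascript
// Sketch of a Bright Data Web Unlocker request for one Indeed company page (Node 18+).
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHTDATA_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                     // your Web Unlocker zone name
    url: "https://www.indeed.com/cmp/example", // company URL pulled from Airtable
    format: "raw",                             // return the page HTML as-is
  }),
});
const html = await res.text();
// The raw HTML is then handed to Google Gemini for extraction and summarization.
console.log(html.length);
```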