by GuanNan
**Who is this for?**
This template is designed for anyone who wants to integrate MCP with their AI Agents. Whether you're a developer, a data analyst, or an automation enthusiast, if you're looking to leverage the power of MCP and Google Calendar in your n8n workflows, this template is for you.

**What problem is this workflow solving?**
This template caters to MCP beginners seeking a hands-on example and to developers looking to integrate a Google Calendar MCP service. When integrating MCP with Google Calendar, manually updating AI Agents after changes to the Google Calendar tools on the MCP Server is time-consuming and error-prone. This template automates the process, enabling the AI Agent to instantly recognize changes made to the Google Calendar tools on the MCP Server. In project management, for example, it ensures that task schedule updates in Google Calendar are automatically detected by the AI Agent. With detailed steps, it simplifies the integration process for all users.

**What this workflow does**
This workflow focuses on integrating MCP with Google Calendar within n8n. Specifically, it lets you build an MCP Server and Client using Google Calendar nodes in n8n. Any changes made to the Google Calendar tools on the MCP Server are automatically recognized by the MCP Client in the workflow: you can make changes to your Google Calendar (such as adding, deleting, or modifying events) on the MCP Server, and the MCP Client in the n8n workflow will immediately detect these changes without any manual intervention.

**Setup Requirements**
- An active n8n account.
- Access to the Google Calendar API. You need to enable the Google Calendar API and create the necessary credentials (OAuth 2.0 client ID).
- Basic knowledge of n8n workflows and MCP concepts.

**Step-by-step guide**
1. Create a new workflow in n8n: Log in to your n8n account and create a new workflow.
2. Add Google Calendar nodes: Search for and add the Google Calendar nodes to your workflow, then configure them with your Google Calendar API credentials.
3. Set up the MCP Server and Client: Use the appropriate nodes in n8n to set up the MCP Server and Client, and connect the Google Calendar nodes to the MCP nodes as required.
4. Test the workflow: Make some changes to your Google Calendar on the MCP Server and check whether the MCP Client in the n8n workflow detects them.

**How to customize this workflow to your needs**
- **Modify the triggers**: Change the conditions under which the MCP Client detects changes. For example, set it to detect only specific types of events in Google Calendar.
- **Integrate with other services**: Add more nodes to the workflow to integrate with other services, such as sending notifications to Slack or saving data to a database when a change is detected.
by Jimleuk
This n8n template imports purchase order submissions from Outlook and converts attached purchase order forms in XLSX format into structured output.

Data entry jobs with user-submitted XLSX forms are time-consuming, incredibly mundane but necessary tasks which in all likelihood are inherited and critical to business operation. While we could dream of system overhauls and modernisation, the fact is that change is hard. There is another way, however: using n8n and AI! n8n offers an end-to-end solution to parse XLSX form attachments using LLM-powered OCR and send the extracted output to your ERP or elsewhere.

**How it works**
- An Outlook trigger is used to watch for incoming purchase order forms submitted via a shared inbox.
- The email attachment for the submission is a form in XLSX format (like this one: Purchase Order Example), which is imported into the workflow.
- The 'Extract from File' node is used with a Code node to convert the XLSX file to markdown, so our LLM can understand it (see the sketch below).
- The Information Extractor node reads and extracts the relevant purchase order details and line items from the form.
- A simple validation step checks for common errors such as a missing PO number or amounts that don't match up. A notification is automatically sent in reply to the buyer if so.
- Once validation passes, a confirmation is sent to the buyer and the structured purchase order output can be sent along to internal systems.

**How to use**
- This template only works if you're expecting and receiving forms in XLSX format. These can be invoices and request forms as well as purchase order forms.
- Update the Outlook nodes with your email or other emails as required.

**What's next?**
I've omitted the last steps to send to an ERP or accounting system as this is dependent on your org.

**Requirements**
- Outlook for emails. Check out how to set up credentials here: https://docs.n8n.io/integrations/builtin/credentials/microsoft
- OpenAI for LLM document understanding and extraction.

**Customising the workflow**
- This template should work for other Excel files. Some will be more complicated than others, so experiment with different parsers, extraction tools, and strategies.
- Customise the Information Extractor schema to pull out the specific data you need. For example, capture any notes or comments given by the buyer.
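Below is a minimal sketch of what the XLSX-to-markdown Code node could look like. It assumes the 'Extract from File' node outputs one item per spreadsheet row with the column headers as JSON keys; adapt the field handling to your actual form layout.

```js
// n8n Code node (Run Once for All Items): turn extracted XLSX rows into a
// markdown table the LLM can read. Assumes 'Extract from File' outputs one
// item per row, with column headers as the JSON keys.
const rows = $input.all().map((item) => item.json);
if (rows.length === 0) {
  return [{ json: { markdown: '' } }];
}

const headers = Object.keys(rows[0]);
const lines = [
  `| ${headers.join(' | ')} |`,
  `| ${headers.map(() => '---').join(' | ')} |`,
  ...rows.map((r) => `| ${headers.map((h) => r[h] ?? '').join(' | ')} |`),
];

return [{ json: { markdown: lines.join('\n') } }];
```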
by Ifeoluwa Ajetomobi
This workflow helps you stay updated with daily launches on Product Hunt. It automatically fetches product details (name, tagline, description, and website), checks if the website redirects to another URL, and logs the final information into a Google Sheet. Perfect for indie hackers, product managers, content curators, and anyone tracking daily launches.

**How It Works**
1. Schedule Trigger – Runs the workflow daily.
2. Set Date – Captures today's date in ISO format for filtering Product Hunt posts.
3. HTTP Request (Product Hunt API) – Retrieves Product Hunt posts for the day using GraphQL (see the example query below).
4. Extract Product Info (Code Node) – Parses the response to pull key details: name, tagline, description, and website URL.
5. HTTP Request (URL Check) – Follows each website URL to detect if it redirects.
6. Merge Data – Combines product info with the final destination URL.
7. Google Sheets Node – Appends all processed product info to your sheet.

**Pre-conditions**
- A valid Product Hunt API token
- A Google account with access to Google Sheets
- A Google Sheet already created with the correct columns (see below)
- Connected Google Sheets and HTTP credentials in n8n

**Google Sheets Setup**
Your spreadsheet should include the following columns (in order):
1. Name
2. Tagline
3. Description
4. Original URL
5. Final URL (after redirect)

Ensure your Google Sheets node uses the correct Spreadsheet ID and Sheet Name.

**Setup Instructions**
1. Product Hunt API Auth: Replace {{YOUR_PRODUCT_HUNT_API_KEY}} in the HTTP Request headers:
```
{
  "Authorization": "Bearer {{YOUR_PRODUCT_HUNT_API_KEY}}"
}
```
2. Google Sheets Node: Connect your Google account, insert your Spreadsheet ID in the settings, specify the sheet name (e.g., Daily Launches), use the "Append" operation, and map the 5 data fields accordingly.

**Notes**
- Only fetches the first 10 posts for the day (this can be extended).
- Consider adding Slack, Discord, or Email nodes to notify you of new entries.
- Useful for building launch databases, research, or content inspiration.
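The template leaves the GraphQL body to the node configuration; a query along these lines would fetch the day's posts (field names follow Product Hunt's public v2 schema, but verify them against the current API docs):

```js
// Request body for the HTTP Request node
// (POST https://api.producthunt.com/v2/api/graphql).
const body = {
  query: `
    query ($postedAfter: DateTime!) {
      posts(postedAfter: $postedAfter, first: 10) {
        edges {
          node {
            name
            tagline
            description
            website
          }
        }
      }
    }`,
  // Filter to posts from today onwards; the Set Date node supplies this value.
  variables: { postedAfter: new Date().toISOString().slice(0, 10) + 'T00:00:00Z' },
};
```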
by Yang
**Who is this for?**
This workflow is built for marketers, sales teams, agencies, virtual assistants, and anyone who regularly researches or contacts local businesses. It's ideal for building lead lists, tracking competitors, or creating location-specific outreach campaigns.

**What problem is this workflow solving?**
Instead of manually searching Google Maps and copying business info into spreadsheets, this automation pulls structured business data (e.g. restaurants, gyms, service providers) and logs it directly into Google Sheets. It saves hours of work and ensures cleaner, more usable data.

**What this workflow does**
The workflow takes a Google Maps search query (like "best restaurants in New York") and sends it to Dumpling AI. It returns a list of places including their name, address, website, phone number, rating, and more. Each result is split into a row and automatically added to a Google Sheet.

**Setup**
1. Dumpling AI
   - Sign up at Dumpling AI and generate your API key.
   - In the HTTP Request node, select Header Auth and paste your key in the Authorization field (a sketch of the request follows below).
2. Google Sheets
   - Create a sheet with the tab name Leads.
   - Add the following column headers to row 1: Name, Address, Phone number, Website, Rating, Price Level, Type, Booking Link, Position.
   - Connect your Google Sheets account and link this sheet in the node.
3. Customize the Query
   - In the HTTP node, replace the query string (e.g., "best+restaurants+in+New+York") with your own search term.
4. Run It
   - Use the manual trigger to test.
   - Optionally swap in a Schedule or Webhook node to run it automatically.

**How to customize this workflow to your needs**
- Change the search query to target different cities or business types.
- Use filters to only save leads with a minimum rating or price level.
- Add GPT to summarize listings or qualify leads.
- Swap Google Sheets for Airtable or a CRM system for deeper integration.
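For reference, the Dumpling AI request could be shaped roughly like this. The endpoint path and body fields are assumptions for illustration only; check Dumpling AI's API reference for the exact Google Maps search endpoint and parameter names:

```js
// Sketch of the HTTP Request node configuration for the Dumpling AI call.
// The URL path and body fields are illustrative assumptions, not the
// documented API; verify against Dumpling AI's docs.
const request = {
  method: 'POST',
  url: 'https://app.dumplingai.com/api/v1/search-maps', // hypothetical path
  headers: {
    Authorization: 'Bearer <YOUR_DUMPLING_AI_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: {
    query: 'best+restaurants+in+New+York', // swap in your own search term
  },
};
```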
by SamirLiu
**📝 What this workflow does**
Every morning at 8 a.m., this workflow fetches the latest AI-related articles from both GNews and NewsAPI. It merges up to 40 new articles daily, selects the 15 most relevant ones on AI technology and applications, and uses GPT-4.1 to generate concise summaries in accurate Traditional Chinese (while preserving essential English technical terms). Each summary also includes the article link for easy referral. The compiled digest is then posted to your designated Telegram account or group.

**👥 Who is this for?**
- AI enthusiasts, professionals, and anyone interested in artificial intelligence news
- Individuals and teams wanting a concise daily digest of AI developments in Traditional Chinese
- Telegram users who prefer automated information delivery

**🎯 What problem does this workflow solve?**
With the rapid evolution of AI technology, it can be overwhelming to keep up with new developments. This workflow addresses information overload by automatically collecting, summarizing, and translating the most important AI news each morning, all delivered conveniently to your chosen Telegram channel or group.

**⚙️ Setup**
1. 🔑 Add NewsAPI and GNews API keys
   - Register for accounts on NewsAPI.org and GNews to obtain your API keys.
   - Input your NewsAPI key directly into the Fetch NewsAPI articles node (an example request follows below).
   - Input your GNews API key into the Fetch GNews articles node.
2. 🤖 Set up your Telegram Bot
   - Create a Telegram Bot via BotFather and copy the generated Bot Token.
   - In n8n, create Telegram Bot credentials using this token.
   - In the Send summary to Telegram node, enter the chat ID of your target user, group, or channel to receive the messages.
3. 🧠 Configure OpenAI credentials
   - In n8n, create a new credential using your OpenAI API key.
   - Assign this credential to the GPT-4.1 Model node (or equivalent OpenAI/AI nodes).

After completing these steps, your workflow is fully configured to fetch, summarize, and deliver daily AI news to your selected Telegram chat automatically.

**🛠️ How to customize this workflow**
- **🔍 Change the topic:** Update the keywords in the NewsAPI and GNews nodes for other subjects (e.g., "blockchain", "quantum computing").
- **⏰ Adjust delivery time:** Modify the scheduled trigger to your preferred hour.
- **✍️ Tweak summary style or language:** Refine the prompt in the AI summarizer node for different tones, or translate into other languages as needed.

**📦 Dependencies**
- NewsAPI account
- GNews account
- Telegram Bot
- OpenAI API access (for GPT-4.1) or a compatible AI model for the Langchain agent
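For reference, the NewsAPI call behind the Fetch NewsAPI articles node looks roughly like this; the query, date window, and page size are illustrative values you can adjust:

```js
// Illustrative request URL for the 'Fetch NewsAPI articles' node.
// Pulls AI-related articles from the last 24 hours, newest first.
const yesterday = new Date(Date.now() - 24 * 60 * 60 * 1000)
  .toISOString()
  .slice(0, 10);

const url =
  'https://newsapi.org/v2/everything' +
  '?q=' + encodeURIComponent('artificial intelligence') +
  `&from=${yesterday}` +
  '&sortBy=publishedAt' +
  '&pageSize=20' +
  '&apiKey=<YOUR_NEWSAPI_KEY>';
```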
by Jimleuk
This n8n template demonstrates how to build a simple Youtube MCP server to look up videos on Youtube and download their transcripts for research purposes.

Youtube videos are a great source of new and updated information on a variety of cutting-edge developments, but they're not always simple to understand, and lengthy videos may take too much time. Using this MCP server, you can extract their transcripts and feed them to your AI agents, which can then break down the content into manageable learnings and insights.

**How it works**
- An MCP server trigger is used and connected to 3 custom workflow tools: Youtube Search, Youtube Transcripts, and Usage Reports.
- Both Youtube tools use an external scraping service called APIFY.com. This is my preference as it's a much simpler interface and there are no rate limits (an example call is sketched below).
- The Youtube Search tool fetches 10 results based on the user's query.
- The Youtube Transcripts tool downloads the subtitles from one or more given URLs.
- The Usage Reports tool pulls in your APIFY.com monthly spending and limits as a way to check your account.

**How to use**
This Apify Youtube MCP server allows any compatible MCP client to research Youtube videos for any desired topic. An Apify account is required, however, to connect and use the service.
- Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- Alternatively, connect any n8n AI agent with the MCP client tool.
- Try the following queries in your MCP client:
  - "what is MCP?"
  - "How can I use MCP in n8n?"
  - "How can I use Apify's official MCP server?"

**Requirements**
- APIFY.com for Youtube scraping. This is a paid service, but there is a $5 free tier which is ample for this template.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**
- Add as many APIFY.com actors as required for your use-case or users.
- Consider using Apify's official MCP server for 4000+ available tools.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
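As a rough sketch, each Youtube tool's sub-workflow calls Apify over HTTP along these lines. The actor ID and input fields below are placeholders; every actor has its own input schema, so check your chosen actor's page on APIFY.com:

```js
// Illustrative Apify call for the Youtube Search tool. The actor ID and
// input fields are examples; consult your actor's input schema.
const url =
  'https://api.apify.com/v2/acts/<ACTOR_ID>' +
  '/run-sync-get-dataset-items?token=<YOUR_APIFY_TOKEN>';

const body = {
  searchQueries: ['what is MCP?'], // the user's query from the MCP client
  maxResults: 10,                  // matches the 10 results mentioned above
};
```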
by Samir Saci
Tags: Accessibility, SEO, Blogging, Marketing, Automation, AI, Web Auditing

**Context**
Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. In my personal blog, I share insights on how to use AI, automation, and data analytics to improve logistics, operations, and digital sustainability practices.

> Have you heard about accessibility?

In this workflow, I use n8n to improve the quality of alternative texts for images on my personal website.

📬 For business inquiries, you can connect with me on LinkedIn

**Who is this template for?**
This workflow is for:
- **Bloggers** and **website owners** who want to **improve accessibility**
- **SEO professionals** looking to boost page performance
- **Web developers** and **product teams** automating web audits

**What does it do?**
This n8n workflow:
- 🔍 Downloads the HTML of a blog or web page
- 🖼️ Extracts all `<img>` tags and their `alt` attributes
- 📉 Detects missing or too-short alt texts
- 🤖 Sends those images to GPT-4o (with vision) to generate new alt descriptions
- 📄 Saves the results into a Google Sheet, updating the alt text when needed

**How it works**
1. Set a page URL using the Set node
2. Download the HTML content
3. Extract image src and alt attributes using a Code node (see the sketch below)
4. Store results in a Google Sheet
5. Filter images with altLength < 50
6. Send the image URL to GPT-4o
7. Update the Google Sheet with the newly generated newAlt text

The AI alt texts are concise, descriptive, and accessibility-compliant.

**What do I need to get started?**
You'll need:
- A Google Sheet to store the audit results
- An OpenAI account with GPT-4o access

**Follow the Guide!**
Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to improve the accessibility of your website.

🎥 Watch My Tutorial

**Notes**
- GPT-generated alt texts are limited to ~125-150 characters for best results
- Use this to comply with WCAG and improve Google indexing
- Easily adapt it to audit multiple domains or e-commerce catalogues

This workflow was built using n8n version 1.85.4
Submitted: April 21, 2025
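A minimal sketch of the extraction step (step 3 above): it assumes the previous node returns the page HTML in a data field and that image attributes are double-quoted, which is typical but worth checking against your own pages:

```js
// n8n Code node sketch: pull every <img> tag's src and alt from the HTML.
// Assumes the downloaded HTML sits in $json.data and attributes use
// double quotes.
const html = $json.data ?? '';

const attr = (tag, name) =>
  (tag.match(new RegExp(`${name}\\s*=\\s*"([^"]*)"`, 'i')) || [])[1] ?? '';

const results = [];
for (const tag of html.match(/<img\b[^>]*>/gi) ?? []) {
  const src = attr(tag, 'src');
  const alt = attr(tag, 'alt');
  results.push({ json: { src, alt, altLength: alt.length } });
}

return results;
```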
by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector, fully automated through n8n.

**⚙️ What It Does**
This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI's text-embedding-3-small model. These embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents. After successful processing, files are moved to a separate "vectorized" folder to avoid duplication.

**💡 Use Cases**
- Powering Retrieval-Augmented Generation (RAG) AI agents
- Semantic search across private documents
- AI assistant knowledge ingestion
- Automated document pipelines for indexing or classification

**🧠 Workflow Highlights**
- **Trigger Options:** Manual or Scheduled (3 AM daily by default)
- **Supported File Types:** PDF, TXT, JSON
- **Embedding Stack:** LangChain Text Splitter, OpenAI Embeddings, PGVector
- **Deduplication:** Files are moved after processing
- **License:** CC BY-NC-SA 4.0
- **Author:** AlexK1919

**🛠 What You'll Need**
- **Google Drive OAuth2** credentials (connected to the Search Folder, Download File, and Move File nodes)
- **OpenAI API Key** (used in the Embeddings OpenAI node)
- **Postgres + PGVector** database (connected in the Postgres PGVector Store node)

**🔧 Step-by-Step Setup Instructions**
1. Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
2. Set your source folder ID in the Search Folder node; this is where incoming files are placed.
3. Set your processed folder ID in the Move File node; files will be moved here after vectorization.
4. Ensure you have a PGVector-enabled Postgres instance and input the table name and collection in the Postgres PGVector Store node.
5. Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
6. Optional: Activate the Schedule Trigger node to run daily, or configure your own schedule.
7. Run manually by triggering 'Test workflow' for on-demand ingestion.

**🧩 Customization Tips**
Want to support more file types or enhance the pipeline?
- **Add new extractors:** Use Extract from File with other formats like DOCX, Markdown, or HTML.
- **Refine logic by file type:** The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json); a sketch of this routing follows below.
- **Pre-process with OCR:** Add an OCR step before extraction to handle scanned PDFs or images.
- **Add filters:** Enhance the Search Folder or Switch node logic to skip specific files or folders.

**📄 License**
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
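For clarity, the Switch node's MIME-type routing is equivalent to something like the Code-node sketch below; the route names are illustrative, not part of the template:

```js
// Sketch of the Switch node's routing logic: pick an extraction route
// based on the file's MIME type. Route names here are illustrative.
const mimeType = $json.mimeType ?? '';

let route;
switch (mimeType) {
  case 'application/pdf':
    route = 'extract-pdf';
    break;
  case 'text/plain':
    route = 'extract-text';
    break;
  case 'application/json':
    route = 'extract-json';
    break;
  default:
    route = 'skip'; // unsupported type: leave the file untouched
}

return [{ json: { ...$json, route } }];
```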
by Amjid Ali
**Automate Digital Delivery After PayPal Purchase Using n8n**
A complete step-by-step guide to seamless template delivery. Built by Amjid Ali, SyncBricks. Deliver personalized files instantly after PayPal transactions using n8n, without writing a single backend line.

**🚀 What This n8n Workflow Does**
This automation template helps you automatically deliver a digital product (such as an n8n template or JSON file) to customers who pay via PayPal, within seconds. You can:
- Automatically extract customer info
- Identify what was purchased
- Send a clean, branded email with the product file
- Promote your other courses, books, and tools

**📦 Use Case Example**
Product: AI-Powered Social Media Content Generator & Publisher
When a customer buys this product through PayPal, this automation:
1. Listens for a successful payment event
2. Fetches order details via API
3. Sends an HTML email with the template attached
4. Promotes your other offerings with embedded links

**🔧 Prerequisites**
You'll need:
- An n8n instance (self-hosted or n8n Cloud)
- A PayPal developer account
- PayPal OAuth2 credentials configured in n8n
- Your product hosted as a downloadable .json file (Oracle, Dropbox, GitHub, etc.)
- SMTP email credentials in n8n

**🧠 Step-by-Step Setup**
1. **Webhook Trigger** (Node: Webhook) - Listens for a POST request from PayPal's webhook for PAYMENT.CAPTURE.COMPLETED events. 📌 Add the webhook to your PayPal Developer App > Webhooks.
2. **Wait** (Node: Wait) - Adds a brief delay to ensure the payment is completely processed before continuing.
3. **Filter Event Type** (Node: Switch) - Processes only when the event is PAYMENT.CAPTURE.COMPLETED.
4. **Fetch Order Details** (Node: HTTP Request) - Retrieves the order information from PayPal's Orders API. URL format: https://api.paypal.com/v2/checkout/orders/{{ order_id }}
5. **Extract Email & Product Info** (Node: Set) - Extracts first name, last name, email address, and the purchased item name.
6. **Identify Product Purchased** (Node: Switch) - Checks if the product is "AI-Powered Social Media Content Generator & Publisher".
7. **Download Workflow File** (Node: HTTP Request) - Fetches the hosted workflow JSON from object storage (Oracle in this case).
8. **Convert to Downloadable File** (Node: Code) - Converts the JSON content into a binary file and attaches it (a sketch of this node follows below).
9. **Send Custom Email** (Node: Send Email) - Sends a rich HTML email to the buyer with their name, the file attachment, the product name, and helpful resource links:
   - 📘 Mastering n8n Course on Udemy
   - 📖 Step-by-Step Guide (n8n Book)
   - 🎓 n8n Video Tutorials (Free Course)
   - ☁️ Sign up for n8n Cloud (use code AMJID10)
   - 🎥 YouTube Video Walkthrough

**📚 Additional Learning Resources**
🚀 My Full Automation Suite. Explore more and master n8n with these resources:
- 🎓 Mastering n8n (Full Udemy Course)
- 📕 Get Your Step-by-Step Guide (n8n Book)
- 🎥 Get Step-by-Step Tutorials (Video Course)
- ☁️ Sign up for n8n Cloud
- 💡 Templates, Tools, and More
- 📺 YouTube Channel: SyncBricks

**🙋 Need Help or Customization?**
Reach out!
- Email: amjid@amjidali.com
- LinkedIn: linkedin.com/in/amjidali
- Website: syncbricks.com
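A minimal sketch of step 8's Code node, assuming the previous node's JSON output is the workflow template itself and that n8n's prepareBinaryData helper is available in the Code node; the file name is illustrative:

```js
// 'Convert to Downloadable File' sketch: wrap the fetched workflow JSON in
// binary data so the Send Email node can attach it.
const fileContent = JSON.stringify($json, null, 2);

const binaryData = await this.helpers.prepareBinaryData(
  Buffer.from(fileContent, 'utf-8'),
  'workflow-template.json', // illustrative file name shown to the buyer
  'application/json',
);

return [{ json: {}, binary: { data: binaryData } }];
```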
by Xavier
This workflow creates nested Google Drive folders from a path string (like Projects/Clients/Reports). It automatically handles the necessary folder lookups and creation steps required by Google Drive, then outputs the final folder's ID for immediate use.

**How it works**
This workflow streamlines the creation of nested folders in Google Drive:
1. Input: Provide a root_folder_id and a path (e.g., Projects/Clients/Reports) as input.
2. Path Parsing: The workflow splits the path into individual folder names (based on the / separator).
3. Iterative Check & Create: The workflow loops through each part of your path:
   - It searches within the current parent folder (starting with the root_folder_id) for a subfolder matching the name.
   - If found: it retrieves the existing folder's ID to use as the parent for the next iteration.
   - If not found: it creates a new folder with that name inside the current parent folder and uses the new folder's ID as the parent for the next iteration.
4. Output: Returns the Google Drive folder ID of the very last folder in the specified path (e.g., the ID for Reports in the example above). This ID can then be used directly in subsequent n8n nodes to upload files, create documents, or perform other actions within that specific folder.

**Set up steps**
Setting up this workflow requires configuring the connection to Google Drive and knowing where to start creating folders:
1. Connect your Google Drive account: Ensure you have a Google Drive credential configured in your n8n instance, then link your credentials in the workflow. There are 2 Google Drive nodes that will need to be updated.
2. Identify the starting folder ID: Determine the Google Drive folder ID where your nested structure should begin. You can use either the root of your Google Drive or a specific folder:
   - To use the root of Google Drive, simply set root_folder_id to root (also called "My Drive" in the UI).
   - To use a specific folder, open the folder in a web browser and look at the URL. The folder ID is in the last part of the URL: https://drive.google.com/drive/folders/THIS_IS_THE_FOLDER_ID.
3. Prepare inputs for execution: When running (or triggering) the workflow, you will need to provide:
   - google_drive_folder_id -> the root folder ID you identified in step 2.
   - desired_path -> the path you want to create (e.g., Projects/Clients/Reports).

Here's an example of how you can call this workflow in your other workflows:
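A minimal sketch, assuming the sub-workflow is invoked with the two inputs described above; findFolder and createFolder are hypothetical stand-ins for the workflow's two Google Drive nodes (search and create):

```js
// Inputs a parent workflow would pass via the Execute Workflow node
// (field names match the set-up steps above):
const inputs = {
  google_drive_folder_id: 'root', // or a specific folder ID
  desired_path: 'Projects/Clients/Reports',
};

// The iterative check-and-create logic from step 3, in plain JavaScript.
// findFolder/createFolder are hypothetical stand-ins for the two
// Google Drive nodes.
async function ensureNestedPath(rootFolderId, path) {
  let parentId = rootFolderId;
  for (const name of path.split('/').filter(Boolean)) {
    const existing = await findFolder(parentId, name);
    parentId = existing
      ? existing.id // reuse the folder that is already there
      : (await createFolder(parentId, name)).id; // create it if missing
  }
  return parentId; // ID of the deepest folder, e.g. 'Reports'
}
```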
by Luciano Gutierrez
**Instagram Auto-Comment Responder with AI Agent Integration**
Version: 1.1.0 ‧ n8n Version: 1.88.0+ ‧ License: MIT

A fully automated workflow for managing and responding to Instagram comments using AI agents. Designed to improve engagement and save time, this system listens for new Instagram comments, verifies and filters them, fetches relevant post data, processes valid messages with a natural-language AI, and posts context-aware replies directly on the original post.

**Key Features**
- 💬 **AI-Driven Engagement**: Intelligent responses to comments via a GPT-powered agent.
- ✅ **Webhook Verification**: Handles the Instagram webhook handshake to ensure secure integration.
- 📦 **Data Extraction**: Maps incoming payload fields (user ID, username, message text, media ID) for processing.
- 🚫 **Self-Comment Filtering**: Automatically skips comments made by the account owner to prevent loops.
- 📡 **Post Data Retrieval**: Fetches the media's id and caption from the Graph API (v22.0) before generating a reply.
- 🧠 **Natural Language Processing**: Uses a custom system prompt to maintain brand tone and context.
- 🔁 **Automated Replies**: Posts the AI-generated message back to the comment thread using Instagram's API.
- 🧩 **Modular Architecture**: Clear separation of steps via sticky notes and dedicated HTTP Request and Agent nodes.

**Use Cases**
- **Social Media Automation**: Keep followers engaged 24/7 with instant, relevant replies.
- **Community Building**: Maintain a consistent voice and tone across all interactions.
- **Brand Reputation Management**: Ensure no valid comment goes unanswered.
- **AI Customer Support**: Triage simple questions and direct followers to resources or support.

**Technical Implementation**
1. Webhook Verification (Nodes: Webhook + Respond to Webhook) - Echoes hub.challenge to confirm subscription and secure incoming events.
2. Data Extraction (Node: Set) - Maps payload fields into structured variables: conta.id, usuario.id, usuario.name, usuario.message.id, usuario.message.text, usuario.media.id, endpoint.
3. User Validation (Node: Filter) - Skips processing if conta.id equals usuario.id (self-comments).
4. Post Data Retrieval (Node: HTTP Request, "Get post data") - GET https://graph.instagram.com/v22.0/{{ $json.usuario.media.id }}?fields=id,caption&access_token={{ credentials }} - captures the media's caption for richer context in replies.
5. AI Response Generation (Nodes: AI Agent + OpenRouter Chat Model) - Uses a detailed system prompt with a profile persona (expert in AI & automations, friendly tone), input data (username, comment text, post caption), and filtering logic (spam, praise, questions, vague comments). Returns either the reply text or [IGNORE] for irrelevant content.
6. Posting the Reply (Node: HTTP Request, "Post comment") - POST {{ $json.endpoint }}/{{ $json.usuario.message.id }}/replies with message={{ $json.output }} - sends the AI answer back under the original comment (see the sketch after this section).

**Instructions for Setup**
1. Import the workflow: In n8n > Workflows > Import from File, upload the provided .json template.
2. Configure credentials:
   - Instagram Graph API (Header Auth or FacebookGraphApi) with instagram_basic and instagram_manage_comments scopes.
   - OpenRouter/OpenAI API key for the AI agent.
3. Customize the system prompt: Edit the AI Agent's prompt to adjust brand tone, language (Brazilian Portuguese), length, or emoji usage.
4. Test & activate: Publish a test comment on an Instagram post and verify each node's execution, ensuring the webhook, filter, data extraction, HTTP requests, and AI Agent respond as expected.
5. Extend & monitor: Add sentiment analysis or lead capture nodes as needed, and monitor execution logs for errors or rate-limit events.

**Tags**
Social Media • Instagram Automation • Webhook Verification • AI Agent • HTTP Request • Auto Reply • Community Management
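A minimal sketch of step 6 as a standalone call, assuming the endpoint variable resolves to the Graph API base shown in step 4; confirm the API version and parameters against Meta's current documentation:

```js
// Post the AI-generated reply under the original comment.
const commentId = '<IG_COMMENT_ID>'; // usuario.message.id in the workflow
const reply = 'Obrigado pelo comentário!'; // $json.output from the AI Agent

const res = await fetch(
  `https://graph.instagram.com/v22.0/${commentId}/replies`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      message: reply,
      access_token: '<ACCESS_TOKEN>',
    }),
  },
);
console.log(await res.json());
```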
by Yang
**👥 Who is this for?**
This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate the extraction and delivery of clean, sorted data.

**🧩 What problem is this solving?**
Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email, all triggered automatically when a new URL is added to Google Sheets.

**⚙️ What this workflow does**
This template automates an entire content scraping and delivery process:
1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order based on price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail

**🛠️ Setup**
1. Google Sheets
   - Create a sheet titled something like URLs.
   - Add your product listing URLs (e.g., http://books.toscrape.com).
   - Connect the Google Sheets trigger node to your sheet and ensure you have the proper credentials connected.
2. Dumpling AI
   - Create an account at Dumpling AI and generate your API key.
   - Set the HTTP method to POST and pass the URL dynamically from the Google Sheet.
   - Use Header Auth to include your API key in the request header.
   - Make sure "cleaned": "True" is included in the body for optimized HTML output.
3. HTML Nodes
   - The first HTML node extracts the main book container blocks using: .row > li
   - The second HTML node parses out the individual fields: title from h3 > a (via the title attribute) and price from .price_color
4. Sort Node
   - Sorts books by price in descending order.
   - Note: the price is extracted as a string (e.g., "£51.77"); make sure it is parsed into a number if you plan to use numeric filtering later (see the sketch below).
5. Convert to CSV
   - The JSON data is passed into a Convert node and transformed into a CSV file.
6. Gmail
   - Sends the CSV as an attachment to a designated email.

**🔄 How to customize this workflow**
- **Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links.
- **Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform.
- **Adjust sorting**: Sort alphabetically or based on another extracted value.
- **Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page.
- **Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets.

**⚠️ Dependencies and Notes**
- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
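Since the price arrives as a string like "£51.77", a small Code node placed before the Sort node can make it numeric for reliable sorting and filtering; the price field name matches the HTML node's output described above:

```js
// Convert the scraped price string (e.g. '£51.77') into a number so the
// Sort node can order items numerically.
return $input.all().map((item) => {
  const raw = String(item.json.price ?? '');
  // Strip the currency symbol and anything that isn't a digit, dot, or minus.
  const numericPrice = parseFloat(raw.replace(/[^0-9.\-]/g, ''));
  return { json: { ...item.json, numericPrice } };
});
```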