by Aitor | 1Node
**Who is this for?**
This template is designed for anyone who wants to integrate MCP with their AI agents using Airtable. Whether you're a developer, a data analyst, or an automation enthusiast, if you want to leverage MCP and Airtable in your n8n workflows, this template is for you.

**What problem is this workflow solving?**
This template caters to MCP beginners seeking a hands-on example and to developers looking to integrate the Airtable MCP service. When integrating MCP with Airtable, manually updating AI agents after changes to Airtable data on the MCP server is time-consuming and error-prone. This template automates the process, enabling the AI agent to instantly recognize changes made to Airtable on the MCP server. In data management, for example, it ensures that record updates or additions in Airtable are automatically detected by the AI agent. With detailed steps, it simplifies the integration process for all users.

**What this workflow does**
This workflow builds an MCP server and client using Airtable nodes in n8n. Any changes made to the Airtable base/table through the MCP server are automatically recognized by the MCP client in the workflow. You can add, delete, or modify records in Airtable via the MCP server, and the MCP client in the n8n workflow will immediately detect these changes without any manual intervention.

**Setup requirements**
- An active n8n account.
- Access to the Airtable API.
- A sample base and rows in Airtable that you can use for testing.
- An API key from your preferred LLM to power the AI agent.

**Step-by-step guide**
1. Create a new workflow in n8n: log in to your n8n account and create a new workflow.
2. Add Airtable nodes: search for and add the Airtable nodes you want the MCP client to have access to.
3. Set up the MCP server and client: use the appropriate n8n nodes to set up the MCP server and client, and connect the Airtable nodes to the MCP nodes as required.
4. Activate and test the workflow: once all credentials are configured and table data is synced, talk to the chat trigger and try adding rows, deleting records, or finding and updating cells.

**How to customize this workflow to your needs**
- **Modify the triggers:** change the conditions under which the MCP client detects changes. For example, detect changes only in specific fields or based on certain record values in Airtable.
- **Integrate with other services:** add more nodes to the workflow to integrate with other services, such as sending notifications to Slack or triggering further actions based on the detected Airtable changes.

**Need help?** Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
by phil
This workflow automates web scraping of Amazon search result pages: it retrieves raw HTML, cleans it to retain only the relevant product elements, uses an LLM to extract structured product data (name, description, rating, reviews, and price), and saves the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output. (Screenshots in the original listing show the URL to scrape and the result.)

**Who needs Amazon search result scraping?**
This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:
- **E-commerce analysts** – Track competitor pricing, ratings, and inventory trends.
- **Market researchers** – Collect data on product popularity and reviews for market analysis.
- **Data teams** – Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate marketers** – Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

**Why use this workflow?**
- **End-to-end automation** – From URL list to clean JSON output in Sheets.
- **Robust HTML cleaning** – Strips scripts, styles, unwanted tags, and noise.
- **Accurate structured parsing** – Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & repeatable** – Processes thousands of URLs in batches.

**Step-by-step: how this workflow scrapes Amazon**
1. **Get URLs from Google Sheets** – Reads a list of search result URLs.
2. **Loop over items** – Iterates through each URL in controlled batches.
3. **Fetch raw HTML** – Uses BrightData's Web Unlocker proxy to retrieve the page.
4. **Clean HTML** – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (see the sketch at the end of this section).
5. **Extract with LLM** – Passes the cleaned HTML through LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. **Save results** – Appends the JSON fields as columns to a "results" sheet in Google Sheets.

**Customization: tailor to your needs**
- **Adaptable sites** – This workflow can be adapted to other e-commerce or general websites, for example Walmart or eBay.
- **Whitelist tags** – Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema changes** – Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate data sink** – Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 **Prerequisites**
- **Google Sheets credentials** – OAuth credentials configured in n8n.
- **BrightData API token** – Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API key** – Configured for the LangChain node to call GPT-4.
- **n8n instance** – Self-hosted or cloud, with sufficient quota for HTTP requests and LLM calls.

🚀 **Installation & setup**
1. **Configure credentials** – In n8n, set up Google Sheets OAuth under "Credentials." Add the BrightData token as a new HTTP Request credential. Create an OpenRouter API key credential for the LangChain node.
2. **Import the workflow** – Copy the workflow JSON into n8n's "Import" dialog. Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders. Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. **Test & run** – Add a few Amazon search URLs to your "track" sheet, execute the workflow, and verify that product data appears in your "results" sheet. Tweak the batch size or parser schema as needed.

⚠ **Important**
- **API rate limits** – Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon's terms** – Ensure your scraping complies with Amazon's policies and legal requirements.

**Summary**
This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets, streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole
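For reference, here is a minimal sketch of what the Clean HTML step can look like in an n8n Code node. This is illustrative, not the template's exact code; the allowedTags list, the regexes, and the `data` input property are assumptions to adapt to your own setup:

```javascript
// Sketch of a "Clean HTML" Code node (run once for all items).
// Keeps only whitelisted tags and collapses whitespace before the LLM step.
const allowedTags = ['div', 'span', 'p', 'a', 'ul', 'ol', 'li', 'h1', 'h2', 'h3'];

let html = items[0].json.data; // raw HTML from the HTTP Request node ('data' is an assumed property name)

html = html
  .replace(/<!doctype[^>]*>/gi, '')            // drop the doctype
  .replace(/<script[\s\S]*?<\/script>/gi, '')  // drop scripts
  .replace(/<style[\s\S]*?<\/style>/gi, '')    // drop styles
  .replace(/<head[\s\S]*?<\/head>/gi, '')      // drop the head
  .replace(/<!--[\s\S]*?-->/g, '')             // drop comments
  .replace(/\s(?:class|style)="[^"]*"/gi, ''); // drop class/style attributes

// Remove any tag not in the whitelist, keeping its inner text
html = html.replace(/<\/?([a-z][a-z0-9]*)\b[^>]*>/gi, (match, tag) =>
  allowedTags.includes(tag.toLowerCase()) ? match : ''
);

// Collapse runs of whitespace
html = html.replace(/\s+/g, ' ').trim();

return [{ json: { cleanedHtml: html } }];
```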
by Yang
**Who is this for?**
This workflow is perfect for operations teams, accountants, e-commerce businesses, and finance managers who regularly process digital invoices and need to automate data extraction and record-keeping.

**What problem is this workflow solving?**
Manually reading invoice PDFs, extracting the relevant data, and entering it into spreadsheets is time-consuming and error-prone. This workflow automates that process: it watches a Google Drive folder, extracts structured invoice data using Dumpling AI, and saves the results into Google Sheets.

**What this workflow does**
1. Watches a specific Google Drive folder for new invoices.
2. Downloads the uploaded invoice file.
3. Converts the file into Base64 format.
4. Sends the file to Dumpling AI's extract-document endpoint with a detailed parsing prompt.
5. Parses Dumpling AI's JSON response using a Code node (see the sketch at the end of this section).
6. Splits the items array into individual rows using the Split Out node.
7. Appends each invoice item to a preformatted Google Sheet along with the full header metadata (order number, PO, addresses, etc.).

**Setup**
Google Drive
- Create or select a folder in Google Drive and place the folder ID in the trigger node.
- Make sure your n8n Google Drive credentials are authorized for access.

Google Sheets
- Create a Google Sheet with the following headers: Order number, Document Date, Po_number, Sold to name, Sold to address, Ship to name, Ship to address, Model, Description, Quantity, Unity price, Total price.
- Paste the Sheet ID and sheet name (Sheet1) into the Google Sheets node.

Dumpling AI
- Sign up at Dumpling AI.
- Go to your account settings and generate your API key.
- Paste this key into the HTTP header of the Dumpling AI request node.
- The endpoint used is: https://app.dumplingai.com/api/v1/extract-document
- The included prompt extracts: order number, document date, PO number, shipping/billing details, and detailed line items (model, quantity, unit price, total).

**How to customize this workflow to your needs**
- Adjust the Google Sheet fields to fit your invoice structure.
- Modify the Dumpling AI prompt if your invoices have additional or different data points.
- Add filtering logic if you want to handle different invoice types differently.
- Replace Google Sheets with Airtable or a database if preferred.
- Use a different trigger, such as an email attachment, if invoices arrive via email.
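A minimal sketch of what the parsing Code node might do, assuming Dumpling AI returns its extraction as JSON (the field names here are assumptions; inspect your actual API response and adjust):

```javascript
// Illustrative sketch only; adapt the field names to Dumpling AI's actual response.
const response = items[0].json;

// The extract-document output is assumed to arrive as a JSON string or object.
const parsed = typeof response.results === 'string'
  ? JSON.parse(response.results)
  : response.results;

// Pass header metadata plus the line-item array downstream;
// the Split Out node then fans `items` into one sheet row per line item.
return [{
  json: {
    orderNumber: parsed.order_number,
    documentDate: parsed.document_date,
    poNumber: parsed.po_number,
    soldToName: parsed.sold_to_name,
    items: parsed.items, // assumed array of { model, description, quantity, unit_price, total_price }
  },
}];
```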
by Yang
**Who is this for?**
This workflow is perfect for customer support teams, sales departments, and solopreneurs who receive frequent email enquiries and want to automate the initial response process using AI. If you spend too much time answering similar questions, this system helps you respond faster and more intelligently, without writing a single line of code.

**What problem is this workflow solving?**
Manually responding to repeated customer enquiries slows productivity and increases delays. This workflow classifies whether an incoming email is a genuine enquiry, analyzes the content with a LangChain-powered agent, fetches helpful context using Dumpling AI, and sends a personalized reply via Gmail, all within minutes.

**What this workflow does**
1. Listens for new incoming Gmail messages using the Gmail Trigger node.
2. Classifies whether the email is an enquiry using a GPT-4o classification prompt (an example appears at the end of this section).
3. Uses a Filter node to continue only if the email was classified as an enquiry.
4. Passes the email content to a LangChain agent, enhanced with memory, AI tools, and Dumpling AI to search for relevant information.
5. The agent constructs a smart, relevant response and sends it to the original sender via Gmail.

**Setup**
Connect Gmail
- Use the Gmail Trigger node to connect to the Gmail account that receives enquiries.
- Make sure Gmail OAuth2 credentials are authenticated.

Configure the Dumpling AI agent
- Sign up at Dumpling AI.
- Create an agent trained to search your help docs, site content, or FAQs.
- Copy your Dumpling agent ID and API key.
- Paste them into the "Dumpling AI Agent – Search for Relevant Info" HTTP Request node.

Set up the LangChain agent
- No extra setup is needed beyond connecting OpenAI credentials. GPT-4o is used for classification and reply generation.

Enable the Gmail reply node
- The final "Send Email Response via Gmail" node sends the AI-generated reply back to the same thread.

**How to customize this workflow to your needs**
- Change the classification prompt to include other email types such as "support", "complaint", or "sales".
- Add additional logic to CC someone or forward certain types of enquiries.
- Add a Notion or Google Sheets node to log the conversation for analytics.
- Replace Gmail with Outlook or another email provider by switching the nodes.
- Improve context by adding more AI tools, such as database queries or preloaded FAQs.
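For orientation, a hedged example of what the classification prompt could look like. This is not the template's exact prompt, and the `{{ subject }}` / `{{ body }}` placeholders are hypothetical; wire in your own fields:

```
You are an email classifier. Given the email below, reply with exactly one word:
"enquiry" if the sender is asking about our products, services, or pricing,
otherwise "other".

Subject: {{ subject }}
Body: {{ body }}
```

A single-word output keeps the downstream Filter node trivial: it only has to check for the string "enquiry".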
by Airtop
**Define Your ICP from Customer LinkedIn Profiles**

**Use case**
This automation helps marketing and sales teams define their Ideal Customer Profile (ICP) using real LinkedIn profiles of current high-fit customers. By enriching and analyzing profile data, it generates a clear ICP definition and a scoring methodology for future targeting.

**What this automation does**
It analyzes LinkedIn profiles of your existing customers and produces:
- A structured ICP definition
- A scoring model to evaluate future prospects
- A Google Boolean search string to find similar prospects

Input: LinkedIn profile URLs of existing high-fit customers (e.g., https://www.linkedin.com/in/amirashkenazi/)
Output: A Google Doc containing the ICP analysis and scoring methodology

**How it works**
1. Trigger: Waits for a chat message containing one or more LinkedIn profile URLs.
2. AI Agent: Parses and processes the URLs.
3. Airtop Data Enrichment: Uses Airtop to extract structured information from each LinkedIn profile (e.g., job title, company, experience, skills).
4. Memory: Maintains state between inputs for consistent analysis.
5. LLM Analysis: Uses Claude 3.7 Sonnet to synthesize the enriched data into a meaningful ICP.
6. Google Docs: Automatically creates a new doc with a timestamped title and appends the ICP definition.

**Setup requirements**
- An Airtop profile connected to LinkedIn; insert the profile name in the Airtop tool.
- Airtop API credentials. Get them free here.
- If you choose to save the profiles to Google Docs, you will need OAuth2 credentials (or just copy the ICP definition from the chat).

**Next steps**
- **Use the ICP for scoring**: Feed new LinkedIn profiles through the same Airtop enrichment and use the scoring function to evaluate fit.
- **Automate target discovery**: Plug the Boolean search output into LinkedIn, Google, or People Data Labs for ICP-matching lead generation.
- **Refine continuously**: Repeat the workflow as your customer base grows or segments evolve.

Read more about how to Define ICP from Customer Examples.
by Automate With Marc
✉️ **Telegram Email Agent with GPT + Gmail**

Category: Messaging / AI Agent
Level: Beginner-friendly
Tags: Telegram, Email Automation, AI Agent, Gmail, GPT Model

Watch the step-by-step video guide here: https://www.youtube.com/watch?v=nyI40s9QOuw&t=420s&pp=0gcJCb4JAYcqIYzv

🤖 What This Workflow Does
This workflow turns your Telegram bot into a personal email assistant powered by AI. With just a message on Telegram, users can:
- Send an email via Gmail
- Automatically generate the email content using OpenAI models
- Get confirmation or responses directly in Telegram

It's like ChatGPT meets Gmail, inside your Telegram chat.

🔧 How It Works
1. Telegram Trigger – Listens for incoming messages from your bot.
2. AI Agent – Processes the input using an OpenAI model and converts it into structured email content (To, Subject, Body); see the example at the end of this section.
3. Memory Node – Stores short-term context per user (via chat ID), so the agent can hold simple conversations.
4. Gmail Node – Sends the generated email using your Gmail account.
5. Telegram Node – Replies to the user confirming the output or status.

🧠 Why This Is Useful
Ever wanted to send an email while on the go, without typing the whole thing out in Gmail? This is a fast, intuitive, AI-powered way to:
- Dictate or draft emails from anywhere
- Create an AI-powered virtual assistant via Telegram
- Integrate n8n's LangChain Agent with real-world productivity use cases

🪜 Setup Instructions
1. Connect your Telegram bot via BotFather and add the credentials in n8n.
2. Set up your OpenAI API key (GPT-4o-mini recommended).
3. Add your Gmail OAuth credentials.
4. Activate the workflow and start messaging your bot!
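For the agent to drive the Gmail node reliably, it helps to instruct it to return a structured object. A minimal sketch of the shape it could produce (the field names are hypothetical, not the template's exact schema):

```json
{
  "to": "recipient@example.com",
  "subject": "Meeting follow-up",
  "body": "Hi Ana,\n\nThanks for your time today. As discussed, ..."
}
```

Each field can then be mapped into the Gmail node with n8n expressions such as `{{ $json.output.to }}`; the exact path depends on how your agent and output parser are configured.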
by Yang
**Who is this for?**
This template is designed for content creators, marketing teams, educators, and media managers who want to repurpose video content into written blog posts with visuals. It's ideal for anyone looking to automate the process of transforming YouTube videos into professional blog articles and custom images.

**What problem is this workflow solving?**
Creating written content from video material is time-consuming and manual. This workflow automates the entire pipeline: from detecting new YouTube video uploads to transcribing the audio, turning it into an engaging blog post, generating a matching visual, and saving both to Airtable. It saves hours of work while keeping your blog or social feed active and consistent.

**What this workflow does**
This automation listens for new YouTube videos added to a Google Drive folder, extracts the full transcript using Dumpling AI, and sends it to GPT-4o to generate a blog post and an image prompt. Dumpling AI then turns the prompt into a 16:9 visual. The blog post and visual are saved to Airtable for easy publishing or curation.

**Setup**
1. Google Drive trigger
   - Create a folder in Google Drive and upload your YouTube videos there.
   - Link this folder in the "Watch Folder for New YouTube Videos" node.
   - Enable polling every minute, or adjust as needed.
2. Download & prepare the video
   - The video is downloaded and converted into Base64 format by the next two nodes: Download Video File and Convert Downloaded Video to Base64 (see the sketch at the end of this section).
3. Transcription with Dumpling AI
   - The Base64 video is sent to Dumpling AI's extract-video endpoint.
   - You must have a Dumpling AI account and an API key with access to this endpoint: Dumpling AI Docs.
4. Generate blog content with GPT-4o
   - GPT-4o takes the transcript and generates a human-like blog post and a descriptive prompt for AI image generation.
   - Make sure your OpenAI credentials are configured.
5. Generate the visual
   - The prompt is passed to Dumpling AI's generate-ai-image endpoint using the FLUX.1-pro model. The result is a clean 1024x576 image.
6. Save to Airtable
   - Blog content is stored in the Content field in Airtable.
   - The image prompt is also added to the Attachments column as a visual reference.
   - Ensure the Airtable base and table are preconfigured with the correct field names.

**How to customize this workflow to your needs**
- Change the GPT prompt to alter the tone or format of the blog post (e.g., add bullet points or SEO tags).
- Modify the Dumpling AI prompt to generate different image styles.
- Add a scheduler or webhook trigger to run at different intervals or through other integrations.
- Connect the output to Ghost, Notion, or your CMS using additional nodes.

🧠 Sticky Note Summary
Part 1: Transcription & blog prompt
- Watches a Google Drive folder for new video uploads.
- Downloads and encodes the video.
- Transcribes the full audio with Dumpling AI.
- GPT-4o writes a blog post and a descriptive image prompt.

Part 2: Image generation & Airtable save
- Dumpling AI generates a visual from the image prompt.
- Blog content is saved to Airtable.
- The image prompt is patched into the Attachments field in the same record.

✅ Use this if you want to automate repurposing YouTube videos into blog content with zero manual work.
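As a reference for the Base64 step: with n8n's default in-memory binary mode, a Code node can read the downloaded file's buffer and encode it. A minimal sketch, assuming the binary property is named `data` (check your Download node's output):

```javascript
// Sketch of a Base64 conversion Code node (illustrative; 'data' is the assumed binary property).
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data'); // item index 0

return [{
  json: {
    base64Video: buffer.toString('base64'),
  },
}];
```

Note that large videos encoded as Base64 can consume significant memory, so keep test files small or raise your instance's limits.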
by Billy Christi
**Who is this for?**
This workflow is perfect for:
- Companies that manage invoices through Google Drive
- Business owners who want to minimize manual data entry and maximize accuracy
- Accounting teams and finance departments seeking to automate invoice processing

**What problem is this workflow solving?**
Processing invoices manually is time-consuming, error-prone, and inconsistent. This workflow solves those issues by:
- **Automating invoice processing** from detection to data extraction to storage
- **Improving accuracy** by using AI to extract key invoice data fields reliably
- **Reducing human workload** while maintaining compliance and consistency

**What this workflow does**
This workflow creates a fully automated invoice processing system by:
1. Monitoring a Google Drive folder for new PDF invoices in real time
2. Downloading the PDF files and extracting their content using OCR technology
3. Using AI (OpenAI) to parse and extract key invoice fields such as invoice number, date, total amount, vendor name, itemized details, tax, and category
4. Validating the extracted data against a structured JSON schema (see the example at the end of this section)
5. Storing the structured data in Google Sheets for easy access, review, and reporting

Key features:
- AI-powered extraction handles both text-based and scanned PDF invoices
- Provides a structured, searchable invoice database in Google Sheets
- Runs as frequently as you need, ensuring timely processing

**Setup**
1. Copy the Google Sheet template here: 👉 PDF Invoice Parser – Google Sheet Template
2. Connect your Google Drive account to the Drive Trigger and File Download nodes
3. Add your OpenAI API key in the AI Parser node
4. Link the Google Sheet in the final storage node
5. Drop a test invoice PDF into the monitored Drive folder

Required credentials:
- **OpenAI API key**
- **Google Drive credentials**
- **Google Sheets credentials**

**How to customize this workflow to your needs**
- **Modify the polling interval** (default: every minute) for higher or lower frequency.
- **Integrate with your accounting software** by adding nodes (e.g., QuickBooks, Xero).
- **Use an alternative LLM** such as Gemini or Claude.
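A minimal example of the kind of JSON Schema the validation step could enforce, based on the fields listed above. This is a sketch, not the template's exact schema; adjust required fields and types to your invoices:

```json
{
  "type": "object",
  "required": ["invoice_number", "invoice_date", "total_amount", "vendor_name"],
  "properties": {
    "invoice_number": { "type": "string" },
    "invoice_date":   { "type": "string", "description": "ISO 8601 date, e.g. 2024-05-31" },
    "total_amount":   { "type": "number" },
    "vendor_name":    { "type": "string" },
    "tax":            { "type": "number" },
    "category":       { "type": "string" },
    "line_items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "description": { "type": "string" },
          "quantity":    { "type": "number" },
          "unit_price":  { "type": "number" }
        }
      }
    }
  }
}
```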
by ibrhdotme
Learning something new? Endlessly searching for the best resources? This workflow finds top community-recommended learning resources on any topic from Hacker News, delivered to your inbox.

**How it works**
1. A user submits a topic they want to learn via a simple form.
2. The workflow searches for relevant "Ask HN" posts on Hacker News and extracts the top-level comments (see the sketch at the end of this section).
3. An LLM analyzes the comments and identifies the best learning resources.
4. A personalized email is sent to the user with a Markdown-formatted list of top recommendations, categorized by resource type (e.g., book, course, article) and difficulty level.

**Set-up steps**
1. Add your Google Gemini API credentials. You'll need to create a project and enable the Generative Language API.
2. Add your SMTP credentials for sending emails.
3. Customize the form and email subject (optional).
4. Activate the workflow.

Screenshots of the workflow, form, and email are included. Built on Day 03 as part of #100DaysOfAgenticAi. Fork it, tweak it, have fun!
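One common way to search Hacker News programmatically is the public Algolia HN API; whether this template uses it is an assumption on my part (it may use an HTTP Request node or another method). A generic Node.js 18+ sketch, run as an ES module:

```javascript
// Sketch: find "Ask HN" threads about a topic via the public Algolia HN API (no key needed).
const topic = 'rust';

const res = await fetch(
  'https://hn.algolia.com/api/v1/search?' +
    new URLSearchParams({ query: `learn ${topic}`, tags: 'ask_hn' })
);
const { hits } = await res.json();

// Each hit's full comment tree is available at:
//   https://hn.algolia.com/api/v1/items/<objectID>
for (const h of hits.slice(0, 5)) {
  console.log(h.objectID, h.title, `${h.points} points`);
}
```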
by NovaNode
**Who is this for?**
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering.

**What problem is this workflow solving?**
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI.

**What these workflows do**
Workflow 1: Document ingestion & indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-powered query & response
- Listens for incoming user questions (can be extended to a webhook).
- Converts questions to vector embeddings and performs a similarity search on the MongoDB vector store.
- Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce direct, context-aware answers.
- Maintains short-term conversation context using a memory buffer node.

**Setup**
Setting up vector embeddings
1. Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries.
4. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat
1. Configure the AI system prompt in the "Knowledge Base Agent" node to reflect your company's tone, answering style, and any business rules.
2. Update the workflow description and instructions to help users understand the chat's purpose and capabilities.
3. Connect the MongoDB collection used for vector search in the chat workflow, and update the vector search index if needed to match your setup.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing vector data,
- relevant metadata fields (e.g., document ID, source), and
- the same vector index name configured (e.g., data_index).

Search index example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
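The n8n MongoDB vector store node issues the similarity query for you. Purely for reference, an equivalent raw Atlas Search aggregation against the index above might look like the following sketch (assuming the knnVector index is queried via the knnBeta operator, with `docs` as a placeholder collection name):

```javascript
// Reference only: roughly what a similarity search against data_index looks like in mongosh.
// queryEmbedding is the 1536-dimension vector produced by OpenAI embeddings.
db.docs.aggregate([
  {
    $search: {
      index: 'data_index',
      knnBeta: {
        vector: queryEmbedding, // e.g. [0.012, -0.034, ...]
        path: 'embedding',
        k: 5, // top 5 most similar chunks
      },
    },
  },
  { $project: { text: 1, source: 1, doc_id: 1, score: { $meta: 'searchScore' } } },
]);
```

The retrieved `text` chunks are what the GPT-4o-mini agent receives as context for generating its answer.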
by Yang
👤 **Who is this for?**
This workflow is ideal for social media managers, personal brand strategists, ghostwriters, and founders who want to post regularly on LinkedIn without spending hours writing from scratch. It's also useful for marketing agencies and assistants looking to automate consistent post creation using curated articles as source material.

🧩 **What problem does this workflow solve?**
Manually reading multiple articles, extracting key insights, and writing a clean, professional LinkedIn post is time-consuming. This workflow automates everything: pulling topics, finding related articles, summarizing them with AI, and even generating a matching image to accompany the post. The result is faster content turnaround, more consistency, and less manual effort.

🔁 **What this workflow does**
The workflow starts manually and retrieves one topic marked "To do" from a Google Sheet. That topic is used as a search term for Dumpling AI's search endpoint, which scrapes and returns the top three article contents related to the topic. These articles are sent to a LangChain agent powered by GPT-4o, which analyzes and summarizes the content into a LinkedIn post in a friendly, insightful tone and also generates an image prompt for the post. A Set node then extracts the post and image prompt, the prompt is sent to Dumpling AI's image generation endpoint, which returns an image URL, and finally the post text, image prompt, image URL, and status update ("created") are saved back to the original row in Google Sheets.

🛠️ **Workflow breakdown**
1. Manual Trigger – Starts the automation.
2. Google Sheets (Get Topic) – Finds the first row in your content pipeline sheet where the status is "To do".
3. HTTP Request (Dumpling AI Search) – Uses the topic as a search query to pull three article contents via Dumpling AI's API.
4. Set LangChain GPT Model – Defines GPT-4o as the LLM for the LangChain agent.
5. LangChain Agent (Summarize & Generate) – Summarizes all three articles and generates a LinkedIn post and a related image prompt.
6. Set (Extract Data) – Extracts postText and imagePrompt from the LangChain agent output (see the expression sketch at the end of this section).
7. HTTP Request (Dumpling Image Gen) – Sends imagePrompt to Dumpling AI's image generation endpoint.
8. Update Google Sheets – Writes the post, image prompt, and image URL back to the sheet and changes the row status to "created".

⚙️ **Setup instructions**
Dumpling AI
- Sign up at Dumpling AI.
- Get your API key and connect it in the HTTP Request nodes (search and image endpoints).
- Use the /search endpoint to retrieve article content.
- Use the /generate-image endpoint to create the image.

Google Sheets
- Create a spreadsheet with the columns: topic, status, postText, imagePrompt, imageURL.
- Add sample topics and set their status to "To do".

LangChain (GPT-4o)
- Connect your OpenAI credentials to n8n.
- Make sure GPT-4o is available in your OpenAI account.
- Use the LangChain node to process multi-input summarization and generate a social media caption.

Customize the prompt (optional)
- Adjust the Set node to tweak the input format sent to the LangChain agent.
- Add constraints like tone, hashtags, or emojis to fit your brand style.

🧠 **How to customize this workflow**
- Change the content source (RSS feed, Notion DB, etc.) instead of Google Sheets.
- Add a scheduler node to run this automatically every morning or weekly.
- Use Airtable instead of Google Sheets for more control and filtering.
- Send the final post to LinkedIn using Buffer or the LinkedIn API.
- Add a Telegram or Slack notification when new content is ready for approval.
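For the extraction step, assuming the agent is instructed to answer with a JSON object like `{ "postText": "...", "imagePrompt": "..." }`, the Set node fields could use expressions along these lines (a sketch; the exact output path depends on your agent's output parser):

```javascript
// Illustrative n8n expressions for the "Set (Extract Data)" node fields.
// postText field:
{{ $json.output.postText }}
// imagePrompt field:
{{ $json.output.imagePrompt }}
```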
by Lukas Kunhardt
**Who is this for?**
This template is for any website owner, digital agency, or compliance officer operating within the European Union. It's designed for users who need to comply with the upcoming European Accessibility Act (EAA) but may not have deep technical or legal expertise.

**Disclaimer**
This workflow uses an npm package called "cheerio" to work with the specified URL's HTML code (a usage sketch appears at the end of this section). Installing packages is only possible on self-hosted n8n instances.

**What problem is this workflow solving? / Use case**
Starting June 28, 2025, the European Accessibility Act (EAA) mandates that most websites offering products or services in the EU must be accessible and publish a formal Accessibility Statement. Manually creating this legal document is complex, requiring both a technical site analysis and knowledge of specific legal requirements. This workflow automates the generation of a compliant first draft, saving significant time and effort.

**What this workflow does**
After you input your details (such as the website URL and API key) in a central configuration node, this workflow automatically:
1. Scans your live website for accessibility issues using the powerful WAVE API.
2. Processes the scan results to identify the main problem areas.
3. Instructs a Google Gemini AI agent with a specialized legal prompt based on the European Accessibility Act.
4. Generates a formal Accessibility Statement in your desired language.
5. Saves the statement as an .html file and sends it to you as an email attachment.

**Setup**
This workflow is designed for a quick setup:
1. Configure all variables: Click the 'CHANGE THESE: dependencies' node. This is your central control panel. Fill in all the values, including your WAVE API key, the URL to analyze, company details, and the desired output language.
2. Set up credentials: You will need to connect your Google accounts for the workflow to run.
   - Gemini: Click the 'gemini 2.5 pro' node, click the gear icon (⚙️) next to the "Credential" field, and connect your Google Gemini API credentials.
   - Gmail: Click the 'Send report by email' node and connect your Gmail account to allow sending the final report.
3. Activate & execute: Make sure the workflow is active in the top-right corner, then click 'Execute Workflow' to run your first analysis.

**How to customize this workflow to your needs**
This template is a great starting point for any EU country. Here's how to adapt it:
- **Localize for your country (important!):** The generated statement contains a placeholder for the "Enforcement Procedure". You *must* edit the prompt in the **'Accessibility Statement Generator'** node to replace this placeholder with the name of, and a link to, your country's official enforcement body.
- **Change the AI:** Swap the Google Gemini node for any other AI model, such as OpenAI or Anthropic Claude, by replacing the node and connecting it to the agent.
- **Change the trigger:** Replace the **'When clicking 'Execute workflow''** node with a Form Trigger or Webhook Trigger to run this workflow based on external inputs, for example, to offer the analysis as a service to your clients.
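Since the disclaimer mentions cheerio, here is a minimal sketch of how a Code node on a self-hosted instance could use it to inspect a page's HTML. The template's actual cheerio logic may differ, and the `data` input property is an assumption; allowing the package requires setting NODE_FUNCTION_ALLOW_EXTERNAL=cheerio:

```javascript
// Sketch of using cheerio in an n8n Code node (self-hosted only).
const cheerio = require('cheerio');

const html = items[0].json.data; // page HTML from a preceding HTTP Request node (assumed property)
const $ = cheerio.load(html);

// Examples of quick accessibility-relevant checks:
const missingAltImages = $('img:not([alt])').length;        // images without alt text
const pageLang = $('html').attr('lang') || 'not declared';  // declared page language
const title = $('title').text().trim();

return [{ json: { title, pageLang, missingAltImages } }];
```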