by Cheng Siong Chin
**Introduction**

Exams create significant stress for students. This workflow automates syllabus analysis and predicts exam trends using AI, helping educators and students better prepare for GCE 'O' Level Mathematics in Singapore.

**How It Works**

Trigger → Fetch Syllabus → Extract & Prepare Data → Load History → AI Analyze → Parse → Format → Convert → Publish → Notify

**Workflow Template**

Manual Trigger → Fetch O-Level Math Syllabus → Extract Syllabus Text → Prepare Analysis Data → Load Historical Context → AI Analysis Agent → Parse AI Output → Format Report → Convert to HTML → Publish to WordPress → Send Slack Summary

**Data Collection & AI Processing**

HTTP retrieves the O-Level Math syllabus from SEAB and extracts the text. Loads 3–5 years of exam history. OpenRouter compares the syllabus vs trends and predicts topics with confidence scores.

**Report Generation & Publishing**

Formats AI insights to Markdown (topics, trends, recommendations), converts to HTML. Auto-publishes to WordPress and sends a Slack summary with the report link.

**Workflow Steps**

1. Fetch & extract the syllabus from the SEAB site
2. Load historical exam content
3. AI analyzes syllabus + trends via an OpenRouter model
4. Parse and format the AI output to Markdown/HTML
5. Auto-publish the report to WordPress and Slack

**Setup Instructions**

1. Connect the HTTP node to the SEAB syllabus URL
2. Configure the OpenRouter AI model with an API key
3. Set WordPress and Slack credentials for publishing

**Prerequisites**

OpenRouter account, WordPress API access, Slack webhook, SEAB syllabus link.

**Use Cases**

Predict 2025 GCE Math topics, generate AI insights, publish summaries for educators.

**Customization**

Adapt for other subjects or boards by changing the syllabus source and the analysis prompt.

**Benefits**

Enables fast, data-driven exam forecasting and automated report publication.
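For reference, the Parse AI Output / Format Report steps above can be pictured as an n8n Code node along these lines. This is a minimal sketch, assuming the AI agent returns JSON with `topics` (each with `name`, `confidence`, `rationale`) and `recommendations`; the exact field names depend on your prompt and parser, so treat them as assumptions.

```javascript
// n8n Code node (Run Once for All Items) - minimal sketch, field names are assumptions
const analysis = items[0].json; // parsed AI output from the previous node

const lines = [
  '# GCE O-Level Mathematics - Predicted Exam Trends',
  '',
  '## Predicted Topics',
];

for (const topic of analysis.topics ?? []) {
  // e.g. "- Quadratic Functions (confidence: 82%) - appears frequently in recent papers"
  lines.push(`- ${topic.name} (confidence: ${topic.confidence}%) - ${topic.rationale ?? ''}`);
}

lines.push('', '## Recommendations');
for (const rec of analysis.recommendations ?? []) {
  lines.push(`- ${rec}`);
}

return [{ json: { reportMarkdown: lines.join('\n') } }];
```

The resulting `reportMarkdown` field can then feed the Convert to HTML and Publish to WordPress steps.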
by Richard Besier
**📤 Search Products from Facebook Ads on Amazon**

Once connected, this automation automatically scrapes Facebook ads from a specific Facebook Ad Library URL and searches for the same product on Amazon. It can be useful for Amazon FBA or dropshipping.

**🔨 Setup**

This workflow is connected to two Apify scrapers. Make sure to connect the two scrapers mentioned in the blue and orange boxes, each with its specific API endpoint.

**👋 Need Help?**

If you need further help, or want a specific automation built for you, feel free to contact me via richard@advetica-systems.com.
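To sanity-check that a scraper endpoint is wired correctly, an Apify "run synchronously and get dataset items" call generally has the shape below. This is a hedged sketch only: the actor ID and input fields are placeholders, not the exact scrapers referenced in the blue and orange boxes.

```javascript
// n8n Code node sketch: build an Apify run-sync call (placeholders, not the template's real actors)
const actorId = 'ACTOR_ID';          // placeholder
const token = 'YOUR_APIFY_TOKEN';    // placeholder

const url = `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items?token=${token}`;

const body = {
  // example input for a Facebook Ad Library scraper; adjust to the actor's input schema
  startUrls: [{ url: 'https://www.facebook.com/ads/library/?q=example' }],
};

return [{ json: { url, body } }];
```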
by DIGITAL BIZ TECH
**AI Website Scraper & Company Intelligence**

**Description**

This workflow automates the process of transforming any website URL into a structured, intelligent company profile. It is triggered by a form that lets a user submit a website and choose between a "basic" or "deep" scrape. The workflow extracts key information (mission, services, contacts, SEO keywords), stores it in a structured Supabase database, and archives a full JSON backup to Google Drive. A secondary AI agent automatically finds and saves competitors for each company, building a rich, interconnected database of company intelligence.

**Quick Implementation Steps**

1. **Import the Workflow:** Import the provided JSON file into your n8n instance.
2. **Install Custom Community Node:** You must install the community node from 👉 https://www.npmjs.com/package/n8n-nodes-crawl-and-scrape. Firecrawl n8n documentation: 👉 https://docs.firecrawl.dev/developer-guides/workflow-automation/n8n
3. **Install Additional Nodes:** n8n-nodes-crawl-and-scrape and n8n-nodes-mcp (for the Firecrawl MCP).
4. **Set up Credentials:** Create credentials in n8n for the Firecrawl API, Supabase, Mistral AI, and Google Drive.
5. **Configure API Key (CRITICAL):** Open the Web Search tool node, go to Parameters → Headers, and replace the hardcoded Tavily AI API key with your own.
6. **Configure Supabase Nodes:** Assign your Supabase credential to all Supabase nodes. Ensure table names (e.g., companies, competitors) match your schema.
7. **Configure Google Drive Nodes:** Assign your Google Drive credential to the Google Drive2 and save to Google Drive1 nodes. Select the correct Folder ID.
8. **Activate Workflow:** Turn on the workflow and open the Webhook URL in the "On form submission" node to access the form.

**What It Does**

- **Form Trigger:** Captures user input: "Website URL" and "Scraping Type" (basic or deep).
- **Scraping Router:** A Switch node routes the flow: Deep Scraping → AI-based MCP Firecrawler agent; Basic Scraping → Crawlee node.
- **Deep Scraping (Firecrawl AI Agent):** Uses Firecrawl and Tavily Web Search. Extracts a detailed JSON profile: mission, services, contacts, SEO keywords, etc.
- **Basic Scraping (Crawlee):** Uses the Crawl and Scrape node to collect raw text. A Mistral-based AI extractor structures the data into JSON.
- **Data Storage:** Stores structured data in Supabase tables (companies, company_basicprofiles) and archives a full JSON backup to Google Drive.
- **Automated Competitor Analysis:** Runs after a deep scrape. Uses Tavily web search to find competitors (e.g., from Crunchbase) and saves competitor data to Supabase, linked by company_id.

**Who's It For**

- **Sales & Marketing Teams:** Enrich leads with deep company info.
- **Market Researchers:** Build structured, searchable company databases.
- **B2B Data Providers:** Automate company intelligence collection.
- **Developers:** Use as a base for RAG or enrichment pipelines.

**Requirements**

- **n8n instance** (self-hosted or cloud)
- **Supabase account** with tables like companies, competitors, social_links, etc.
- **Mistral AI API key**
- **Google Drive credentials**
- **Tavily AI API key**
- (Optional) Custom nodes: n8n-nodes-crawl-and-scrape

**How It Works — Flow Summary**

1. **Form Trigger:** Captures "Website URL" and "Scraping Type".
2. **Switch Node:** deep → MCP Firecrawler (AI Agent); basic → Crawl and Scrape node.
3. **Scraping & Extraction:** Deep path: Firecrawler → JSON structure. Basic path: Crawlee → Mistral extractor → JSON.
4. **Storage:** Save JSON to Supabase and archive it in Google Drive.
5. **Competitor Analysis (deep only):** Finds competitors via Tavily and saves them to the Supabase competitors table.
6. **End:** Finishes with a No Operation node.
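For orientation, the deep-scrape agent is expected to return a structured profile roughly like the object below before it is written to Supabase. This is an illustrative sketch only; the real keys are defined by the agent's prompt and the extractor's JSON Schema, so every field name here is an assumption.

```javascript
// Illustrative shape of the extracted company profile (all field names are assumptions)
const exampleProfile = {
  company_name: 'Acme Analytics',
  website_url: 'https://acme.example',
  mission: 'Make data accessible to every team.',
  services: ['Dashboards', 'Data pipelines', 'Consulting'],
  contacts: { email: 'hello@acme.example', phone: '+1 555 0100' },
  seo_keywords: ['analytics platform', 'business intelligence'],
  social_links: ['https://www.linkedin.com/company/acme-example'],
};

// A downstream Code node could split array fields into one item per row
// before inserting into relational tables such as social_links.
return exampleProfile.social_links.map((url) => ({
  json: { company_id: 'COMPANY_ID_FROM_PREVIOUS_NODE', url },
}));
```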
**How To Set Up**

1. Import the workflow JSON.
2. Install the community nodes (especially n8n-nodes-crawl-and-scrape from npm).
3. Configure credentials (Supabase, Mistral AI, Google Drive).
4. Add your Tavily API key.
5. Connect the Supabase and Drive nodes properly.
6. Fix the disconnected "basic" path if needed.
7. Activate the workflow.
8. Test via the webhook form URL.

**How To Customize**

- **Change LLMs:** Swap Mistral for OpenAI or Claude.
- **Edit Scraper Prompts:** Modify the system prompts in the AI agent nodes.
- **Change Extraction Schema:** Update the JSON Schema in the extractor nodes.
- **Fix Relational Tables:** Add an Items node before the Supabase inserts for arrays (social links, keywords).
- **Enhance Automation:** Add email/Slack notifications, or replace the form trigger with a Google Sheets trigger.

**Add-ons**

- **Automated Trigger:** Run on new sheet rows.
- **Notifications:** Email or Slack alerts after completion.
- **RAG Integration:** Use the Supabase database as a chatbot knowledge source.

**Use Case Examples**

- **Sales Lead Enrichment:** Instantly get company + competitor data from a URL.
- **Market Research:** Collect and compare companies in a niche.
- **B2B Database Creation:** Build a proprietary company dataset.

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| Form Trigger 404 | Workflow not active | Activate the workflow |
| Web Search Tool fails | Missing Tavily API key | Replace the placeholder key |
| FIRECRAWLER / find competitor fails | Missing MCP node | Install n8n-nodes-mcp |
| Basic scrape does nothing | Switch node path disconnected | Reconnect the "basic" output |
| Supabase node error | Wrong table/column names | Match the schema exactly |

**Need Help or More Workflows?**

Want to customize this workflow for your business or integrate it with your existing tools? Our team at Digital Biz Tech can tailor it precisely to your use case, from automation logic to AI-powered enhancements.

Contact: shilpa.raju@digitalbiz.tech
For more such offerings, visit us: https://www.digitalbiz.tech
by Davide
🤹🤖 This workflow (AI Document Generator with Anthropic Agent Skills and Uploading to Google Drive) automates the process of generating, downloading, and storing professionally formatted files (PDF, DOCX, PPTX, XLSX) using the Anthropic Claude API and Google Drive. It connects user prompts with the Anthropic API to generate professional documents in multiple formats, then automatically retrieves the files and uploads them to Google Drive — providing a complete AI-powered document automation system.

**Key Advantages**

- ✅ **Full Automation:** From user input to file delivery, the entire pipeline — creation, extraction, download, and upload — runs without manual intervention.
- ✅ **Multi-Format Support:** Handles four major business document types: PPTX (Presentations), PDF (Reports), DOCX (Documents), XLSX (Spreadsheets).
- ✅ **Professional Output:** Each format includes tailored Claude system prompts with detailed formatting and design principles (layout structure, typography, visual hierarchy, consistency and readability), so every file produced follows professional standards.
- ✅ **Easy Customization:** You can modify the prompt templates or add new Skills using the "Get All Skills" node. The form and switch logic make it simple to extend with additional file types or workflows.
- ✅ **Seamless Cloud Integration:** Generated files are automatically uploaded to a Google Drive folder, enabling centralized storage, easy sharing and access, and automatic organization.
- ✅ **Reusable and Scalable:** This workflow can serve as a foundation for automated report generation, client deliverables, internal documentation systems, and AI-driven content creation pipelines.

**How it Works**

This n8n workflow enables users to create professional documents using Anthropic's Claude AI and automatically save them to Google Drive. The process works as follows:

1. **Form Trigger:** The workflow starts with a web form where users submit a prompt and select their desired file type (PPTX, PDF, DOCX, or XLSX).
2. **Document Type Routing:** A Switch node routes the request to the appropriate document creation node based on the selected file type.
3. **AI Document Generation:** Each document type has a dedicated HTTP Request node that calls Anthropic's Messages API with a system prompt tailored to that document type (PowerPoint, PDF, Word, or Excel), the user's input prompt, the matching Anthropic skill (pptx, pdf, docx, xlsx) for specialized document creation, and code execution capabilities for complex formatting.
4. **File ID Extraction:** Custom JavaScript Code nodes extract the generated file ID from Anthropic's response, using a recursive search to handle nested response structures.
5. **File Download:** HTTP Request nodes download the actual file content from Anthropic's Files API using the extracted file ID.
6. **Cloud Storage:** Finally, the downloaded files are automatically uploaded to a specified Google Drive folder, organized and ready for use.
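As a reference for the File ID Extraction step, a Code node along these lines performs the recursive search described above. It is a minimal sketch: it assumes the generated file is referenced by a `file_id` key somewhere in the Messages API response, which may differ from the exact key the template looks for.

```javascript
// n8n Code node sketch: recursively search Anthropic's response for a file ID.
// Assumes the file reference appears under a "file_id" key; adjust if your response differs.
function findFileId(value) {
  if (value === null || typeof value !== 'object') return null;
  if (typeof value.file_id === 'string') return value.file_id;
  for (const child of Object.values(value)) {
    const found = findFileId(child);
    if (found) return found;
  }
  return null;
}

const response = items[0].json; // raw Messages API response from the previous HTTP Request node
const fileId = findFileId(response);

if (!fileId) {
  throw new Error('No file_id found in the Anthropic response');
}

return [{ json: { fileId } }];
```

The extracted `fileId` then feeds the Files API download request in the next step.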
**Set Up Steps**

1. **API Configuration:**
   - Set up HTTP Header authentication with the Anthropic API.
   - Add the x-api-key header with your Anthropic API key.
   - Configure the required headers: anthropic-version and anthropic-beta.
2. **Google Drive Integration:**
   - Connect Google Drive OAuth2 credentials.
   - Specify the target folder ID where documents will be uploaded.
   - Ensure proper permissions for file upload operations.
3. **Custom Skills (Optional):**
   - Use the "Get All Skills" node to retrieve available custom skills.
   - Update the skill_id fields in the JSON bodies if using custom Anthropic skills.
   - Modify the form dropdown to include custom skill options if needed.
4. **Form Configuration:** The form is pre-configured with a prompt field and file type selection; no additional setup is required for basic functionality.
5. **Execution:**
   - Activate the workflow and access the form trigger URL.
   - Submit prompts and select the desired output formats.
   - Generated files will automatically appear in the specified Google Drive folder.

The workflow handles the entire process from AI-powered document creation to cloud storage, providing a seamless automated solution for professional document generation.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
by Eddy Medina
**What does this workflow do?**

This workflow exports the names of all Dialogflow intents from your agent, together with their priority levels, directly into a Google Sheets spreadsheet. It is triggered via Telegram and includes visual indicators (emojis) for priority levels.

**📜 Overview**

- 🔔 **Activation:** Triggered when a validated user sends the keyword (e.g. "backup") via Telegram.
- 📥 **Data Retrieval:** Fetches all intents of the specified Dialogflow agent using the Dialogflow API.
- ⚙️ **Processing:** Transforms each intent into an n8n-compatible item, extracts the displayName and priority of each intent, and assigns an emoji and descriptive label based on the priority tier: 🔴 Highest, 🟠 High, 🔵 Normal, 🟢 Low, 🚫 Ignore.
- 📑 **Storage:** Appends each intent (name, priority number, emoji, and description), along with the current date and time, to a Google Sheets document.
- 📩 **Notification:** Sends a single confirmation message to the Telegram user once insertion is complete (using Execute Once).

**🛠️ How to install and configure**

1. **Import the workflow:** Upload the .json into your n8n instance.
2. **Connect Telegram:** Add your Telegram bot credentials and configure the Validación de usuario por ID node with your Telegram ID.
3. **Configure Dialogflow:** Authenticate using a Google Service Account API credential. Then, in the Obtiene datos de los intents node, replace the example project ID (TU_PROJECT_ID) with your actual Dialogflow agent's project ID.
4. **Connect Google Sheets:** Authorize Google Sheets via OAuth2 and select your destination document/sheet in the Añadir fila en la hoja node.
5. **Customize the trigger keyword:** Adjust the command text (default "backup") if needed.
6. **Activate the workflow:** Ensure the webhook is correctly set up in Telegram before enabling the workflow.

**👥 Who is this for?**

- 🤖 Bot administrators who need quick backups of Dialogflow intent names.
- 🌐 Teams managing multilingual or multi-intent agents who want priority oversight.
- 💻 Development teams needing an automated way to audit or version intent configurations regularly.

**💡 Use Cases**

- ⚙️ Back up intents periodically to monitor changes over time.
- 📊 Visualize priority assignment in a spreadsheet for analysis or team discussion.
- 📖 Document the conversational structure for onboarding or knowledge transfer.
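For reference, the emoji mapping in the Processing step can be done in a Code node roughly like this. The numeric priority values below are the usual Dialogflow ES tiers and are an assumption here; check the values your agent actually returns.

```javascript
// n8n Code node sketch: map a Dialogflow intent priority to an emoji + label.
// The numeric tiers are assumptions based on Dialogflow ES defaults.
const TIERS = [
  { min: 1000000, emoji: '🔴', label: 'Highest' },
  { min: 750000,  emoji: '🟠', label: 'High' },
  { min: 500000,  emoji: '🔵', label: 'Normal' },
  { min: 0,       emoji: '🟢', label: 'Low' },
];

return items.map((item) => {
  const { displayName, priority } = item.json;
  const tier =
    priority < 0
      ? { emoji: '🚫', label: 'Ignore' } // assumed: negative priority marks ignored intents
      : TIERS.find((t) => priority >= t.min) ?? { emoji: '🟢', label: 'Low' };

  return {
    json: {
      name: displayName,
      priority,
      emoji: tier.emoji,
      priorityLabel: tier.label,
      exportedAt: new Date().toISOString(),
    },
  };
});
```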
by Fahmi Fahreza
**Analyze Trustpilot & Sitejabber sentiment with Decodo + Gemini to Sheets**

Sign up for Decodo HERE for a discount.

This template scrapes public reviews from Trustpilot and Sitejabber with a Decodo tool, converts the findings into flat, spreadsheet-ready JSON, generates a concise sentiment summary with Gemini, and appends everything to Google Sheets. It's ideal for reputation snapshots, competitive analysis, or lightweight BI pipelines that need structured data and a quick narrative.

**Who's it for?**

Marketing teams, growth analysts, founders, and agencies who need repeatable review collection and sentiment summaries without writing custom scrapers or manual copy/paste.

**How it works**

1. A Form Trigger collects the Business Name or URL.
2. Set (Config Variables) stores business_name, spreadsheet_id, and sheet_id.
3. The Agent orchestrates the Decodo tool and enforces a strict JSON schema with at most 10 reviews per source.
4. Gemini writes a succinct summary and recommendations, noting missing sources with: "There's no data in this website."
5. A Merge node combines the JSON fields with the narrative.
6. Google Sheets appends a row.

**How to set up**

1. Add Google Sheets, Gemini, and Decodo credentials in the Credential Manager.
2. Replace (YOUR_SPREADSHEET_ID) and (YOUR_SHEET_ID) in Set: Config Variables.
3. In Google Sheets, select Define below and map each column explicitly.
4. Keep the parser and agent connections intact to guarantee flat JSON.
5. Activate, open the form URL, submit a business, and verify the appended row.
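As a rough idea of the flat, spreadsheet-ready shape the agent is asked to produce, a Code node could enforce the 10-review cap per source and build a single row like this. All field names here are illustrative assumptions; the real flat schema is defined by the agent and its output parser.

```javascript
// n8n Code node sketch: enforce "max 10 reviews per source" and build one flat row.
// All field names are assumptions; the real schema is set by the agent/parser in this template.
const data = items[0].json; // e.g. { business_name, trustpilot: [...], sitejabber: [...] }

const cap = (list = []) => list.slice(0, 10);
const avg = (list) =>
  list.length ? list.reduce((sum, r) => sum + (r.rating ?? 0), 0) / list.length : '';

const trustpilot = cap(data.trustpilot);
const sitejabber = cap(data.sitejabber);

return [{
  json: {
    business_name: data.business_name,
    trustpilot_count: trustpilot.length,
    trustpilot_avg_rating: avg(trustpilot),
    sitejabber_count: sitejabber.length,
    sitejabber_avg_rating: avg(sitejabber),
  },
}];
```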
by Pixcels Themes
**Who's it for**

This template is ideal for recruiters, founders, sales teams, and lead-generation specialists who want to quickly collect LinkedIn profiles based on role, industry, and region. It is perfect for users who want profile lists for outreach, research, hiring, or market analysis without manually searching LinkedIn.

**What it does / How it works**

This workflow begins with a web form where you enter three inputs: position, industry, and region. Once the form is submitted, the workflow performs a Google Custom Search query restricted to LinkedIn profile URLs. The results are processed to extract structured profile information such as:

- Name
- Job title (cleaned using custom logic)
- LinkedIn profile link
- Description / bio snippet
- Profile image URL

The workflow automatically handles pagination by detecting whether more results are available and continues fetching until the limit is reached. All extracted profiles are appended or updated in a Google Sheet so you always maintain an organized and deduplicated list.

**Requirements**

- Google Sheets OAuth2 credentials
- Google Custom Search API key
- Google CSE (Custom Search Engine) ID
- A Google Sheet with the required columns (name, title, profile link, description, image link, searched position, searched industry, searched region)

**How to set up**

1. Connect your Google Sheets credentials.
2. Add your Custom Search API key and CSE ID inside the HTTP Request node.
3. Select your target Google Sheet in the "Append or update row in sheet" node.
4. Open the form URL and submit your position, industry, and region.
5. Run the workflow to begin scraping profiles.

**How to customize the workflow**

- Modify the search query structure for niche industries
- Add enrichment tools (Hunter.io, Clearbit, People Data)
- Expand the pagination limit beyond the default
- Add filters to remove non-relevant results
- Output data to CRM tools like HubSpot, Notion, Airtable, or Sheets
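As a reference for the search and pagination logic described above, the Custom Search request and the "more results?" check look roughly like the sketch below. The query shape (site:linkedin.com/in) and the response fields follow the public Custom Search JSON API, but the template's own HTTP Request node may be configured differently, so treat the specifics as assumptions.

```javascript
// n8n Code node sketch: build a Google Custom Search request restricted to LinkedIn profiles
// and compute the next page offset. Parameter names follow the public Custom Search JSON API.
const { position, industry, region, start = 1 } = items[0].json;

const params = new URLSearchParams({
  key: 'YOUR_API_KEY',   // Google Custom Search API key (placeholder)
  cx: 'YOUR_CSE_ID',     // Custom Search Engine ID (placeholder)
  q: `site:linkedin.com/in "${position}" "${industry}" "${region}"`,
  start: String(start),  // 1, 11, 21, ... for pagination
});

const url = `https://www.googleapis.com/customsearch/v1?${params.toString()}`;

// After the HTTP Request node runs, a follow-up check like this detects more pages:
// const hasNextPage = Boolean($json.queries?.nextPage?.length);

return [{ json: { url, nextStart: start + 10 } }];
```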
by Alexandra Spalato
**Scrape Google Maps leads and find emails with Apify and Anymailfinder**

**Short Description**

This workflow automates lead generation by scraping business data from Google Maps using Apify, enriching it with verified email addresses via Anymailfinder, and storing the results in a NocoDB database. It is designed to prevent duplicates by checking against existing records before saving new leads.

**Key Features**

- **Automated Scraping:** Kicks off a Google Maps search based on your query, city, and country.
- **Email Enrichment:** For businesses with a website, it automatically finds professional email addresses.
- **Data Cleaning:** Cleans website URLs to extract the root domain, ignoring social media links (see the sketch after the installation instructions below).
- **Duplicate Prevention:** Checks against existing entries in NocoDB using the Google placeId to avoid adding the same lead twice.
- **Structured Storage:** Saves enriched lead data into a structured NocoDB database.
- **Batch Processing:** Efficiently handles and loops through all scraped results.

**Who This Workflow Is For**

- **Sales teams** looking for a source of local business leads.
- **Marketing agencies** building outreach campaigns for local clients.
- **Business developers** prospecting for new partnerships.
- **Freelancers** seeking clients in specific geographical areas.

**How It Works**

1. **Trigger:** The workflow starts when you submit the initial form with a business type (e.g., "plumber"), a city, a country code, and the number of results you want.
2. **Scrape Google Maps:** It sends the query to Apify to scrape Google Maps for matching businesses.
3. **Process Leads:** The workflow loops through each result one by one.
4. **Clean Data:** It extracts the main website domain from the URL provided by Google Maps.
5. **Check for Duplicates:** It queries your NocoDB database to see if the business (placeId) has already been saved. If so, it skips to the next lead.
6. **Find Emails:** If a valid website domain exists, it uses Anymailfinder to find associated email addresses.
7. **Store Lead:** The final data, including the business name, address, phone, website, and any found emails, is saved as a new row in your NocoDB table.

**Setup Requirements**

Required credentials:

- **Apify API key:** To use the Google Maps scraping actor.
- **Anymailfinder API key:** For email lookup.
- **NocoDB API token:** To connect to your database for storing and checking leads.

**Database Structure**

You need to create a table in your NocoDB instance with the following columns. The names should match exactly.

Table: leads (or your preferred name)

- title (SingleLineText)
- website (Url)
- phone (PhoneNumber)
- email (Email)
- email_validation (SingleLineText)
- address (LongText)
- neighborhood (SingleLineText)
- rating (Number)
- categories (LongText)
- city (SingleLineText)
- country (SingleLineText)
- postal code (SingleLineText)
- domain (Url)
- placeId (SingleLineText) - important for duplicate checking
- date (Date)

**Customization Options**

- **Change the trigger:** Replace the manual Form Trigger with a Schedule Trigger to run searches automatically, or an HTTP Request node to start it from another application.
- **Modify scraper parameters:** In the "Scrape Google Maps" node, you can adjust the Apify actor's JSON input to change the language, include reviews, or customize other advanced settings.
- **Use a different database:** Replace the NocoDB nodes with nodes for Google Sheets, Baserow, Airtable, or any SQL database to store your leads.

**Installation Instructions**

1. Import the workflow into your n8n instance.
2. Create the required table structure in your NocoDB instance as detailed above.
3. Configure the credentials for Apify, Anymailfinder, and NocoDB in the respective nodes.
4. In the two NocoDB nodes ("Get all the recorded placeIds" and "Create a row"), select your project and table from the dropdown menus.
5. Activate the workflow. You can now run it by filling out the form in the n8n UI.
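As referenced in the Data Cleaning feature above, the Clean Data step works along these lines: extract the root domain from the scraped website URL and skip social-media links so Anymailfinder only receives real company domains. This is a hedged sketch; the template's own node may implement it differently.

```javascript
// n8n Code node sketch: extract a root domain from the scraped website URL,
// ignoring social media links. Illustrative only - the template's own logic may differ.
const SOCIAL = ['facebook.com', 'instagram.com', 'linkedin.com', 'twitter.com', 'x.com'];

return items.map((item) => {
  const website = item.json.website || '';
  let domain = null;

  try {
    const host = new URL(website).hostname.replace(/^www\./, '');
    const isSocial = SOCIAL.some((s) => host === s || host.endsWith(`.${s}`));
    if (!isSocial) domain = host;
  } catch (e) {
    // not a valid URL - leave domain empty so the email lookup is skipped
  }

  return { json: { ...item.json, domain } };
});
```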
by Baptiste Fort
**Who is it for?**

This workflow is perfect for anyone who wants to:

- **Automatically collect contacts from Google Maps:** emails, phone numbers, websites, social media (LinkedIn, Facebook), city, ratings, and reviews.
- **Organize everything neatly in Airtable**, without dealing with messy CSV exports that cause headaches.
- **Send a personalized email to each lead**, without writing it or hitting "send" yourself.

👉 In short, it's the perfect tool for marketing agencies, freelancers in prospecting, or sales teams tired of endless copy-paste. If you want to automate manual tasks, visit our French agency 0vni – Agence automatisation.

**How does it work?**

Here's the pipeline:

1. Scrape Google Maps with Apify (business name, email, website, phone, LinkedIn, Facebook, city, rating, etc.).
2. Clean and map the data so everything is well-structured (Company, Email, Phone, etc.).
3. Send everything into Airtable to build a clear, filterable database.
4. Trigger an automatic email via Gmail, personalized for each lead.

👉 The result: a real prospecting machine for local businesses.

**What you need before starting**

- ✅ An Apify account (for Google Maps scraping).
- ✅ An Airtable account with a prepared base (see structure below).
- ✅ A Gmail account (to send automatic emails).

**Airtable Base Structure**

Your table should contain the following columns:

| Company | Email | Phone Number | Website | LinkedIn | Facebook | City | Category | Google Maps Reviews | Google Maps Link |
| ------- | --------------- | ----------------- | -------------------- | -------------- | -------------- | ---------------- | ---------------- | ------------------- | ----------------- |
| 4 As | contact@4-as.fr | +33 1 89 36 89 00 | https://www.4-as.fr/ | linkedin.com/… | facebook.com/… | 94100 Saint-Maur | training, center | 48 reviews / 5 ★ | maps.google.com/… |

**Detailed Workflow Steps**

**Step 1 – GO**

- **Node:** Manual Trigger
- **Purpose:** Start the workflow manually.

👉 You can replace this trigger with a Webhook (to launch the flow via a URL) or a Cron (to run it automatically on a schedule).

**Step 2 – Scrape Google Maps**

- **Node:** HTTP Request
- **Method:** POST

Where to find the Apify URL?

1. Go to the Google Maps Email Leads Fast Scraper.
2. Click on API (top right).
3. Open API Endpoints.
4. Copy the URL of the third option: Run Actor synchronously and get dataset items.

👉 This URL already includes your Apify API token.

- **Body Content Type:** JSON
- **Body JSON (example):**

{
  "area_height": 10,
  "area_width": 10,
  "emails_only": true,
  "gmaps_url": "https://www.google.com/maps/search/training+centers+near+Amiens/",
  "max_results": 200,
  "search_query": "training center"
}

**Step 3 – Wait**

- **Node:** Wait
- **Purpose:** Give the scraper enough time to return data.
- **Recommended delay:** 10 seconds (adjust if needed).

👉 This ensures that Apify has finished processing before we continue.

**Step 4 – Mapping**

- **Node:** Set
- **Purpose:** Clean and reorganize the raw dataset into structured fields that match the Airtable columns.

Assignments (example):

- Company = {{ $json.name }}
- Email = {{ $json.email }}
- Phone = {{ $json.phone_number }}
- Website = {{ $json.website_url }}
- LinkedIn = {{ $json.linkedin }}
- Facebook = {{ $json.facebook }}
- City = {{ $json.city }}
- Category = {{ $json.google_business_categories }}
- Google Maps Reviews = {{ $json.reviews_number }} reviews, rating {{ $json.review_score }}/5
- Google Maps Link = {{ $json.google_maps_url }}

👉 Result: The data is now clean and ready for Airtable.
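If you prefer a single Code node over the Set node for Step 4, the same mapping can be sketched as below. The output keys match the Airtable columns above, and the incoming property names are the scraper fields listed in the assignments; it is only an alternative sketch, not the node used in the template.

```javascript
// n8n Code node sketch: same mapping as the Set node in Step 4, done in code.
return items.map((item) => {
  const d = item.json;
  return {
    json: {
      Company: d.name,
      Email: d.email,
      Phone: d.phone_number,
      Website: d.website_url,
      LinkedIn: d.linkedin,
      Facebook: d.facebook,
      City: d.city,
      Category: d.google_business_categories,
      'Google Maps Reviews': `${d.reviews_number} reviews, rating ${d.review_score}/5`,
      'Google Maps Link': d.google_maps_url,
    },
  };
});
```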
**Step 5 – Airtable Storage**

- **Node:** Airtable → Create Record
- **Parameters:**
  - Credential to connect with: Airtable Personal Access Token account
  - Resource: Record
  - Operation: Create
  - Base: Select from list → your base (example: GOOGLE MAPS SCRAPT)
  - Table: Select from list → your table (example: Google maps scrapt)
  - Mapping Column Mode: Map Each Column Manually

👉 To get your Base ID and Table ID, open your Airtable in the browser:
https://airtable.com/appA6eMHOoquiTCeO/tblZFszM5ubwwSYDK

Here:
- Base ID = appA6eMHOoquiTCeO
- Table ID = tblZFszM5ubwwSYDK

Authentication:

1. Go to: https://airtable.com/create/tokens
2. Create a new Personal Access Token.
3. Give it access to the correct base.
4. Copy the token into the n8n credentials (select Airtable Personal Access Token).

Field mapping (example):

- Company: {{ $json['Company'] }}
- Email: {{ $json.Email }}
- Phone: {{ $json['Phone'] }}
- Website: {{ $json['Website'] }}
- LinkedIn: {{ $json.LinkedIn }}
- Facebook: {{ $json.Facebook }}
- City: {{ $json.City }}
- Category: {{ $json['Category'] }}
- Google Maps Reviews: {{ $json['Google Maps Reviews'] }}
- Google Maps Link: {{ $json['Google Maps Link'] }}

👉 Result: Each lead scraped from Google Maps is automatically saved into Airtable, ready to be filtered, sorted, or used for outreach.

**Step 6 – Automatic Email**

- **Node:** Gmail → Send Email
- **Parameters:**
  - To: {{ $json.fields.Email }}
  - Subject: {{ $json.fields['Company'] }}
  - Message: HTML email with dynamic lead details.

Example HTML message:

Hello {{ $json.fields['Company'] }} team,
I design custom automations for training centers. Goal: zero repetitive manual tasks, from registration to invoicing.
Details: {{ $json.fields['Company'] }} in {{ $json.fields.City }} — website: {{ $json.fields['Website'] }} — {{ $json.fields['Google Maps Reviews'] }}
Interested in a quick 15-min call to see a live demo?

👉 Result: Each contact receives a fully personalized email with their company name, city, website, and Google Maps rating.

**Final Result**

With just one click:

1. Scrape Google Maps (Apify).
2. Clean and structure the data (Set).
3. Save everything into Airtable.
4. Send personalized emails via Gmail.

👉 All without copy-paste, without CSV, and without Excel headaches.
by Ian Kerins
**Overview**

This n8n template automates Walmart product discovery and sends clean results to Google Sheets on a fixed schedule (default: every 4 hours). It uses the ScrapeOps Proxy API for resilient page fetches (with JS render + scroll) and the ScrapeOps Parser API for structured data extraction (title, price, rating, reviews, image, URL, sponsored flag). The result is a repeatable, low-maintenance workflow for market research, price monitoring, and assortment tracking; ideal for ops and growth teams that need fresh data without babysitting scrapers.

**Who is this for?**

- **E-commerce operators** tracking price & inventory signals
- **Market/competitive analysts** building price baskets and trend views
- **Growth & SEO teams** validating product coverage and SERP facets
- **No-code/low-code builders** who prefer visual pipelines over custom code

**What problems it solves**

- **Reliability:** Offloads JS rendering and scrolling to ScrapeOps to reduce breakage.
- **Structure:** Normalizes fields for analysis-ready rows in Sheets.
- **Scale:** Runs on a timer; no manual downloading or copy-paste.
- **Speed to value:** Simple setup, minimal credentials, immediate output.

**How it works**

1. Schedule triggers every 4 hours.
2. Keyword builds a Walmart search URL.
3. ScrapeOps Proxy API fetches the HTML (render + scroll).
4. ScrapeOps Parser API extracts structured product fields.
5. Validate & format rows; drop empties/bad prices.
6. Append to Google Sheets for reporting/dashboards.
7. (Optional) Slack posts a summary with your results link.

**Set up steps (~5–10 minutes)**

1. **Google Sheets:** Duplicate the template and paste your link in the Google Sheets node.
2. **ScrapeOps API:** Get a free key and add it under Credentials → ScrapeOps API. See the docs.
3. **Keyword:** Update the search term in Set Search Parameters.
4. (Optional) Configure the Slack node or remove it.

**Pre-conditions**

- n8n instance running with outbound internet access.
- Google account with access to the destination Sheet.
- ScrapeOps account + API key with sufficient quota.
- Basic familiarity with editing node parameters in n8n.

**Disclaimer**

This template uses ScrapeOps as a community node. You are responsible for complying with Walmart's Terms of Use, robots directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render/scroll/wait settings and parsers as needed. Use responsibly for legitimate business purposes.
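The "validate & format rows; drop empties/bad prices" step can be pictured as a Code node like the one below. It is a hedged sketch: the field names (title, price, rating, etc.) mirror the extracted fields listed above, but the Parser API's exact keys may differ.

```javascript
// n8n Code node sketch: keep only rows with a title and a usable price.
// Field names mirror the extracted fields listed above; adjust to the parser's real keys.
const rows = [];

for (const item of items) {
  const p = item.json;
  const price = Number(String(p.price ?? '').replace(/[^0-9.]/g, ''));

  if (!p.title || !Number.isFinite(price) || price <= 0) continue; // drop empties / bad prices

  rows.push({
    json: {
      title: p.title,
      price,
      rating: p.rating ?? '',
      reviews: p.reviews ?? '',
      image: p.image ?? '',
      url: p.url ?? '',
      sponsored: Boolean(p.sponsored),
      scraped_at: new Date().toISOString(),
    },
  });
}

return rows;
```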
by keisha kalra
**Try It Out!**

This n8n template helps you analyze Google Maps reviews for a list of restaurants, summarize them with AI, and identify optimization opportunities, all in one automated workflow. Whether you're managing multiple locations, helping local restaurants improve their digital presence, or conducting a competitor analysis, this workflow helps you extract insights from dozens of reviews in minutes.

**How It Works**

1. Start with a pre-filled list of restaurants in Google Sheets.
2. The workflow uses SerpAPI to scrape Google Maps reviews for each listing.
3. Reviews with content are passed to ChatGPT for summarization.
4. Empty or failed reviews are logged in a separate tab for easy follow-up.
5. Results are stored back in your Google Sheet for analysis or sharing.

**How To Use**

- Customize the input list in Google Sheets with your own restaurants.
- Update the OpenAI prompt if you want a different style of summary.
- You can trigger this manually or swap in a schedule, webhook, or other event.

**Requirements**

- A SerpAPI account to fetch reviews
- An OpenAI account for ChatGPT summarization
- Access to Google Sheets and n8n

**Who Is It For?**

This is helpful for people looking to analyze a large batch of Google reviews in a short amount of time. It can also be used to compare restaurants and see where each can be optimized.

**How To Set Up**

Use a SerpAPI endpoint in the HTTP Request node. Refer to the n8n documentation for more help: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/

Happy Automating!
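For the HTTP Request node, a Google Maps reviews call to SerpAPI has roughly the shape below. Treat it as a sketch: the engine and parameter names follow SerpAPI's public documentation, but double-check them against the docs linked above, and the data_id value is assumed to come from a prior Google Maps search result.

```javascript
// n8n Code node sketch: build the SerpAPI URL used by the HTTP Request node.
// The engine/data_id parameter names follow SerpAPI's Google Maps Reviews API; verify against their docs.
const params = new URLSearchParams({
  engine: 'google_maps_reviews',
  data_id: items[0].json.data_id ?? '', // identifies the restaurant listing (assumed field name)
  hl: 'en',
  api_key: 'YOUR_SERPAPI_KEY',          // placeholder
});

return [{ json: { url: `https://serpapi.com/search.json?${params.toString()}` } }];
```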
by David Ashby
Complete MCP server exposing 27 Amazon CloudWatch Application Insights API operations to AI agents.

**⚡ Quick Setup**

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. **Credentials:** Add Amazon CloudWatch Application Insights credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

**🔧 How it Works**

This workflow converts the Amazon CloudWatch Application Insights API into an MCP-compatible interface for AI agents.

- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to http://applicationinsights.{region}.amazonaws.com
- **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
- **Native Integration:** Returns responses directly to the AI agent

(A sketch of what a single operation call looks like is included at the end of this listing.)

**📋 Available Operations (27 total)**

- **CreateApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.CreateApplication): Adds an application that is created from a resource group.
- **CreateComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.CreateComponent): Creates a custom component by grouping similar standalone instances to monitor.
- **CreateLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.CreateLogPattern): Adds a log pattern to a LogPatternSet.
- **DeleteApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteApplication): Removes the specified application from monitoring. Does not delete the application.
- **DeleteComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteComponent): Ungroups a custom component. When you ungroup custom components, all applicable monitors that are set up for the component are removed and the instances revert to their standalone status.
- **DeleteLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteLogPattern): Removes the specified log pattern from a LogPatternSet.
- **DescribeApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeApplication): Describes the application.
- **DescribeComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponent): Describes a component and lists the resources that are grouped together in a component.
- **DescribeComponentConfiguration** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfiguration): Describes the monitoring configuration of the component.
- **DescribeComponentConfigurationRecommendation** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfigurationRecommendation): Describes the recommended monitoring configuration of the component.
- **DescribeLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeLogPattern): Describes a specific log pattern from a LogPatternSet.
- **DescribeObservation** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeObservation): Describes an anomaly or error with the application.
- **DescribeProblem** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblem): Describes an application problem.
- **DescribeProblemObservations** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblemObservations): Describes the anomalies or errors associated with the problem.
- **ListApplications** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListApplications): Lists the IDs of the applications that you are monitoring.
- **ListComponents** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListComponents): Lists the auto-grouped, standalone, and custom components of the application.
- **ListConfigurationHistory** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListConfigurationHistory): Lists the INFO, WARN, and ERROR events for periodic configuration updates performed by Application Insights. Examples of events represented are: INFO: creating a new alarm or updating an alarm threshold. WARN: alarm not created due to insufficient data points used to predict thresholds. ERROR: alarm not created due to permission errors or exceeding quotas.
- **ListLogPatternSets** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatternSets): Lists the log pattern sets in the specific application.
- **ListLogPatterns** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatterns): Lists the log patterns in the specific log LogPatternSet.
- **ListProblems** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListProblems): Lists the problems with your application.
- **ListTagsForResource** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListTagsForResource): Retrieve a list of the tags (keys and values) that are associated with a specified application. A tag is a label that you optionally define and associate with an application. Each tag consists of a required tag key and an optional associated tag value. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key.
- **TagResource** (POST /#X-Amz-Target=EC2WindowsBarleyService.TagResource): Add one or more tags (keys and values) to a specified application. A tag is a label that you optionally define and associate with an application. Tags can help you categorize and manage applications in different ways, such as by purpose, owner, environment, or other criteria. Each tag consists of a required tag key and an associated tag value, both of which you define. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key.
- **UntagResource** (POST /#X-Amz-Target=EC2WindowsBarleyService.UntagResource): Remove one or more tags (keys and values) from a specified application.
- **UpdateApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateApplication): Updates the application.
- **UpdateComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponent): Updates the custom component name and/or the list of resources that make up the component.
- **UpdateComponentConfiguration** (POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponentConfiguration): Updates the monitoring configurations for the component. The configuration input parameter is an escaped JSON of the configuration and should match the schema of what is returned by DescribeComponentConfigurationRecommendation.
- **UpdateLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateLogPattern): Adds a log pattern to a LogPatternSet.

**🤖 AI Integration**

Parameter handling: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

Response format: Native Amazon CloudWatch Application Insights API responses with full data structure.

Error handling: Built-in n8n HTTP request error management.

**💡 Usage Examples**

Connect this MCP server to any AI agent or workflow:

- **Claude Desktop:** Add the MCP server URL to its configuration
- **Cursor:** Add the MCP server SSE URL to its configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

**✨ Benefits**

- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in $fromAI() expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
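As promised above, here is what one of these operations looks like at the HTTP level, sketched for ListApplications. It is illustrative only: the X-Amz-Target header and the application/x-amz-json-1.1 content type follow the service conventions shown in the operation list, the MaxResults body field is an example parameter, and request signing (SigV4) is handled by the AWS credential attached to the HTTP Request node rather than by this code.

```javascript
// Sketch of a single operation call (ListApplications). In the workflow this is an
// HTTP Request node with AWS credentials; SigV4 signing is added by n8n, not by this code.
const region = 'us-east-1'; // example region

const request = {
  method: 'POST',
  url: `https://applicationinsights.${region}.amazonaws.com/`,
  headers: {
    'Content-Type': 'application/x-amz-json-1.1',
    'X-Amz-Target': 'EC2WindowsBarleyService.ListApplications',
  },
  // Parameters are normally filled by $fromAI() placeholders; MaxResults is illustrative.
  body: { MaxResults: 10 },
};

return [{ json: request }];
```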