by Md Khalid Ali
## Overview
Turn documents into an AI-powered knowledge base. Upload PDF, CSV, or JSON files and ask natural-language questions about their content using a Retrieval-Augmented Generation (RAG) workflow powered by Google Gemini. The workflow extracts, embeds, and semantically searches document data to generate accurate, source-grounded answers. Designed as a simple, extensible starting point for building AI document assistants.

## Key Features
- Upload and analyze PDF, CSV, and JSON files
- AI chatbot with semantic document search
- Retrieval-Augmented Generation (RAG) architecture
- Answers grounded in uploaded documents
- Beginner-friendly workflow with clear documentation
- Easy to extend for production use

## How It Works
1. Upload a document via a form trigger.
2. Content is split into searchable chunks.
3. Gemini generates embeddings.
4. Data is stored in a vector store.
5. The chatbot retrieves context and answers questions.

## Requirements
- Google Gemini API credentials

## Notes
- Uses an in-memory vector store (data resets on restart); it can be replaced with Pinecone, Supabase, Weaviate, or another persistent database.
- Gemini API usage may incur costs depending on document size and query volume.
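The chunking step above can be sketched as a simple sliding-window splitter. This is an illustrative stand-in for the workflow's built-in text splitter, not its actual node; the chunk size and overlap values are assumptions.

```javascript
// Minimal sliding-window text splitter, similar in spirit to the
// "split into searchable chunks" step. Sizes are illustrative.
function chunkText(text, chunkSize = 500, overlap = 50) {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Overlapping chunks help the retriever when an answer spans a chunk boundary; each chunk would then be embedded by Gemini and written to the vector store.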
by WeblineIndia
# Zoho CRM - Buyer Persona Matcher

This workflow automates the process of identifying buyer personas and generating personalized sales outreach. It triggers via a Zoho CRM webhook when a deal is updated, enriches the contact data by scraping LinkedIn profiles using Phantombuster, and uses OpenAI to match the contact against a defined Persona Database. The final analysis, including a tailored outreach message and confidence score, is synced back to the Zoho CRM contact record.

## Quick Implementation Steps
1. Set the phantombusterAgentId and personaMatchThreshold in the Workflow Configuration node.
2. Configure your Zoho CRM webhook to trigger on "Potentials" (Deals) updates.
3. Connect your Phantombuster API credential to the Scrape LinkedIn Profile node.
4. Connect your OpenAI API credential to the OpenAI Chat Model node.
5. Connect your Zoho CRM OAuth2 credential to the CRM nodes.

## What It Does
This workflow transforms raw CRM data into actionable sales intelligence. When a deal is identified, it retrieves the contact's LinkedIn URL and uses an external scraper to gather real-time professional data (title, experience, and skills). The AI-driven core compares this live data against a structured database of target personas (e.g., Executive Decision Maker, Technical Evaluator) to:

- Assign a specific **Buyer Persona** to the contact.
- Calculate a **Confidence Score** (0-1) for the match.
- Generate a **Personalized Outreach Message** and specific **Talking Points** based on the contact's background.

The results are automatically written back to the Zoho CRM contact description, providing sales reps with an immediate strategy for engagement.

## Who's It For
- **Sales Development Representatives (SDRs)** wanting to automate the research phase of outbound prospecting.
- **Account Executives (AEs)** looking for personalized talking points tailored to the specific seniority and role of their prospects.
- **Revenue Operations (RevOps)** teams aiming to standardize persona data within the CRM based on objective professional data.
## Requirements to Use This Workflow
- A running n8n instance (self-hosted or cloud).
- **Zoho CRM** account with "Potentials" (Deals) and "Contacts" modules.
- **Phantombuster** account and a configured LinkedIn Profile Scraper agent.
- **OpenAI** account (API key) for the persona-matching agent.

## How It Works & How To Set Up

### Step 1: Configure Trigger and Global Variables
- **Zoho CRM Webhook**: Set up a workflow rule in Zoho CRM to trigger this webhook when a deal is updated or created.
- **Workflow Configuration**: Open this node and set your parameters:
  - phantombusterAgentId: Your specific Phantombuster scraper ID.
  - personaMatchThreshold: The minimum score (e.g., 0.7) required to accept an AI persona match.

### Step 2: External Enrichment Setup
- **Extract Contact Data**: This node pulls the LinkedIn URL from the Zoho "Next Step" or a custom field. Ensure your Zoho deal records contain valid LinkedIn URLs.
- **Scrape LinkedIn Profile**: Connect your Phantombuster credentials to allow the workflow to fetch live professional details.

### Step 3: Define Your Personas
- **Persona Database**: Modify the JSON in this node to reflect your company's specific target buyer personas, including their characteristics and typical communication styles.
- **OpenAI Chat Model**: Connect your OpenAI API credential. The workflow uses gpt-4o-mini by default for analysis.

### Step 4: CRM Synchronization
- **Update Zoho CRM Contact**: This node pushes the AI's findings (Persona, Style, Talking Points) into the CRM. Ensure the contactId mapping matches your Zoho environment.

## How To Customize Nodes
- **Expand Persona Profiles**: Update the Persona Database node to include more niche roles or industry-specific traits to improve the AI's matching accuracy.
- **Adjust Outreach Tone**: Modify the "System Message" in the Persona Matcher & Outreach Generator to change the tone of the generated messages (e.g., more formal or more casual).
- **Custom Field Mapping**: Change the Update Zoho CRM Contact node to map the persona data into custom CRM fields instead of the default "Description" field for better reporting.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
| :--- | :--- | :--- |
| No LinkedIn Data | LinkedIn URL missing or invalid format. | Ensure the Zoho field used for the URL contains a full https://linkedin.com/in/... link. |
| Phantombuster Error | Agent ID is incorrect or API credits exhausted. | Verify the phantombusterAgentId in Workflow Configuration and check your Phantombuster dashboard. |
| Low Confidence Scores | LinkedIn profile is too sparse or private. | The AI may need more data; ensure the scraper is successfully pulling the "About" and "Experience" sections. |
| CRM Update Failing | OAuth2 connection expired. | Re-authenticate your Zoho CRM OAuth2 credentials in n8n. |

## Need Help?
If you need help tailoring the AI persona-matching logic, integrating additional data scrapers, or mapping these insights into specific CRM dashboards, please reach out to our n8n workflow developers at WeblineIndia. We can help you scale your personalized outreach and increase your conversion rates.
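Returning to Step 3 above, here is a minimal sketch of what the Persona Database JSON and the personaMatchThreshold check might look like. The persona names, trait fields, and threshold value are assumptions; adapt them to the schema your node actually uses.

```javascript
// Hypothetical persona database; field names are illustrative,
// not the template's exact schema.
const personaDatabase = [
  { name: "Executive Decision Maker", traits: ["C-level", "budget owner"], style: "concise, ROI-focused" },
  { name: "Technical Evaluator", traits: ["engineer", "architect"], style: "detailed, spec-driven" },
];

// Accept an AI match only when its confidence clears the threshold.
function acceptMatch(match, threshold = 0.7) {
  return typeof match.confidence === "number" && match.confidence >= threshold
    ? { persona: match.persona, confidence: match.confidence }
    : null; // below threshold: leave the contact unclassified
}
```

Gating writes to Zoho on the threshold keeps low-confidence guesses out of your CRM, which matters once reps start trusting the field.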
by Pixcels Themes
## Who's it for
This template is ideal for recruiters, founders, sales teams, and lead-generation specialists who want to quickly collect LinkedIn profiles based on role, industry, and region. It is perfect for users who want profile lists for outreach, research, hiring, or market analysis without manually searching LinkedIn.

## What it does / How it works
This workflow begins with a web form where you enter three inputs: position, industry, and region. Once the form is submitted, the workflow performs a Google Custom Search query restricted to LinkedIn profile URLs. The results are processed to extract structured profile information such as:

- Name
- Job title (cleaned using custom logic)
- LinkedIn profile link
- Description / bio snippet
- Profile image URL

The workflow automatically handles pagination by detecting whether more results are available and continues fetching until the limit is reached. All extracted profiles are appended or updated in a Google Sheet so you always maintain an organized, deduplicated list.

## Requirements
- Google Sheets OAuth2 credentials
- Google Custom Search API key
- Google CSE (Custom Search Engine) ID
- A Google Sheet with the required columns (name, title, profile link, description, image link, searched position, searched industry, searched region)

## How to set up
1. Connect your Google Sheets credentials.
2. Add your Custom Search API key and CSE ID inside the HTTP Request node.
3. Select your target Google Sheet in the "Append or update row in sheet" node.
4. Open the form URL and submit your position, industry, and region.
5. Run the workflow to begin scraping profiles.

## How to customize the workflow
- Modify the search query structure for niche industries
- Add enrichment tools (Hunter.io, Clearbit, People Data)
- Expand the pagination limit beyond the default
- Add filters to remove non-relevant results
- Output data to CRM tools like HubSpot, Notion, Airtable, or Sheets
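The search-and-paginate logic can be sketched like this. The Google Custom Search JSON API genuinely uses the key, cx, q, and start parameters and returns at most 10 results per page; the site:linkedin.com/in restriction and the exact query shape here are assumptions about how the template composes its query.

```javascript
// Build a paginated Google Custom Search URL restricted to LinkedIn profiles.
// `start` is 1-based and advances by the page size (capped at 10 by the API).
function buildCseUrl({ apiKey, cseId, position, industry, region, start = 1 }) {
  const q = `site:linkedin.com/in "${position}" "${industry}" "${region}"`;
  const params = new URLSearchParams({ key: apiKey, cx: cseId, q, start: String(start) });
  return `https://www.googleapis.com/customsearch/v1?${params}`;
}

// Decide the next `start` value, or null to stop paginating.
function nextStart(currentStart, itemsReturned, limit) {
  const next = currentStart + itemsReturned;
  // Stop when the page came back short or we hit our own limit.
  return itemsReturned === 10 && next <= limit ? next : null;
}
```

In the workflow, `nextStart` returning null corresponds to the branch that stops fetching and moves on to writing rows to the sheet.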
by Ian Kerins
## Overview
This n8n template automates scraping Redfin property listings on a schedule. Using the ScrapeOps Proxy API for reliable page fetching and the ScrapeOps Redfin Parser API for structured data extraction, it saves clean listing rows to Google Sheets and sends an optional Slack summary.

## Who is this for?
- Real estate investors monitoring listings in target markets
- Agents and brokers tracking new properties across cities or ZIP codes
- Analysts building property datasets without manual data entry
- Anyone who wants automated, scheduled Redfin data in a spreadsheet

## What problem does it solve?
Manually checking Redfin for new listings is slow and inconsistent. This workflow runs on a schedule, scrapes your target search page, parses and filters valid listings, and keeps your Google Sheet updated automatically; no browser or manual copy-paste needed.

## How it works
1. A schedule triggers the workflow every 6 hours.
2. ScrapeOps Proxy fetches the Redfin search page with JS rendering and residential proxy support.
3. The ScrapeOps Parser API extracts clean, structured JSON from the HTML.
4. Search metadata (total listings, region, price range) is lifted and stored.
5. The results array is split into one item per property.
6. Property fields are normalized: address, price, beds, baths, sqft, status, and more.
7. Invalid listings (missing address or price = 0) are filtered out.
8. Valid listings are appended to Google Sheets.
9. An optional Slack message posts a summary with the listing count and sheet link.

## Set up steps (~10–15 minutes)
1. Register for a free ScrapeOps API key: https://scrapeops.io/app/register/n8n
2. Add ScrapeOps credentials to both ScrapeOps nodes. Docs: https://scrapeops.io/docs/n8n/overview/
3. Duplicate the Google Sheet template and paste your Sheet ID into Save Listings to Google Sheets.
4. In Set Search Parameters, update redfin_url to your target city or ZIP search page.
5. Optional: open Send Slack Summary, select your Slack credential, and set your channel.
Run once manually to confirm results, then activate.

## Pre-conditions
- Active ScrapeOps account (free tier available): https://scrapeops.io/app/register/n8n
- ScrapeOps community node installed in n8n: https://scrapeops.io/docs/n8n/overview/
- Google Sheets credentials configured in n8n
- A duplicated Google Sheet with column headers matching the formatter output
- Optional: Slack credentials for the summary notification node

## Disclaimer
This template uses ScrapeOps as a community node. You are responsible for complying with Redfin's Terms of Use, robots.txt directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render, scroll, and wait settings and parsers as needed. Use responsibly and only for legitimate business purposes.
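The validation in step 7 of "How it works" ("missing address or price = 0") can be sketched as a simple filter. The field names address and price come from the normalized fields listed above; everything else here is illustrative.

```javascript
// Drop listings that lack an address or have a zero/invalid price,
// mirroring the workflow's "validate & filter" step.
function filterValidListings(listings) {
  return listings.filter(
    (l) => typeof l.address === "string" && l.address.trim() !== "" && Number(l.price) > 0
  );
}
```

Filtering before the Sheets append keeps partial parser output from polluting the dataset your dashboards read.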
by Alexandra Spalato
# Scrape Google Maps leads and find emails with Apify and Anymailfinder

## Short Description
This workflow automates lead generation by scraping business data from Google Maps using Apify, enriching it with verified email addresses via Anymailfinder, and storing the results in a NocoDB database. It's designed to prevent duplicates by checking against existing records before saving new leads.

## Key Features
- **Automated Scraping**: Kicks off a Google Maps search based on your query, city, and country.
- **Email Enrichment**: For businesses with a website, it automatically finds professional email addresses.
- **Data Cleaning**: Cleans website URLs to extract the root domain, ignoring social media links.
- **Duplicate Prevention**: Checks against existing entries in NocoDB using the Google placeId to avoid adding the same lead twice.
- **Structured Storage**: Saves enriched lead data into a structured NocoDB database.
- **Batch Processing**: Efficiently handles and loops through all scraped results.

## Who This Workflow Is For
- **Sales Teams** looking for a source of local business leads.
- **Marketing Agencies** building outreach campaigns for local clients.
- **Business Developers** prospecting for new partnerships.
- **Freelancers** seeking clients in specific geographical areas.

## How It Works
1. **Trigger**: The workflow starts when you submit the initial form with a business type (e.g., "plumber"), a city, a country code, and the number of results you want.
2. **Scrape Google Maps**: It sends the query to Apify to scrape Google Maps for matching businesses.
3. **Process Leads**: The workflow loops through each result one by one.
4. **Clean Data**: It extracts the main website domain from the URL provided by Google Maps.
5. **Check for Duplicates**: It queries your NocoDB database to see if the business (placeId) has already been saved. If so, it skips to the next lead.
6. **Find Emails**: If a valid website domain exists, it uses Anymailfinder to find associated email addresses.
**Store Lead**: The final data, including the business name, address, phone, website, and any found emails, is saved as a new row in your NocoDB table.

## Setup Requirements

### Required Credentials
- **Apify API Key**: To use the Google Maps scraping actor.
- **Anymailfinder API Key**: For email lookup.
- **NocoDB API Token**: To connect to your database for storing and checking leads.

### Database Structure
You need to create a table in your NocoDB instance with the following columns. The names should match exactly.

Table: leads (or your preferred name)
- title (SingleLineText)
- website (Url)
- phone (PhoneNumber)
- email (Email)
- email_validation (SingleLineText)
- address (LongText)
- neighborhood (SingleLineText)
- rating (Number)
- categories (LongText)
- city (SingleLineText)
- country (SingleLineText)
- postal code (SingleLineText)
- domain (Url)
- placeId (SingleLineText) - important for duplicate checking
- date (Date)

## Customization Options
- **Change Trigger**: Replace the manual Form Trigger with a Schedule Trigger to run searches automatically, or an HTTP Request node to start it from another application.
- **Modify Scraper Parameters**: In the "Scrape Google Maps" node, you can adjust the Apify actor's JSON input to change language, include reviews, or customize other advanced settings.
- **Use a Different Database**: Replace the NocoDB nodes with nodes for Google Sheets, Baserow, Airtable, or any SQL database to store your leads.

## Installation Instructions
1. Import the workflow into your n8n instance.
2. Create the required table structure in your NocoDB instance as detailed above.
3. Configure the credentials for Apify, Anymailfinder, and NocoDB in the respective nodes.
4. In the two NocoDB nodes ("Get all the recorded placeIds" and "Create a row"), select your project and table from the dropdown menus.
5. Activate the workflow. You can now run it by filling out the form in the n8n UI.
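The "Clean Data" and "Check for Duplicates" steps can be sketched together. Treating the NocoDB lookup as an already-fetched list of placeIds is an assumption; in the workflow it is the "Get all the recorded placeIds" query. The social-domain list is illustrative.

```javascript
// Extract the root domain from a scraped website URL, skipping social links,
// then keep only leads whose placeId has not been saved before.
const SOCIAL = ["facebook.com", "instagram.com", "linkedin.com", "twitter.com"];

function cleanDomain(url) {
  try {
    const host = new URL(url).hostname.replace(/^www\./, "");
    return SOCIAL.some((s) => host.endsWith(s)) ? null : host;
  } catch {
    return null; // missing or malformed URL
  }
}

function dropDuplicates(leads, existingPlaceIds) {
  const seen = new Set(existingPlaceIds);
  return leads.filter((l) => l.placeId && !seen.has(l.placeId));
}
```

Skipping social URLs matters because Anymailfinder looks up emails by company domain, and facebook.com would never resolve to the business itself.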
by Rahul Joshi
## 📊 Description
Automate your HR onboarding process by transforming complex policy PDFs into friendly, structured onboarding videos using AI and HeyGen. 📄🎬

This workflow receives HR policy documents via webhook, extracts and simplifies the content with GPT-based AI, generates a natural script for a HeyGen avatar, renders the onboarding video, checks its status until completion, and finally uploads the finished video to Google Drive. Perfect for HR teams who want scalable, consistent, and engaging onboarding experiences without manual video production. ✨👥

## 🔁 What This Template Does
1️⃣ Receives an HR policy PDF through a webhook for processing. 🌐
2️⃣ Downloads the PDF and extracts readable text from it. 📄
3️⃣ Uses AI to simplify policy language into structured onboarding guidance. 🤖
4️⃣ Converts the structured guidance into a friendly onboarding video script. 🗣️
5️⃣ Sends the script to HeyGen to generate a video with avatar narration. 🎥
6️⃣ Repeatedly checks the HeyGen API until the video is complete. ⏳
7️⃣ Downloads the completed video automatically. 📥
8️⃣ Uploads the final onboarding MP4 file into Google Drive. ☁️
9️⃣ Returns the video file via webhook for further automation or client-side display. 🔁
## ⭐ Key Benefits
✅ Converts dense HR documents into engaging onboarding videos
✅ Ensures consistency across all onboarding materials
✅ Reduces manual video scripting and editing workload
✅ Provides warm, friendly, employee-ready onboarding guidance
✅ Fully automated pipeline from PDF → AI script → HeyGen video → Drive
✅ Ideal for remote, hybrid, or fast-scaling HR teams

## 🧩 Features
- PDF ingestion via secure webhook
- Text extraction for accurate AI processing
- Two-stage AI workflow: policy simplification + script creation
- Structured JSON enforcement for reliable outputs
- HeyGen video generation with avatar narration
- Automated status-polling loop
- Google Drive upload with dynamic file naming
- End-to-end error handling
- Webhook response with video delivery

## 🔐 Requirements
- Google Drive OAuth2 credentials
- HeyGen API key
- OpenAI API key (GPT-4.1-mini or GPT-4o required)
- Webhook endpoint for PDF uploads
- Valid avatar ID + voice ID for HeyGen

## 🎯 Target Audience
- HR teams onboarding new employees
- L&D (Learning & Development) teams
- Companies that want to modernize policy training
- Fast-growing startups needing scalable onboarding content
- Agencies creating onboarding videos for clients
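The status-polling loop (step 6️⃣) can be sketched as a generic retry loop. The checkStatus function is injected so the example stays self-contained; HeyGen's real status endpoint, the state names used here ("processing", "completed", "failed"), and the polling interval are assumptions to verify against the HeyGen API docs.

```javascript
// Generic polling loop: call checkStatus() until it reports completion
// or we give up. Interval and attempt count are illustrative.
async function pollUntilComplete(checkStatus, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status.state === "completed") return status; // video ready
    if (status.state === "failed") throw new Error("Video generation failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before retrying
  }
  throw new Error("Timed out waiting for video");
}
```

In n8n this loop is usually built from a Wait node feeding back into the status HTTP Request node; the sketch just makes the termination conditions explicit.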
by Baptiste Fort
## Who is it for?
This workflow is perfect for anyone who wants to:

- **Automatically collect contacts from Google Maps**: emails, phone numbers, websites, social media (LinkedIn, Facebook), city, ratings, and reviews.
- **Organize everything neatly in Airtable**, without dealing with messy CSV exports that cause headaches.
- **Send a personalized email to each lead**, without writing it or hitting "send" yourself.

👉 In short, it's the perfect tool for marketing agencies, freelancers in prospecting, or sales teams tired of endless copy-paste. If you want to automate manual tasks, visit our French agency 0vni – Agence automatisation.

## How does it work?
Here's the pipeline:

1. Scrape Google Maps with Apify (business name, email, website, phone, LinkedIn, Facebook, city, rating, etc.).
2. Clean and map the data so everything is well-structured (Company, Email, Phone, etc.).
3. Send everything into Airtable to build a clear, filterable database.
4. Trigger an automatic email via Gmail, personalized for each lead.

👉 The result: a real prospecting machine for local businesses.

## What you need before starting
✅ An Apify account (for Google Maps scraping).
✅ An Airtable account with a prepared base (see structure below).
✅ A Gmail account (to send automatic emails).

## Airtable Base Structure
Your table should contain the following columns:

| Company | Email | Phone Number | Website | LinkedIn | Facebook | City | Category | Google Maps Reviews | Google Maps Link |
| ------- | ----- | ------------ | ------- | -------- | -------- | ---- | -------- | ------------------- | ---------------- |
| 4 As | contact@4-as.fr | +33 1 89 36 89 00 | https://www.4-as.fr/ | linkedin.com/… | facebook.com/… | 94100 Saint-Maur | training, center | 48 reviews / 5 ★ | maps.google.com/… |

## Detailed Workflow Steps

### Step 1 – GO
- **Node**: Manual Trigger
- **Purpose**: Start the workflow manually.
👉 You can replace this trigger with a Webhook (to launch the flow via a URL) or a Cron (to run it automatically on a schedule).

### Step 2 – Scrape Google Maps
- **Node**: HTTP Request
- **Method**: POST

Where to find the Apify URL?
1. Go to Google Maps Email Leads Fast Scraper
2. Click on API (top right)
3. Open API Endpoints
4. Copy the URL of the 3rd option: Run Actor synchronously and get dataset items

👉 This URL already includes your Apify API token.

- **Body Content Type**: JSON
- **Body JSON (example)**:

```json
{
  "area_height": 10,
  "area_width": 10,
  "emails_only": true,
  "gmaps_url": "https://www.google.com/maps/search/training+centers+near+Amiens/",
  "max_results": 200,
  "search_query": "training center"
}
```

### Step 3 – Wait
- **Node**: Wait
- **Purpose**: Give the scraper enough time to return data.
- **Recommended delay**: 10 seconds (adjust if needed).

👉 This ensures that Apify has finished processing before we continue.

### Step 4 – Mapping
- **Node**: Set
- **Purpose**: Clean and reorganize the raw dataset into structured fields that match the Airtable columns.

Assignments (example):
- Company = {{ $json.name }}
- Email = {{ $json.email }}
- Phone = {{ $json.phone_number }}
- Website = {{ $json.website_url }}
- LinkedIn = {{ $json.linkedin }}
- Facebook = {{ $json.facebook }}
- City = {{ $json.city }}
- Category = {{ $json.google_business_categories }}
- Google Maps Reviews = {{ $json.reviews_number }} reviews, rating {{ $json.review_score }}/5
- Google Maps Link = {{ $json.google_maps_url }}

👉 Result: The data is now clean and ready for Airtable.
### Step 5 – Airtable Storage
- **Node**: Airtable → Create Record
- **Parameters**:
  - Credential to connect with: Airtable Personal Access Token account
  - Resource: Record
  - Operation: Create
  - Base: Select from list → your base (example: GOOGLE MAPS SCRAPT)
  - Table: Select from list → your table (example: Google maps scrapt)
  - Mapping Column Mode: Map Each Column Manually

👉 To get your Base ID and Table ID, open your Airtable in the browser, e.g. https://airtable.com/appA6eMHOoquiTCeO/tblZFszM5ubwwSYDK. Here:
- Base ID = appA6eMHOoquiTCeO
- Table ID = tblZFszM5ubwwSYDK

Authentication:
1. Go to: https://airtable.com/create/tokens
2. Create a new Personal Access Token
3. Give it access to the correct base
4. Copy the token into n8n credentials (select Airtable Personal Access Token).

Field Mapping (example):
- Company: {{ $json['Company'] }}
- Email: {{ $json.Email }}
- Phone: {{ $json['Phone'] }}
- Website: {{ $json['Website'] }}
- LinkedIn: {{ $json.LinkedIn }}
- Facebook: {{ $json.Facebook }}
- City: {{ $json.City }}
- Category: {{ $json['Category'] }}
- Google Maps Reviews: {{ $json['Google Maps Reviews'] }}
- Google Maps Link: {{ $json['Google Maps Link'] }}

👉 Result: Each lead scraped from Google Maps is automatically saved into Airtable, ready to be filtered, sorted, or used for outreach.

### Step 6 – Automatic Email
- **Node**: Gmail → Send Email
- **Parameters**:
  - To: {{ $json.fields.Email }}
  - Subject: {{ $json.fields['Company'] }}
  - Message: HTML email with dynamic lead details.

Example HTML message:

> Hello {{ $json.fields['Company'] }} team,
> I design custom automations for training centers. Goal: zero repetitive manual tasks, from registration to invoicing.
> Details: {{ $json.fields['Company'] }} in {{ $json.fields.City }} — website: {{ $json.fields['Website'] }} — {{ $json.fields['Google Maps Reviews'] }}
> Interested in a quick 15-min call to see a live demo?

👉 Result: Each contact receives a fully personalized email with their company name, city, website, and Google Maps rating.
## Final Result
With just one click:

1. Scrape Google Maps (Apify).
2. Clean and structure the data (Set).
3. Save everything into Airtable.
4. Send personalized emails via Gmail.

👉 All without copy-paste, without CSV, and without Excel headaches.
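The Step 4 Set-node assignments can also be expressed as one plain function, handy if you prefer a single Code node over many Set assignments. The Apify field names (name, email, phone_number, …) come from the assignments above; blanking out missing fields is an assumption.

```javascript
// Code-node alternative to the Step 4 Set assignments: map one raw
// Apify item to the Airtable column names used in this template.
function mapLead(item) {
  const s = (v) => (v == null ? "" : String(v)); // assumption: blank out missing fields
  return {
    Company: s(item.name),
    Email: s(item.email),
    Phone: s(item.phone_number),
    Website: s(item.website_url),
    LinkedIn: s(item.linkedin),
    Facebook: s(item.facebook),
    City: s(item.city),
    Category: s(item.google_business_categories),
    "Google Maps Reviews": `${s(item.reviews_number)} reviews, rating ${s(item.review_score)}/5`,
    "Google Maps Link": s(item.google_maps_url),
  };
}
```

Keeping the mapping in one place also makes it easy to add a column later: change the function and the matching Airtable field, nothing else.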
by Ian Kerins
## Overview
This n8n template automates Walmart product discovery and sends clean results to Google Sheets on a fixed schedule (default: every 4 hours). It uses the ScrapeOps Proxy API for resilient page fetches (with JS render + scroll) and the ScrapeOps Parser API for structured data extraction (title, price, rating, reviews, image, URL, sponsored flag). The result is a repeatable, low-maintenance workflow for market research, price monitoring, and assortment tracking; ideal for ops and growth teams that need fresh data without babysitting scrapers.

## Who is this for?
- **E-commerce operators** tracking price & inventory signals
- **Market/competitive analysts** building price baskets and trend views
- **Growth & SEO teams** validating product coverage and SERP facets
- **No-code/low-code builders** who prefer visual pipelines over custom code

## What problems it solves
- **Reliability:** Offloads JS rendering and scrolling to ScrapeOps to reduce breakage.
- **Structure:** Normalizes fields into analysis-ready rows in Sheets.
- **Scale:** Runs on a timer; no manual downloading or copy-paste.
- **Speed to value:** Simple setup, minimal credentials, immediate output.

## How it works
1. A schedule triggers every 4 hours.
2. The keyword builds a Walmart search URL.
3. The ScrapeOps Proxy API fetches the HTML (render + scroll).
4. The ScrapeOps Parser API extracts structured product fields.
5. Rows are validated and formatted; empties and bad prices are dropped.
6. Results are appended to Google Sheets for reporting/dashboards.
7. (Optional) Slack posts a summary with your results link.

## Set up steps (~5–10 minutes)
1. **Google Sheets:** Duplicate the template and paste your link in the Google Sheets node.
2. **ScrapeOps API:** Get a free key and add it under **Credentials → ScrapeOps API**. See the docs.
3. **Keyword:** Update the search term in **Set Search Parameters**.
4. (Optional) Configure the Slack node or remove it.

## Pre-conditions
- n8n instance running with outbound internet access.
- Google account with access to the destination Sheet.
- ScrapeOps account + API key with sufficient quota.
- Basic familiarity with editing node parameters in n8n.

## Disclaimer
This template uses ScrapeOps as a community node. You are responsible for complying with Walmart's Terms of Use, robots directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render/scroll/wait settings and parsers as needed. Use responsibly for legitimate business purposes.
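Step 5's "validate & format" pass might look like the sketch below. Parsing a price string such as "$12.99" and the exact parser field names are assumptions about the ScrapeOps Parser output.

```javascript
// Normalize parsed products into sheet-ready rows and drop empties/bad prices.
function formatRows(products) {
  return products
    .map((p) => ({
      title: (p.title || "").trim(),
      price: Number(String(p.price ?? "").replace(/[^0-9.]/g, "")), // "$12.99" -> 12.99
      rating: Number(p.rating) || 0,
      reviews: Number(p.reviews) || 0,
      url: p.url || "",
      sponsored: Boolean(p.sponsored),
    }))
    .filter((r) => r.title !== "" && r.price > 0);
}
```

Coercing every field up front means the Sheets append always receives the same column types, which keeps downstream formulas and charts from breaking on a bad scrape.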
by keisha kalra
## Try It Out!
This n8n template helps you analyze Google Maps reviews for a list of restaurants, summarize them with AI, and identify optimization opportunities—all in one automated workflow. Whether you're managing multiple locations, helping local restaurants improve their digital presence, or conducting a competitor analysis, this workflow helps you extract insights from dozens of reviews in minutes.

## How It Works
1. Start with a pre-filled list of restaurants in Google Sheets.
2. The workflow uses SerpAPI to scrape Google Maps reviews for each listing.
3. Reviews with content are passed to ChatGPT for summarization.
4. Empty or failed reviews are logged in a separate tab for easy follow-up.
5. Results are stored back in your Google Sheet for analysis or sharing.

## How To Use
- Customize the input list in Google Sheets with your own restaurants.
- Update the OpenAI prompt if you want a different style of summary.
- You can trigger this manually or swap in a schedule, webhook, or other event.

## Requirements
- A SerpAPI account to fetch reviews
- An OpenAI account for ChatGPT summarization
- Access to Google Sheets and n8n

## Who Is It For?
This is helpful for people looking to analyze a large batch of Google reviews in a short amount of time. It can also be used to compare restaurants and see where each can be optimized.

## How To Set Up
Use a SerpAPI endpoint in the HTTP Request node. Refer to the n8n documentation for more help: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/

Happy Automating!
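The SerpAPI call in the HTTP Request node can be sketched as a URL builder. SerpAPI's Google Maps Reviews engine does use the engine, data_id, hl, and api_key parameters; the helper itself and its defaults are illustrative.

```javascript
// Build a SerpAPI Google Maps Reviews request URL for one restaurant.
// `dataId` identifies the place (taken from a prior Google Maps search result).
function buildReviewsUrl({ apiKey, dataId, language = "en" }) {
  const params = new URLSearchParams({
    engine: "google_maps_reviews",
    data_id: dataId,
    hl: language,
    api_key: apiKey,
  });
  return `https://serpapi.com/search.json?${params}`;
}
```

Each row of the restaurant sheet would feed one such URL into the HTTP Request node; the returned reviews array then goes to the ChatGPT summarization step.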
by David Ashby
Complete MCP server exposing 27 Amazon CloudWatch Application Insights API operations to AI agents.

## ⚡ Quick Setup
Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Amazon CloudWatch Application Insights credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the Amazon CloudWatch Application Insights API into an MCP-compatible interface for AI agents.

• **MCP Trigger**: Serves as your server endpoint for AI agent requests
• **HTTP Request Nodes**: Handle API calls to https://applicationinsights.{region}.amazonaws.com
• **AI Expressions**: Automatically populate parameters via $fromAI() placeholders
• **Native Integration**: Returns responses directly to the AI agent

## 📋 Available Operations (27 total)
Each operation is a POST request whose X-Amz-Target header selects the action:

• **CreateApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.CreateApplication): Adds an application that is created from a resource group.
• **CreateComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.CreateComponent): Creates a custom component by grouping similar standalone instances to monitor.
• **CreateLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.CreateLogPattern): Adds a log pattern to a LogPatternSet.
• **DeleteApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteApplication): Removes the specified application from monitoring. Does not delete the application.
• **DeleteComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteComponent): Ungroups a custom component.
When you ungroup custom components, all applicable monitors that are set up for the component are removed and the instances revert to their standalone status.
• **DeleteLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteLogPattern): Removes the specified log pattern from a LogPatternSet.
• **DescribeApplication** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeApplication): Describes the application.
• **DescribeComponent** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponent): Describes a component and lists the resources that are grouped together in a component.
• **DescribeComponentConfiguration** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfiguration): Describes the monitoring configuration of the component.
• **DescribeComponentConfigurationRecommendation** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfigurationRecommendation): Describes the recommended monitoring configuration of the component.
• **DescribeLogPattern** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeLogPattern): Describes a specific log pattern from a LogPatternSet.
• **DescribeObservation** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeObservation): Describes an anomaly or error with the application.
• **DescribeProblem** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblem): Describes an application problem.
• **DescribeProblemObservations** (POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblemObservations): Describes the anomalies or errors associated with the problem.
• **ListApplications** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListApplications): Lists the IDs of the applications that you are monitoring.
• **ListComponents** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListComponents): Lists the auto-grouped, standalone, and custom components of the application.
• **ListConfigurationHistory** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListConfigurationHistory): Lists the INFO, WARN, and ERROR events for periodic configuration updates performed by Application Insights. Examples of events represented are: INFO, creating a new alarm or updating an alarm threshold; WARN, alarm not created due to insufficient data points used to predict thresholds; ERROR, alarm not created due to permission errors or exceeding quotas.
• **ListLogPatternSets** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatternSets): Lists the log pattern sets in the specific application.
• **ListLogPatterns** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatterns): Lists the log patterns in the specific LogPatternSet.
• **ListProblems** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListProblems): Lists the problems with your application.
• **ListTagsForResource** (POST /#X-Amz-Target=EC2WindowsBarleyService.ListTagsForResource): Retrieves a list of the tags (keys and values) that are associated with a specified application. A tag is a label that you optionally define and associate with an application. Each tag consists of a required tag key and an optional associated tag value. A tag key is a general label that acts as a category for more specific tag values; a tag value acts as a descriptor within a tag key.
🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Tagresource (1 endpoint)
• POST /#X-Amz-Target=EC2WindowsBarleyService.TagResource: Adds one or more tags (keys and values) to a specified application. Tags can help you categorize and manage applications in different ways, such as by purpose, owner, environment, or other criteria. Each tag consists of a required tag key and an associated tag value, both of which you define.

🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Untagresource (1 endpoint)
• POST /#X-Amz-Target=EC2WindowsBarleyService.UntagResource: Removes one or more tags (keys and values) from a specified application.

🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updateapplication (1 endpoint)
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateApplication: Updates the application.

🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updatecomponent (1 endpoint)
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponent: Updates the custom component name and/or the list of resources that make up the component.

🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updatecomponentconfiguration (1 endpoint)
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponentConfiguration: Updates the monitoring configurations for the component. The configuration input parameter is escaped JSON and should match the schema returned by DescribeComponentConfigurationRecommendation.

🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updatelogpattern (1 endpoint)
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateLogPattern: Adds a log pattern to a LogPatternSet.
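Every action in the catalog above shares the same HTTP shape: a POST to a single service endpoint, where the X-Amz-Target header (not the URL path) selects the action to invoke. A minimal sketch of how such a request is assembled is shown below; the ResourceGroupName value is illustrative, and real calls also require AWS SigV4 signing, which the workflow's HTTP Request nodes handle via n8n credentials:

```python
import json

# The AWS JSON 1.1 protocol routes every action through one POST endpoint;
# the X-Amz-Target header names the action to invoke.
def build_request(action: str, payload: dict) -> tuple[dict, str]:
    headers = {
        "Content-Type": "application/x-amz-json-1.1",
        "X-Amz-Target": f"EC2WindowsBarleyService.{action}",
    }
    return headers, json.dumps(payload)

# Example: describe an application by resource group name (illustrative value).
headers, body = build_request(
    "DescribeApplication", {"ResourceGroupName": "my-resource-group"}
)
print(headers["X-Amz-Target"])  # EC2WindowsBarleyService.DescribeApplication
```

This is why every endpoint listed above is a POST to a `/#X-Amz-Target=...` path: the generated n8n nodes encode the target header into the operation name.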
🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Amazon CloudWatch Application Insights API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Ramdoni
🚀 Convert PDF to MCQ Question Bank in Excel with AI (Gemini)
Convert any PDF into a structured Multiple Choice Question (MCQ) bank with answer keys and explanations, fully automated using n8n and Google Gemini.

🎯 Use Case
• Teachers & Educators
• Trainers & Course Creators
• HR & Corporate Training Teams
• EdTech Builders

⚙️ Features
• Webhook-based PDF upload
• File validation (max 5MB + file existence check)
• PDF text extraction
• Automatic text chunking
• AI-generated MCQs using Google Gemini
• Clean JSON parsing
• Export to Excel (.xlsx)
• Upload to Google Drive
• Send results via Email & Telegram

🔄 Workflow Steps
1. Webhook Trigger
2. Validate File
3. Conditional Check (Success / Failed)
4. Extract Text from PDF
5. Chunk Text
6. Generate MCQs (Google Gemini)
7. Clean JSON Output
8. Convert to Excel
9. Upload to Google Drive
10. Send via Telegram
11. Send via Email

🔌 Requirements
• n8n (self-hosted or cloud)
• Google Gemini API key
• Google Drive credentials
• Email (SMTP / Gmail)
• Telegram Bot Token

📥 How to Use
Send a POST request to the webhook with form-data:
• file → PDF file
• email → (optional)
• telegram_chat_id → (optional)

📤 Output
• Excel file with MCQs
• Stored in Google Drive
• Delivered via Email / Telegram

⚠️ Notes
• Max file size: 5MB
• Works best with text-based PDFs
• AI output quality depends on input clarity

💡 Ideas for Improvement
• Difficulty level classification
• Multi-language support
• LMS export (Moodle, Google Forms)
• Bulk PDF processing

💰 Commercial Use
This workflow can be used for:
• Internal automation
• Client services
• Digital product monetization

❤️ Support
If you find this useful, upvote on n8n and share it with others.
by Ian Kerins
Overview
This n8n template automates a weekly Reddit industry digest without the Reddit API. It scrapes top posts from selected subreddits via ScrapeOps Proxy, enriches them with full post text, deduplicates against Google Sheets, and generates a weekly summary that can optionally be emailed to your inbox.

Who is this for?
• Developers and product teams monitoring industry trends on Reddit
• Marketers and founders tracking niche community conversations
• Analysts building automated weekly briefings from Reddit

What problem does it solve?
Manually checking multiple subreddits every week is time-consuming. This workflow runs automatically, scrapes top posts, removes duplicates, and delivers a clean weekly digest to Google Sheets and, optionally, your email.

How it works
1. A weekly schedule triggers the workflow automatically.
2. ScrapeOps Proxy scrapes "Top of Week" from each subreddit on old.reddit.com.
3. Post metadata is parsed from the HTML: title, URL, score, comments, author, timestamps.
4. Each post is fetched as JSON to extract the full selftext body.
5. Data is merged, normalized, and deduplicated against existing Sheet rows.
6. New posts are appended to the posts tab.
7. A weekly digest is written to weekly_digest and optionally emailed.

Set up steps (~10–15 minutes)
1. Register for a free ScrapeOps API key: https://scrapeops.io/app/register/n8n
2. Add ScrapeOps credentials in n8n. Docs: https://scrapeops.io/docs/n8n/overview/
3. Duplicate this sheet to copy the columns and Spreadsheet ID.
4. Connect Google Sheets and set your Spreadsheet ID in the Read, Append, and Digest nodes.
5. Update your subreddit list and week range in Configure Subreddits & Week Range.
6. Optional: configure the Send Email node with sender and recipient credentials.
7. Run once manually to confirm, then activate.

Pre-conditions
• Active ScrapeOps account (free tier)
• ScrapeOps community node installed in n8n
• Google Sheets credentials configured in n8n
• A Google Sheet with posts and weekly_digest tabs and correct column headers
• Optional: email credentials for the Send Email node

Disclaimer
This template uses ScrapeOps as a community node. You are responsible for complying with Reddit's Terms of Use, robots.txt directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render, scroll, and wait settings and parsers as needed. Use responsibly and only for legitimate business purposes.
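The scrape-and-deduplicate steps above can be sketched in plain Python. Assumptions to note: the ScrapeOps proxy URL shape follows their documented proxy API, the "Top of Week" path is old.reddit.com's standard `?t=week` listing, and keying the dedup on post URL is one reasonable choice (the workflow could equally key on Reddit's post ID):

```python
from urllib.parse import urlencode

SCRAPEOPS_KEY = "YOUR_API_KEY"  # placeholder; from your ScrapeOps account

def top_of_week_url(subreddit: str) -> str:
    """'Top of Week' listing on old.reddit.com, routed through the ScrapeOps proxy."""
    target = f"https://old.reddit.com/r/{subreddit}/top/?t=week"
    return "https://proxy.scrapeops.io/v1/?" + urlencode(
        {"api_key": SCRAPEOPS_KEY, "url": target}
    )

def dedupe(new_posts: list[dict], existing_rows: list[dict]) -> list[dict]:
    """Keep only posts whose URL is not already present in the sheet rows."""
    seen = {row["url"] for row in existing_rows}
    return [p for p in new_posts if p["url"] not in seen]

# Example: the already-stored post is dropped, the fresh one survives.
existing = [{"url": "https://old.reddit.com/r/python/comments/abc/post_a/"}]
fresh = [
    {"url": "https://old.reddit.com/r/python/comments/abc/post_a/", "title": "A"},
    {"url": "https://old.reddit.com/r/python/comments/def/post_b/", "title": "B"},
]
print([p["title"] for p in dedupe(fresh, existing)])  # ['B']
```

In the actual workflow these steps run in n8n nodes rather than scripts, but the logic is the same: build proxied listing URLs per subreddit, then filter new rows against what the Read node returns from the posts tab.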