by Jack
Automated real estate monitoring with MrScraper

**Who's it for**

Real estate investors, agents, proptech teams, and business intelligence professionals who need continuous property price monitoring with automated data collection and AI-ready outputs for reporting and decision-making.

**What it does**

This workflow automatically collects real estate listing data from Realtor, including key fields such as property price, listing title, and location. It is designed to capture the full set of available real estate data for the selected search criteria. The workflow consolidates the scraped data into a single structured dataset, removes inconsistencies, and formats the output into a clean CSV file. This makes it easy to import into spreadsheets, databases, or analytics tools for further analysis, reporting, or automation.

**How it works**

- **Accepts a real estate search URL from the same domain**, allowing the workflow to be reused across different locations and filters.
- **Collects and extracts configurable real estate fields** (such as price, title, location, and other listing details) based on your data requirements.
- **Automatically navigates through all related real estate result pages** to ensure the extracted dataset is complete.
- **Converts the final aggregated dataset into a clean CSV** file for easy use in analysis, reporting, or downstream automation (a minimal sketch of this step appears at the end of this description).

**How to set up**

1. **Set up your scraper** — Create two manual scrapers on the MrScraper Platform:
   - One scraper to collect the number of result pages
   - One scraper to extract detailed data from each listing URL
   This separation ensures the workflow can scale and be reused efficiently.
2. **Customize extracted data** — In the data extraction scraper, customize the fields according to your needs (for example: price, title, location, etc.). If you need help automating or configuring the extraction logic, you can contact the MrScraper team for assistance.
3. **Configure API credentials**
   - MrScraper: Generate your API token from Manual API Access or your profile page. This token allows n8n to trigger and retrieve data from your scrapers.
   - Gmail OAuth2: Required to send the extracted CSV results via email.
   - Google Drive OAuth2 (optional): Used to automatically upload the CSV output to Google Drive for storage and sharing.

**Requirements**

- MrScraper account to create and manage the scrapers
- Gmail account to receive the extracted results via email
- Google Drive account to store the extracted results (optional)

**How to customize the workflow**

- **Change scraper result** – Customize the data extraction results in the scraper you have created.
- **Output file** – Can be exported to CSV, JSON, XLSX, or other formats.
- **Integrate with other programs** – Add nodes that connect with the desired programs.
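To make the consolidation step concrete, here is a minimal sketch of how an n8n Code node could turn the aggregated listings into CSV text. The column names (`title`, `price`, `location`) are assumptions matching the example fields above; the actual columns depend on how you configure the MrScraper extraction scraper, and the template itself may use a built-in file-conversion node instead.

```javascript
// Minimal sketch: convert aggregated listing objects into CSV text.
// Column names are assumptions; align them with your scraper's output fields.
const listings = $input.all().map(item => item.json);
const columns = ['title', 'price', 'location'];

// Quote every value and escape embedded double quotes
const escapeCsv = (value) => `"${String(value ?? '').replace(/"/g, '""')}"`;

const header = columns.join(',');
const rows = listings.map(listing =>
  columns.map(col => escapeCsv(listing[col])).join(',')
);

return [{ json: { csv: [header, ...rows].join('\n') } }];
```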
by Anas Chahid Ksabi
**How it works**

This workflow runs every Monday at 8 AM and automatically monitors your Jira project, measures progress against the active sprint, and delivers a structured report to stakeholders — with zero manual effort.

- Fetches updated, created, completed, and due-soon issues from Jira for the past 7 days
- Aggregates KPIs: velocity, blocked issues, epic progress, team workload per assignee...
- Builds a styled HTML email + a PDF report attachment
- Builds a Markdown file
- Sends everything via Gmail to your configured recipients
- Stores all metrics in PostgreSQL for historical tracking and data visualization

**Set up steps**

Setup takes around 10–15 minutes.

1. Connect your Jira, Gmail, and PostgreSQL credentials in n8n
2. Open the CONFIGURATION NODE node and set your PROJECT_KEY, EMAIL_TO, and JIRA_BASE_URL
3. On first run, execute the CREATE TABLES node once to initialize the database schema
4. Activate the workflow — it will trigger automatically every Monday at 8 AM
5. For PDF generation, install the *PDFMunk* community node; it is a verified node by n8n
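The exact queries are configured inside the workflow, but the "past 7 days" buckets above correspond to JQL along these lines. This is only a sketch: `PROJECT_KEY` stands for the value set in the configuration node, and fields such as `duedate` and `statusCategory` are standard Jira fields that may need adjusting for your instance.

```javascript
// Sketch: JQL strings for the four issue buckets the weekly report covers.
// PROJECT_KEY comes from the configuration node; adjust fields to your Jira setup.
const projectKey = 'PROJECT_KEY';

const jql = {
  updated:   `project = ${projectKey} AND updated >= -7d`,
  created:   `project = ${projectKey} AND created >= -7d`,
  completed: `project = ${projectKey} AND statusCategory = Done AND resolved >= -7d`,
  dueSoon:   `project = ${projectKey} AND duedate >= now() AND duedate <= 7d`,
};

return [{ json: jql }];
```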
by Rajeet Nair
**Overview**

This workflow automates the resume screening process using AI, enabling faster and more consistent candidate evaluation. It analyzes uploaded resumes, scores candidates based on job fit, and automatically routes them into acceptance, rejection, or manual review flows. By combining AI scoring, decision logic, and automated communication, this workflow helps HR teams save time, reduce bias, and streamline hiring operations.

**How It Works**

1. **Webhook Trigger** — Receives resume uploads along with candidate details.
2. **Workflow Configuration** — Defines: target job role, acceptance threshold, borderline threshold.
3. **Resume Extraction** — Extracts text from uploaded PDF resumes.
4. **Data Preparation** — Structures candidate name, email, and resume content.
5. **AI Resume Scoring** — AI evaluates candidate based on: skills match (40%), experience (35%), education (15%), overall fit (10%). Returns: score (0–100), decision (accept / reject / borderline), reason.
6. **Decision Routing** — Accept → proceed to acceptance flow; Borderline → escalate for review; Reject → send rejection.
7. **Action Flows**
   - Accepted candidates: send acceptance email, log to Google Sheets
   - Rejected candidates: send rejection email, log to Google Sheets
   - Borderline candidates: notify HR via Slack, log as escalated in Google Sheets

**Setup Instructions**

- **Webhook Setup** — Configure the webhook endpoint (resume-upload) and connect it to your application or form
- **OpenAI Credentials** — Add API credentials for resume scoring
- **Gmail Integration** — Connect Gmail account for sending emails
- **Slack Setup** — Configure Slack credentials and set the HR review channel
- **Google Sheets** — Connect your Google account and create a sheet for candidate tracking
- **Customize Parameters** — Set job role and adjust: acceptance threshold (e.g., 75), borderline threshold (e.g., 50)

**Use Cases**

- Automated resume screening for HR teams
- High-volume hiring pipelines
- Startup hiring automation
- Candidate pre-filtering before interviews
- AI-assisted recruitment workflows

**Requirements**

- OpenAI API key
- Gmail account
- Slack workspace (optional)
- Google Sheets account
- n8n instance

**Key Features**

- AI-powered resume evaluation
- Structured scoring and decision output
- Automated candidate communication
- Slack-based human review escalation
- Centralized tracking in Google Sheets
- Configurable thresholds and job roles

**Summary**

A complete AI-driven hiring assistant that evaluates resumes, scores candidates, and automates decision-making workflows. It reduces manual screening effort while ensuring consistent and scalable recruitment processes.
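The template has the AI return the score and decision directly, but the weighting and threshold logic it describes works out roughly as follows. This is a sketch with assumed sub-scores; the thresholds shown are the configurable examples from the setup instructions, not fixed defaults.

```javascript
// Sketch of the weighted scoring and routing described above (0–100 scale).
// Sub-scores are illustrative; in the workflow they come from the AI evaluation.
const scores = { skills: 80, experience: 70, education: 90, overallFit: 60 };

const total =
  scores.skills * 0.40 +       // skills match (40%)
  scores.experience * 0.35 +   // experience (35%)
  scores.education * 0.15 +    // education (15%)
  scores.overallFit * 0.10;    // overall fit (10%)

const acceptanceThreshold = 75;   // configurable
const borderlineThreshold = 50;   // configurable

const decision =
  total >= acceptanceThreshold ? 'accept' :
  total >= borderlineThreshold ? 'borderline' : 'reject';

return [{ json: { score: Math.round(total), decision } }];
```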
by Cheng Siong Chin
**How It Works**

This workflow automates end-to-end carbon emissions monitoring, strategy optimisation, and ESG reporting using a multi-agent AI supervisor architecture in n8n. Designed for sustainability managers, ESG teams, and operations leads, it eliminates the manual effort of tracking emissions, evaluating reduction strategies, and producing compliance reports.

Data enters via scheduled pulls and real-time webhooks, then merges into a unified feed processed by a Carbon Supervisor Agent. Sub-agents handle monitoring, optimisation, policy enforcement, and ESG reporting. Approved strategies are auto-executed or routed for human sign-off. Outputs are consolidated and pushed to Slack, Google Sheets, and email, keeping all stakeholders informed. The workflow closes the loop from raw sensor data to actionable ESG dashboards with minimal human intervention.

**Setup Steps**

1. Connect scheduled trigger and webhook nodes to your emissions data sources.
2. Add credentials for Slack (bot token), Gmail (OAuth2), and Google Sheets (service account).
3. Configure the Carbon Supervisor Agent with your preferred LLM (OpenAI or compatible).
4. Set approval thresholds in the Check Approval Required node.
5. Map Google Sheets document ID for ESG report and KPI dashboard nodes.

**Prerequisites**

- OpenAI or compatible LLM API key
- Slack bot token
- Gmail OAuth2 credentials
- Google Sheets service account

**Use Cases**

- Corporate sustainability teams automating monthly ESG reporting

**Customisation**

- Swap LLM models per agent for cost or accuracy trade-offs

**Benefits**

- Eliminates manual emissions data aggregation and report generation
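The Check Approval Required node is only named in the setup steps; a minimal sketch of the kind of threshold gate it implies is shown below. The field names (`costUsd`, `estimatedReductionTons`) and the threshold values are assumptions for illustration only.

```javascript
// Sketch: decide whether a proposed reduction strategy can be auto-executed
// or must be routed for human sign-off. Field names and limits are illustrative.
const strategy = $json;

const MAX_AUTO_COST_USD = 10000;     // anything more expensive needs sign-off
const MIN_REDUCTION_TONS = 5;        // ignore negligible strategies

const needsApproval =
  strategy.costUsd > MAX_AUTO_COST_USD ||
  strategy.estimatedReductionTons < MIN_REDUCTION_TONS;

return [{ json: { ...strategy, needsApproval } }];
```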
by BHSoft
Customer Visit Notification

This workflow monitors Google Calendar for events indicating that a customer will visit the company today or the next day, retrieves the required details, and sends reminder notifications to the relevant stakeholders. It also posts a company-wide announcement to ensure proper preparation and a professional reception for the customer.

📌 Who is this for?

- Reception / Administration team
- Sales / Account owners in charge of customers
- Management / Related team leaders
- Security / IT / Logistics (for meeting room, equipment, and check-in preparation)

📌 The problem

- Customer visit information is usually shared manually and can be easily missed.
- Related staff are not informed in time to prepare.
- This causes last-minute preparation for reception, meeting rooms, documents, and support.
- It affects the customer experience and the company’s professional image.

📌 How it works

- When a customer meeting is scheduled, the system records the information (time, customer name, company, person in charge).
- The system automatically sends notifications to related groups based on timeline:
  - Notify the whole office 1 hour before the visit.
  - Notify related members 24 hours in advance.
- Notifications can be sent via Email / Slack / Internal chat.

📌 Quick setup

Required information:

- n8n Version 2.4.6
- Google Calendar OAuth2 API: Client ID, Client Secret
- Google Sheets OAuth2 API: Client ID, Client Secret
- Slack App: Bot User OAuth Token

Google Sheets will be used to log all notified events.

📌 Results

- Everyone is aware of the customer visit schedule.
- Teams can proactively prepare meeting rooms, documents, and manpower.
- Reduce mistakes and missed communication.
- Improve customer experience and company professionalism.

📌 Take it further

- Customer check-in using QR code / Visitor form
- Send reminders to prepare documents
- Store visit history
- Monthly/quarterly reports for number of customer visits

📌 Need help customizing?

Contact me for consulting and support: Linkedin / Website
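As a rough illustration of the timeline logic above, a Code node could compute how far away each calendar event is and pick the audience to notify. The input field name (`start`) and the exact window checks are assumptions; the template may implement this with built-in scheduling and filter nodes instead.

```javascript
// Sketch: choose which reminder to send based on time until the visit.
// Assumes the event start time is available as an ISO date string.
const eventStart = new Date($json.start);
const hoursUntilVisit = (eventStart - new Date()) / (1000 * 60 * 60);

let audience = null;
if (hoursUntilVisit > 0 && hoursUntilVisit <= 1) {
  audience = 'whole-office';      // 1 hour before the visit
} else if (hoursUntilVisit > 23 && hoursUntilVisit <= 24) {
  audience = 'related-members';   // 24 hours in advance
}

return [{ json: { ...$json, hoursUntilVisit, audience } }];
```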
by Rahul Joshi
📊 Description

Stop finding out you're out of stock after a customer already tried to buy. This workflow monitors your entire product inventory daily, calculates how fast each SKU is selling, and automatically raises purchase orders to your supplier before you hit zero — all without you opening a spreadsheet. Built for D2C brands and ops teams who are tired of manual stock checks, surprise stockouts, and dead inventory eating up cash. The AI layer adds demand forecasting and tells you exactly what to bundle, discount, or kill every week.

**What This Workflow Does**

- ⏰ Triggers every morning at 8AM to fetch fresh product and stock data automatically
- 🧮 Calculates sales velocity per SKU using a 7-day rolling average from your Sales Log
- 🚦 Flags every SKU as 🔴 Stockout Risk, 🟡 Dead Stock, or 🟢 Healthy and updates Google Sheets
- 📧 Sends instant email alerts when a SKU is about to run out or has been sitting unsold
- 📋 Automatically calculates reorder quantity using lead time and buffer stock formula
- 🏭 Emails a formatted Purchase Order directly to your supplier — no manual drafting
- 🤖 Sends all dead/slow SKUs to GPT-4o every Sunday for actionable recommendations — bundle, discount, or kill
- 📊 Delivers a 30-day demand forecast every Sunday based on 90 days of sales history and seasonal context

**Key Benefits**

- ✅ Never get caught off guard by a stockout again
- ✅ POs go to suppliers automatically — zero manual work
- ✅ AI tells you what to do with slow-moving inventory every week
- ✅ 30-day demand forecast keeps you buying ahead, not reacting
- ✅ Everything logged in Google Sheets — full visibility at all times
- ✅ Works with any product catalog — just plug in your Sheet

**How It Works**

The workflow runs in 5 stages, each timed 5 minutes apart every morning so data flows cleanly from one stage to the next.

**Stage 1 — Data Sync (8:00 AM daily)**
Pulls product data from DummyJSON API (swap with your Shopify or WooCommerce endpoint when going live), simulates daily sales per SKU, and writes everything into your Google Sheets inventory log.

**Stage 2 — Stock Health Check (8:05 AM daily)**
Reads the last 7 days of sales, calculates average daily velocity per SKU, and assigns a flag. Anything running out in under 14 days gets flagged 🔴. Anything with zero sales in 7 days gets flagged 🟡. Everything else is 🟢. Flags are written back to your sheet and email alerts fire immediately for anything critical.

**Stage 3 — Auto Reorder Engine (8:10 AM daily)**
Picks up every 🔴 SKU and runs it through a reorder formula — (avg daily sales × lead time) + 7-day buffer stock (see the sketch after the stage list). A properly formatted Purchase Order email goes straight to your supplier. Every PO is logged in the PO Tracker sheet with status, quantity, and date.

**Stage 4 — Dead Inventory AI Digest (Every Sunday 9:00 AM)**
All 🟡 SKUs get sent to GPT-4o with your stock levels and sales data. The AI comes back with a recommendation for each one — bundle it, run a discount (with exact percentage), or kill the SKU entirely. You get a clean email grouped by urgency: high, medium, low.

**Stage 5 — 30-Day Demand Forecast (Every Sunday 9:30 AM)**
Aggregates the last 90 days of sales per SKU and sends it to GPT-4o with the current month for seasonality context. GPT predicts how many units you'll need in the next 30 days, calculates stock gap, and tells you what to reorder now, stock up on, or reduce. Delivered as a clean email report every Sunday morning.
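Here is a minimal sketch of the Stage 2 flagging rules and the Stage 3 reorder formula described above. The field names (`last7DaysUnits`, `currentStock`, `leadTimeDays`) and the fallback lead time are assumptions; in the workflow these values come from the Inventory Master and Sales Log sheets.

```javascript
// Sketch: 7-day velocity, health flag, and reorder quantity =
// (avg daily sales × lead time) + 7-day buffer stock.
// Field names and the fallback lead time are illustrative assumptions.
const sku = $json;

const avgDailySales = sku.last7DaysUnits / 7;        // 7-day rolling velocity
const daysOfStockLeft = avgDailySales > 0 ? sku.currentStock / avgDailySales : Infinity;

const flag =
  avgDailySales === 0 ? '🟡 Dead Stock' :
  daysOfStockLeft < 14 ? '🔴 Stockout Risk' : '🟢 Healthy';

const leadTimeDays = sku.leadTimeDays ?? 10;         // supplier lead time from the sheet
const bufferDays = 7;
const reorderQty = flag === '🔴 Stockout Risk'
  ? Math.ceil(avgDailySales * (leadTimeDays + bufferDays))
  : 0;

return [{ json: { sku: sku.sku, flag, daysOfStockLeft, reorderQty } }];
```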
**Features**

- Cron-based daily and weekly automation triggers
- Sales velocity calculation with 7-day and 90-day rolling windows
- 3-tier SKU health flagging system (🔴 🟡 🟢)
- Reorder quantity formula with lead time + buffer stock logic
- Automated PO generation and supplier email dispatch
- GPT-4o dead inventory strategist with discount recommendations
- GPT-4o 30-day demand forecasting with seasonality awareness
- Full Google Sheets logging across Inventory Master, Sales Log, and PO Tracker
- Gmail-based alerting and weekly digest reports
- Modular 5-stage architecture — easy to swap data sources

**Requirements**

- OpenAI API key (GPT-4o access)
- Google Sheets OAuth2 connection
- Gmail OAuth2 connection
- A configured Google Sheet with 3 sheets: Inventory Master, Sales Log, PO Tracker
- DummyJSON API — free, no key needed (replace with Shopify/WooCommerce for production)

**Setup Steps**

1. Copy the Google Sheet template and grab your Sheet ID from the URL
2. Paste the Sheet ID into all Google Sheets nodes (there are 8 of them)
3. Connect your Google Sheets OAuth2 credentials
4. Connect your Gmail OAuth2 credentials
5. Add your OpenAI API key to both GPT-4o nodes
6. Replace the email address in all Gmail nodes with your own
7. Activate all triggers — the system runs itself from here

**Target Audience**

- 🛒 D2C brand owners managing inventory without a dedicated ops team
- 📦 E-commerce operators tired of manual stock checks and surprise stockouts
- 💼 Operations managers who want a real-time view of inventory health
- 🤖 Automation agencies building supply chain solutions for e-commerce clients
by Avkash Kakdiya
**How it works**

This workflow monitors new orders from a Postgres database and sends a confirmation email instantly. It then waits until the expected delivery time and continuously checks the delivery status. Once delivered, it uses AI to generate product usage tips and emails them to the customer. After two weeks, it sends personalized complementary product recommendations to drive repeat purchases.

**Step-by-step**

**Trigger new orders from database**
- Schedule Trigger – Runs every 2 minutes to check for new orders.
- Execute a SQL query – Fetches recently created orders from Postgres.
- Order Placed Ack. – Sends an order confirmation email via Gmail.

**Track delivery status dynamically**
- Wait until product get deliver – Pauses workflow until estimated delivery time.
- Select rows from a table – Retrieves latest order status.
- If – Checks whether the order is delivered.
- Wait for a day – Rechecks daily until delivery is confirmed.

**Send AI-powered product usage tips**
- Get Product Usage Tips – Uses AI agent to generate helpful tips.
- Groq Chat Model – Provides LLM capability for content generation.
- Format AI response – Converts AI output into clean HTML list format (a sketch appears at the end of this description).
- Send Tips to User – Emails tips to the customer.

**Upsell complementary products after delay**
- Wait for 2 weeks – Delays follow-up communication.
- Get Complementary Products – Generates related product suggestions.
- Groq Chat Model1 – Powers recommendation generation.
- Code in JavaScript – Formats recommendations into HTML.
- Send Tips to User1 – Sends upsell email with recommendations.

**Why use this?**

- Automates full post-purchase lifecycle without manual intervention
- Improves customer experience with timely and helpful communication
- Increases repeat purchases through personalized upsell emails
- Reduces support queries by proactively sending usage guidance
- Scales easily for growing eCommerce operations
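The two HTML-formatting steps above ("Format AI response" and "Code in JavaScript") follow the same idea: take the model's plain-text output and wrap it in an HTML list for the Gmail node. Below is a minimal sketch, assuming the AI returns one tip per line under an `output` field; the real nodes may parse a different structure.

```javascript
// Sketch: convert the AI's plain-text tips into an HTML list for the email body.
// Assumes one tip per line in $json.output; adjust the parsing to your agent's output.
const raw = String($json.output ?? '');

const tips = raw
  .split('\n')
  .map(line => line.replace(/^[\s\-*\d.]+/, '').trim())  // strip bullet/number prefixes
  .filter(line => line.length > 0);

const html = `<ul>${tips.map(tip => `<li>${tip}</li>`).join('')}</ul>`;

return [{ json: { html } }];
```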
by Mushood Hanif
How can you find your target market if you don't know what your product is? This simple philosophy changes the way we think about automated sales agents. Context changes everything.

In this 4-part workflow, we start by creating a knowledge base that will act as context across the workflow. This context guides our AI Agents throughout the workflow, helping them locate better leads and perform market research based on what the product actually offers.

**Use Case**: Lead generation for Product-based Sales

**Tech Required**

- **Neon DB**: For storing Research and Lead Data. You can use Google Sheets, but it has a rate-limiting problem.
- **Google Serper**: As a web search tool for our AI.
- **Google Drive**: For storing our knowledge base documents.
- **Pinecone**: Vector DB for converting our knowledge base into context for AI.
- **Hunter.io**: For finding emails for outreach.
- **Email Client**: An email client, maybe Gmail or anything that can send an email on your behalf.
- **Gemini**: Our trusty AI LLM.

**Good to know**

All of the tools that I use in this workflow are either free or have an extremely generous free tier.

**How it works**

1. We start by converting our knowledge base into context for AI: take the documents from Google Drive, convert them into embeddings, and store them in a vector store like Pinecone. This needs to be run only once, or whenever you have a new document in your knowledge base.
2. We then pass this context to an AI agent and tell it to generate search queries for locating companies that actually need my services.
3. Then, for each company we've located, we determine the company staff we need to reach out to for selling our product. This is done by a combination of Google Serper and Hunter.io.
4. Once we have the list of employees and their emails, we create personalized emails based on the data we've collected for each employee and send them outreach emails (a sketch of this assembly step follows below).
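As a rough illustration of the final step, here is how a Code node might assemble a personalized outreach email from the lead data gathered earlier. The field names (`companyName`, `contactName`, `painPoint`, `email`) are assumptions; in the actual workflow the email copy itself would typically be drafted by Gemini from the knowledge-base context.

```javascript
// Sketch: build a personalized outreach email from collected lead data.
// Field names are illustrative assumptions; adapt them to your lead records.
const lead = $json;

const subject = `Quick question about ${lead.companyName}`;
const body = [
  `Hi ${lead.contactName},`,
  '',
  `I noticed ${lead.companyName} ${lead.painPoint}.`,
  `Based on what we offer, I think there is a concrete way we could help.`,
  '',
  `Would you be open to a short call next week?`,
].join('\n');

return [{ json: { to: lead.email, subject, body } }];
```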
by Pratyush Kumar Jha
Smart Resume Screener — JD ↔ Resume AI Match & Sheet Logger

Smart Resume Screener ingests a candidate resume and a job description link, extracts clean text from both, runs an LLM-powered screening agent to produce a structured assessment (strengths, weaknesses, risk/reward, justification, and a 0–10 fit score), extracts contact details, and appends a single, validated row to a Google Sheet for tracking.

**How It Works (Step-by-Step)**

1. **Trigger — On Form Submission**
   Public form webhook sends:
   - Binary resume file (PDF / DOCX)
   - Job Description (JD) URL or text
2. **Extract & Fetch Content**
   - **Resume Extraction node** — Converts the uploaded binary resume into plain text (data.resume).
   - **HTTP Request node** — Fetches the JD HTML/text from the provided link.
   - **Job Description Extractor (LLM-driven)** — Parses the fetched content into structured JD fields: requirements, responsibilities, skills, seniority, etc.
3. **Prepare and Aggregate**
   - **Set Resume node** — Normalizes the resume into a clean JSON object.
   - **Merge/Aggregate node** — Builds a single payload containing: { "resume": "...", "job_description": "...", "meta": "..." }
4. **AI Evaluation**
   - **Recruiter Agent (LangChain node, powered by Google Gemini)** — Receives the aggregated payload and returns a strict JSON-formatted screening report including: candidate_strengths, candidate_weaknesses, risk, reward, overall_fit_rating (0–10 numeric), justification.
   - **Structured Output Parser** — Enforces the JSON schema and ensures predictable downstream data.
5. **Identity Extraction & Logging**
   - **Contact Info Extractor** — Extracts: name, email.
   - **Append to Google Sheets** — Writes: date, name, email, strengths, weaknesses, risk, reward, justification, overall fit score.
6. **(Optional) Notifications / Follow-Ups**
   - Add Slack / Email / Webhook nodes
   - Trigger alerts for high-fit candidates

**Quick Setup Guide**

- 👉 Demo & Setup Video
- 👉 Sheet Template
- 👉 Course

**Nodes of Interest You Can Edit**

- **Trigger — On Form Submission**: Change webhook URL; modify accepted form fields; add metadata capture (job_id, source)
- **Resume Extraction (Extract from File)**: Enable OCR fallback; adjust encoding/charset handling; replace with third-party resume parser
- **HTTP Request (Fetch Job Description)**: Configure timeouts; add retry policy; set headers; restrict allowed domains
- **Job Description Extractor (Information Extractor1)**: Modify extractor prompt/schema; add fields like must_have and nice_to_have
- **Set Resume (Prepare Resume)**: Strip headers/footers; normalize dates; split resume sections
- **Merge / Aggregate**: Modify payload structure; add context fields (job_id, recruiter_notes, source_platform)
- **Recruiter Agent (LangChain Agent)**: Edit system/user prompts; adjust model temperature; modify token limits; switch LLM provider
- **Structured Output Parser**: Update JSON schema; add fields like experience_years, certifications, notice_period
- **Contact Info Extractor**: Add phone, LinkedIn, location
- **Append to Google Sheets**: Modify column mapping; add fields like workflow_run_id, resume_link

**What You'll Need (Credentials)**

- Google Sheets API credentials (OAuth or Service Account)
- Google Drive / Storage credentials (if resumes are stored there)
- LLM provider credentials (e.g., Google Gemini API key/service account)
- (Optional) OCR / Vision API credentials for scanned PDFs
- (Optional) Email / Slack / Teams webhook or SMTP credentials
- Access to public JD URLs (or credentials if behind authentication)
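For reference, the screening report that the Recruiter Agent returns and the Structured Output Parser enforces would look roughly like the sketch below. The keys mirror the fields listed above; the values and any structure beyond a flat object are illustrative assumptions and depend on the schema you configure in the parser node.

```javascript
// Sketch of the screening report shape enforced by the Structured Output Parser.
// Keys follow the fields listed above; values are illustrative only.
const exampleReport = {
  candidate_strengths: ["5 years of backend Python experience", "Has led a small delivery team"],
  candidate_weaknesses: ["No exposure to the required cloud stack"],
  risk: "May need ramp-up time on unfamiliar infrastructure tooling",
  reward: "Strong delivery track record on comparable projects",
  overall_fit_rating: 7,   // 0–10 numeric fit score
  justification: "Meets most core requirements; remaining gaps look trainable."
};

return [{ json: exampleReport }];
```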
**Recommended Settings & Best Practices**

- **LLM temperature:** 0.0–0.3 for consistent output
- **Max tokens:** 800–1200 for justification (with enforced limits)
- **Strict JSON schema:** Fail fast on invalid structure
- **Retries & timeouts:** ~10 s HTTP timeout, 2 retries with exponential backoff
- **Rate limiting:** Protect LLM quotas
- **Deduplication:** Check existing email or resume hash
- **Least privilege:** Scope Google service account to target sheet only
- **PII handling:** Limit exposed fields; encrypt sensitive data if needed
- **Schema versioning:** Add schema_version column
- **Error logging:** Use Catch node with workflow_run_id
- **Human review gate:** Route borderline scores (6–7) for manual review

**Customization Ideas**

- Conditional alerts (overall_fit_rating >= 8)
- Multi-model scoring (Gemini + alternative model)
- Automated outreach emails
- ATS integration (Greenhouse, Lever, etc.)
- JD template library
- Multi-language resume routing
- Skill-level mapping (e.g., python: 4/5)
- Candidate scoring dashboard
- Resume storage with secure links

**Troubleshooting — Quick Tips**

- **Resume Extraction Issues**: Validate binary input; enable OCR for scanned PDFs; check encoding and file type
- **JD Fetch Failure**: Validate URL reachability; add headers (User-Agent); increase timeout; provide auth if needed
- **LLM JSON Errors**: Lower temperature (0–0.2); enforce strict JSON prompt; add retry with "fix-json" prompt; inspect raw LLM output
- **Google Sheets Append Fails**: Check credential expiry; confirm sheet ID and gid; validate column mapping; monitor API quota
- **Duplicate Rows**: Add email-based dedupe logic; hash resume content
- **PII Exposure**: Audit sheet sharing settings; use restricted service accounts

**Tags / Suggested Listing Fields**

recruiting, resume-parser, ai-screening, langchain, google-gemini, google-sheets, n8n, ats-integration, pii-sensitive, automation
by Oneclick AI Squad
Automatically discovers trending topics in your niche and generates ready-to-use content ideas with AI.

🎯 How It Works

1. **Multi-Source Trend Monitoring**
   - Twitter/X trending topics and hashtags
   - Reddit hot posts from niche subreddits
   - Google Trends daily search trends
   - Runs every 2 hours for fresh opportunities
2. **Smart Filtering & Scoring**
   - Filters by your niche keywords
   - Removes duplicates across sources
   - Calculates viral potential score (0–100)
   - Ranks by engagement, recency, and relevance
   - Prevents suggesting already-covered topics
3. **AI Content Generation**
   - Uses Claude AI to analyze each trend
   - Generates 5 unique content ideas per trend
   - Provides hooks, key points, and platform recommendations
   - Explains why each idea has viral potential
4. **Comprehensive Delivery**
   - Beautiful HTML email digest with all opportunities
   - Slack summary for quick review
   - Database logging for tracking
   - Research links for deeper investigation

⚙️ Configuration Guide

**Step 1: Configure Your Niche**

Edit the "Load Niche Config" node:

```
niche: 'AI & Technology',                  // Your industry
keywords: [                                // Topics to track
  'artificial intelligence',
  'machine learning',
  'AI tools',
  // Add your keywords
],
subreddits: 'artificial+machinelearning',  // Relevant subreddits
thresholds: {
  minTwitterLikes: 1000,                   // Minimum engagement
  minRedditUpvotes: 500,
  minComments: 50
}
```

**Step 2: Connect Data Sources**

Twitter/X API:
- Sign up for Twitter Developer Account
- Get API credentials (OAuth 2.0)
- Add credentials to "Fetch Twitter/X Trends" node

Reddit API:
- Create Reddit app: https://www.reddit.com/prefs/apps
- Get OAuth credentials
- Add credentials to "Fetch Reddit Hot Topics" node

Google Trends:
- No authentication needed (public API)
- Already configured in workflow

**Step 3: Configure AI Integration**

Anthropic Claude API:
- Get API key from: https://console.anthropic.com/
- Add credentials to "AI - Generate Content Ideas" node
- Alternative: Use OpenAI GPT-4 by modifying the node

**Step 4: Setup Notifications**

Email:
- Configure SMTP in "Send Email Digest" node
- Update recipient email address
- Customize HTML template if desired

Slack:
- Create incoming webhook: https://api.slack.com/messaging/webhooks
- Add webhook URL to "Send Slack Summary" node
- Customize channel name

**Step 5: Database (Optional)**

- Create PostgreSQL database with schema below
- Add credentials to "Log to Content Database" node
- Skip if you don't need database tracking

Database Schema:

```sql
CREATE TABLE content.viral_opportunities (
  id SERIAL PRIMARY KEY,
  opportunity_id VARCHAR(255) UNIQUE,
  detected_at TIMESTAMP,
  topic TEXT,
  source VARCHAR(50),
  source_url TEXT,
  engagement BIGINT,
  viral_score INTEGER,
  opportunity_level VARCHAR(20),
  niche VARCHAR(100),
  content_ideas JSONB,
  research_links JSONB,
  urgency TEXT,
  status VARCHAR(50),
  created_at TIMESTAMP DEFAULT NOW()
);
```

🎨 Customization Options

**Adjust Scan Frequency** — Edit the "Every 2 Hours" trigger:
- More frequent: every 1 hour
- Less frequent: every 4–6 hours
- Consider API rate limits

**Tune Viral Score Algorithm** — Edit the "Calculate Viral Potential Score" node (a sketch of this scoring appears after this section):
- Adjust engagement weight (currently 40%)
- Change recency importance (currently 30%)
- Modify threshold in "Filter High Potential Only" (currently 40)

**Customize Content Ideas** — Modify the AI prompt in "AI - Generate Content Ideas":
- Change number of ideas (currently 5)
- Add specific format requirements
- Include brand voice guidelines
- Target specific platforms
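To make the scoring tuning above concrete, here is a minimal sketch of a viral-potential calculation consistent with the documented weights (engagement 40%, recency 30%). The remaining 30% is assumed here to cover keyword relevance, and all field names, caps, and decay windows are illustrative rather than the template's exact logic.

```javascript
// Sketch: viral potential score (0–100) for a detected topic.
// Engagement 40% + recency 30% are the documented weights; the relevance
// share and all field names/caps are illustrative assumptions.
const topic = $json;

const engagementScore = Math.min(topic.engagement / 10000, 1) * 40;   // capped engagement
const hoursOld = (Date.now() - new Date(topic.detectedAt).getTime()) / 36e5;
const recencyScore = Math.max(0, 1 - hoursOld / 48) * 30;             // fades over 48 h
const relevanceScore = ((topic.matchedKeywords?.length ?? 0) > 1 ? 1 : 0.5) * 30;

const viralScore = Math.round(engagementScore + recencyScore + relevanceScore);

// 40 is the "Filter High Potential Only" threshold mentioned above
return [{ json: { ...topic, viralScore, highPotential: viralScore >= 40 } }];
```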
📊 Expected Results

Typical scan finds:
- **5–15 opportunities** per scan (2 hours)
- **3–5 HIGH priority** (score 75+)
- **25+ content ideas** generated
- **Email sent** with full digest
- **Slack alert** for quick review

💡 Pro Tips

- **Timing Matters**: Create content within 24–48 hours of detection
- **High Priority First**: Focus on opportunities scoring 75+
- **Platform Match**: Choose platforms where your audience is active
- **Add Your Voice**: Use AI ideas as starting points, not final copy
- **Track Performance**: Note which opportunity types perform best
- **Refine Keywords**: Regularly update your niche keywords based on results
- **Mix Formats**: Try different content formats for same trend

🚨 Important Notes

⚠️ API Rate Limits:
- Twitter: Monitor rate limits closely
- Reddit: 60 requests per minute
- Claude AI: Tier-based limits
- Consider caching results

💰 Cost Considerations:
- Twitter API: May require paid tier
- Reddit API: Free for reasonable use
- Claude AI: ~$0.50–1.00 per scan
- Total: ~$15–30/month estimated

🎯 Best Practices:
- Start with 1–2 sources, add more later
- Test with broader keywords initially
- Review first few reports to tune scoring
- Don't create content for every opportunity
- Quality over quantity

🔄 What Happens Next?

1. Workflow runs every 2 hours
2. Scans Twitter, Reddit, Google Trends
3. Filters by your keywords
4. Scores viral potential
5. Generates AI content ideas
6. Sends digest to email + Slack
7. Logs to database
8. Marks topics as suggested
9. Repeat!
by isaWOW
Automatically scan every PBN site in your Google Sheet, check if any removed client domain is still linked in the live HTML, and log all matches back into your tracking sheet — row by row, hands-free.

**What this workflow does**

Managing a network of PBN sites becomes risky the moment a client project gets offboarded. If the old link still exists on a PBN page, it can trigger a Google penalty, waste crawl budget, or simply point to a dead destination. Manually checking each PBN every time a project is removed is slow and error-prone.

This workflow solves that completely. It reads your full PBN list from Google Sheets, skips any site you have already checked, then loops through each remaining PBN one at a time. For every site, it fetches the live HTML using an HTTP request, pulls your current list of offboarded project domains from a separate sheet, and runs a domain-matching check directly against that HTML. If a match is found, the workflow writes the detected domain into the "Offboarded Links" column automatically. It then pauses briefly and moves to the next PBN — no manual work required at any stage.

Perfect for SEO agencies and link builders who manage multiple PBN networks and need a reliable, automatic way to keep their offboarding records up to date.

**Key features**

- **Automatic HTML scanning**: Fetches the live page of every PBN site in real time using HTTP requests, so you are always checking the current version of the page — not a cached or outdated copy.
- **Smart row filtering**: Skips all PBN rows that have already been processed. The workflow only picks up rows where the "Offboarded Links" column is still empty, saving time and avoiding duplicate checks.
- **Domain matching against offboarded projects**: Pulls the complete list of removed client domains from your "offboard projects" sheet and checks each one against the fetched HTML. If any domain appears anywhere in the page source, it is flagged immediately.
- **One-by-one loop with pause**: Processes each PBN individually with a short pause between each iteration. This keeps the workflow stable, avoids rate-limit issues, and makes it easy to monitor progress in real time.
- **Auto-update in Google Sheets**: Writes the matched domain directly into your PBNs tracking sheet as soon as it is detected. No copy-paste, no manual entry — your records stay current automatically.

**How it works**

1. **Trigger the workflow manually** — Click the execute button to start the scan whenever you need to run a check across your PBN network.
2. **Read all PBN sites from Google Sheets** — The workflow pulls every row from the "PBNs" sheet, which contains your site URLs, row numbers, and the "Offboarded Links" column.
3. **Filter out already-processed rows** — A code node scans through all rows and removes any row where "Offboarded Links" is already filled in. Only unprocessed PBNs move forward.
4. **Loop through each PBN one by one** — The remaining rows enter a loop. Each PBN is handled individually, one after the other, to keep the process clean and trackable.
5. **Fetch the live HTML of the PBN site** — An HTTP request node sends a GET call to the PBN's URL and retrieves the full page HTML. Retries are enabled in case of temporary failures.
6. **Read the offboarded project domain list** — A separate Google Sheets node pulls all domains from Column A of the "offboard projects" sheet. This is your master list of removed client websites.
7. **Match domains against the fetched HTML** — A code node compares every domain from the offboarded list against the HTML content. If any domain is found, it is returned as a match. If none are found, the result is set to zero. (A sketch of this matching logic appears after the use cases section.)
8. **Prepare the data for the sheet update** — A Set node organizes the matched domain, the row number, and the HTML into a clean payload that is ready to write back into Google Sheets.
9. **Write the matched domain into the PBNs sheet** — The workflow updates the correct row in the "PBNs" sheet by matching the row number and filling in the "Offboarded Links" column with the detected domain.
10. **Pause before the next iteration** — A short wait is added after each update. This gives the system time to settle and prevents any issues before the loop continues to the next PBN.

**Setup requirements**

Tools you will need:
- Active n8n instance (self-hosted or n8n Cloud)
- Google Sheets with OAuth access for reading and updating PBN data
- A "PBNs" sheet containing your site URLs and tracking columns
- A separate "offboard projects" sheet with removed client domains in Column A

Estimated setup time: 10–15 minutes

**Configuration steps**

1. **Add credentials in n8n:** Google Sheets OAuth API — used for reading the PBNs sheet and the offboard projects sheet, and for writing matched results back.
2. **Set up the PBNs sheet:** Create a Google Sheet with the following columns:
   - **Site URL** — The full domain of each PBN site (e.g., example.com)
   - **Offboarded Links** — Leave this blank initially. The workflow will auto-fill this column when a match is found.
   - **row_number** — A unique number for each row. This is used by the workflow to identify and update the correct row. You can add this manually or use a simple formula.
3. **Set up the offboard projects sheet:** Create a second sheet (or a new tab) named "offboard projects" with the following structure:
   - **Column A** — List every client domain that has been offboarded or removed. Include the full URL format (e.g., https://clientsite.com). The workflow reads this column directly.
4. **Update the Google Sheet URL in the workflow:**
   - Open the "Read PBN Sites from Sheet" node and paste your Google Sheet URL.
   - Open the "Read Offboarded Project Domains" node and make sure the sheet tab is set to your "offboard projects" sheet.
   - Open the "Write Matched Domain to PBNs Sheet" node and confirm it is pointing to the correct PBNs sheet and tab.
5. **Run the workflow:**
   - Click the manual trigger to start.
   - Watch the loop process each PBN one by one.
   - Check your PBNs sheet — any matched offboarded domains will appear in the "Offboarded Links" column automatically.

**Use cases**

- **SEO agencies**: Quickly audit the entire PBN network after a client project is offboarded. Instead of checking 50+ sites manually, run this workflow once and get a complete report in your Google Sheet within minutes.
- **Link builders**: Keep your PBN health records accurate at all times. Every time a project is removed, run this workflow to confirm whether any PBN still carries the old link before it causes issues.
- **Freelance SEO consultants**: Offer a professional offboarding audit as part of your service. This workflow handles the technical scanning, and you just need to review the results and take action on any flagged links.
- **In-house SEO teams**: Automate a task that used to take hours of manual checking. Run the scan on a regular schedule or whenever a project goes offline, and trust that your tracking sheet stays up to date without any extra effort.
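For reference, the domain-matching step (step 7 in "How it works") can be pictured as the sketch below. The node name, input field, and column handling are assumptions made for illustration; the template's own code node may read its inputs differently, but the core idea is plain text inclusion with the protocol and "www." prefix stripped.

```javascript
// Sketch of the domain-matching Code node (step 7 above).
// Checks whether any offboarded domain appears anywhere in the fetched HTML.
// Node and field names are assumptions; adjust them to your workflow.
const html = String($json.data ?? '');

const offboardedDomains = $('Read Offboarded Project Domains').all()
  .map(item => Object.values(item.json)[0])   // value from Column A
  .filter(Boolean);

const normalize = (value) =>
  String(value)
    .replace(/^https?:\/\//, '')   // drop protocol
    .replace(/^www\./, '')         // drop www. prefix
    .replace(/\/+$/, '');          // drop trailing slashes

const matches = offboardedDomains.map(normalize).filter(domain => html.includes(domain));

// 0 when nothing matched, mirroring the "result is set to zero" behaviour
return [{ json: { matchedDomain: matches[0] ?? 0, matchCount: matches.length } }];
```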
**Customization options**

- **Add more domains to check**: Simply add new rows to your "offboard projects" sheet in Column A. The workflow will automatically pick them up the next time it runs — no changes needed inside n8n.
- **Run on a schedule instead of manually**: Replace the manual trigger with a Schedule Trigger node. Set it to run daily, weekly, or at any interval that fits your workflow.
- **Add error notifications**: Connect a notification node (such as Slack, Email, or Discord) after the loop to alert you when the workflow finishes or when a match is detected. This way you do not need to check the sheet yourself.
- **Reset and re-scan**: If you need to re-check all PBN sites from scratch, simply clear the "Offboarded Links" column in your PBNs sheet. The filter node will treat all rows as unprocessed and scan them again on the next run.

**Troubleshooting**

- **HTTP request fails or returns no data**: Check that the PBN site URL in your sheet is correct and the site is currently live. The workflow has retry enabled, but if the site is down or blocked, the request will still fail. You can skip those rows manually or add an error-handling branch.
- **No matches found even though a link exists**: Make sure the domain in your "offboard projects" sheet matches exactly what appears in the PBN's HTML. For example, if the HTML contains "www.clientsite.com" but your sheet only has "clientsite.com," the match will still work because the code strips "www." However, double-check for any spelling differences or trailing slashes.
- **Offboarded Links column not updating**: Verify that the row_number in your PBNs sheet is correct and unique for each row. The update node uses row_number to find and write to the right row. If row_number is missing or duplicated, the update will fail silently.
- **Workflow stops mid-loop**: This can happen if Google Sheets returns a rate-limit error. The pause node helps reduce this, but if it still occurs, increase the wait time in the "Pause Before Next Iteration" node or reduce the batch size.
- **Filter skips rows that should be processed**: Check if the "Offboarded Links" column has any hidden spaces or formatting. Even an empty-looking cell with a space character will be treated as filled. Clear those cells completely and run the workflow again.

**Important notes**

- This workflow is designed to be run manually or on a schedule. It does not trigger automatically when a new project is offboarded — you need to initiate the scan yourself.
- The HTTP request fetches the live HTML of each PBN site. Make sure you have permission or ownership of the sites you are scanning to avoid any access issues.
- The domain matching is based on simple text inclusion in the HTML source. It will detect domains in links, text content, meta tags, or anywhere else they appear on the page. If you need stricter matching (e.g., only in anchor href tags), the code node can be updated accordingly.
- Keep your "offboard projects" sheet updated regularly. The workflow only checks domains that are listed there at the time of the scan.

**Support**

Need help or custom development?
📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
by Cheng Siong Chin
**How It Works**

This workflow automates comprehensive data validation and regulatory compliance reporting through intelligent AI-driven analysis. Designed for compliance officers, data governance teams, and regulatory affairs departments, it solves the critical challenge of ensuring data quality while generating audit-ready compliance documentation across multiple regulatory frameworks.

The system receives data through webhook triggers, performs multi-layered validation using AI models to detect anomalies and policy violations, and intelligently routes findings based on validation outcomes. It orchestrates parallel processing streams for content lookup, retention policy enforcement, and rejection handling. The workflow merges validation results, generates governance documentation, and manages compliance notifications through multiple channels. By automating action routing based on compliance status and maintaining detailed audit logs across validation, governance, and action streams, it ensures regulatory adherence while eliminating manual review bottlenecks.

**Setup Steps**

1. Configure the Data Ingestion Webhook trigger endpoint
2. Connect the Workflow Execution Configuration node with validation parameters
3. Set up the Fetch Validation Rules node with OpenAI/Nvidia API credentials for AI model access
4. Configure parallel AI model nodes with their respective API credentials
5. Connect the Route by Validation Status node with branching logic
6. Set up the Governance Documentation node with document template configurations
7. Configure the parallel action nodes

**Prerequisites**

- OpenAI/Nvidia/Anthropic API credentials for AI validation models

**Use Cases**

- Financial institutions ensuring transaction compliance monitoring

**Customization**

- Adjust AI model parameters for industry-specific compliance rules

**Benefits**

- Reduces compliance review time by 80%, eliminates manual validation errors
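The Route by Validation Status step is only named above; a minimal sketch of the kind of branching it implies is shown below. The status fields (`violations`, `anomalies`) and the route names are illustrative assumptions, since the actual routing is configured inside the workflow's routing node.

```javascript
// Sketch: route AI validation findings to the appropriate downstream stream.
// Field and route names are illustrative assumptions.
const result = $json;

let route;
if ((result.violations ?? []).length > 0) {
  route = 'rejection-handling';   // policy violations found
} else if ((result.anomalies ?? []).length > 0) {
  route = 'manual-review';        // anomalies need a human look
} else {
  route = 'approved';             // clean data continues to governance reporting
}

return [{ json: { ...result, route } }];
```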