by Omer Fayyaz
This n8n template implements a Calendly Booking Link Generator that creates single-use, personalized booking links, logs them to Google Sheets, and optionally notifies a Slack channel Who's it for This template is designed for teams and businesses that send Calendly links proactively and want to generate trackable, single-use booking links on demand. It’s perfect for: Sales and SDR teams** sending 1:1 outreach and needing unique booking links per prospect Customer success and support teams** who want prefilled, one-click rescheduling or follow-up links Marketing and growth teams** that want UTM-tagged booking links for campaigns Ops/RevOps** who need a central log of every generated link for tracking and reporting How it works / What it does This workflow turns a simple HTTP request into a fully configured single-use Calendly booking link: Webhook Trigger (POST) Receives JSON payload with recipient details: name, email, optional event_type_uri, optional utm_source Configuration & Input Normalization Set Configuration extracts and normalizes: recipient_name, recipient_email requested_event_type (can be empty) utm_source (defaults to "n8n" if not provided) Calendly API – User & Event Types Get Current User calls GET /users/me using Calendly OAuth2 to get the current user URI Extract User stores user_uri and user_name Get Event Types calls GET /event_types?user={user_uri}&active=true to fetch active event types Select Event Type: Uses requested_event_type if provided, otherwise selects the first active event type Stores event type URI, name, and duration (minutes) Create Calendly Single-Use Scheduling Link Create Single-Use Link calls POST /scheduling_links with: owner: selected event type URI owner_type: "EventType" max_event_count: 1 (single use) Build Personalized Booking URL Build Personalized Link: Reads the base booking_url from Calendly Appends query parameters to prefill: name (encoded) email (encoded) utm_source Stores: base_booking_url personalized_booking_url recipient_name, recipient_email event_type_name, event_duration link_created_at (ISO timestamp) Optional Logging and Notifications Log to Google Sheets (optional but preconfigured): Appends each generated link to a “Generated Links” sheet Columns: Recipient Name, Recipient Email, Event Type, Duration (min), Booking URL, Created At, Status Notify via Slack (optional): Posts a nicely formatted Slack message with: recipient name & email event name & duration clickable booking link API Response to Caller Respond to Webhook returns a structured JSON response: success booking_url (personalized) base_url recipient object event object (name + duration) created_at expires explanation ("Single-use or 90 days") The result is an API-style service you can call from any system to generate trackable, single-use Calendly links. How to set up 1. Calendly OAuth2 setup Go to calendly.com/integrations or developer.calendly.com Create an OAuth2 application (or use an existing one) In n8n, create Calendly OAuth2 credentials: Add client ID, client secret, and redirect URL as required by Calendly Connect your Calendly user account In the workflow, make sure all Calendly HTTP Request nodes use your Calendly OAuth2 credential 2. 
Webhook Trigger configuration Open the Webhook Trigger node Confirm: HTTP Method: POST Path: generate-calendly-link Response Mode: Response Node (points to Respond to Webhook) Copy the Production URL from the node once the workflow is active Use this URL as the endpoint for your CRM, outbound tool, or any system that needs to request links Expected request body: { "name": "John Doe", "email": "john@example.com", "event_type_uri": "optional", "utm_source": "optional" } If event_type_uri is not provided, the workflow automatically uses the first active event type for the current Calendly user. 3. Google Sheets setup (optional but recommended) Create a Google Sheet for tracking links Add a sheet/tab named e.g. “Generated Links” Set the header row to: Recipient Name, Recipient Email, Event Type, Duration (min), Booking URL, Created At, Status In n8n: Create Google Sheets OAuth2 credentials Open the Log to Google Sheets node Update: documentId → your spreadsheet ID sheetName → your tab name (e.g. “Generated Links”) 4. Slack notification setup (optional) Create a Slack app at api.slack.com Add Bot Token scopes (for basic posting): chat:write channels:read (or groups:read if posting to private channels) Install the app to your workspace and get the Bot User OAuth Token In n8n: Create a Slack API credential using the bot token Open the Notify via Slack node Select your credential Set: select: channel channelId: your desired channel (e.g. #sales or #booking-links) 5. Test the workflow end-to-end Activate the workflow Use Postman, curl, or another system to POST to the webhook URL, e.g.: { "name": "Test User", "email": "test@example.com" } Verify: The HTTP response contains a valid booking_url A new row is added to your Google Sheet (if configured) A Slack notification is posted (if configured) Requirements Calendly account* with at least one *active event type** n8n instance** (cloud or self-hosted) with public access for the webhook Calendly OAuth2 credentials** configured in n8n (Optional) Google Sheets account and OAuth2 credentials (Optional) Slack workspace with permissions to install a bot and post to channels How to customize the workflow Input & validation Update the Set Configuration node to: Enforce required fields (e.g. fail if email is missing) Add more optional parameters (e.g. utm_campaign, utm_medium, language) Add an IF node after the Webhook Trigger for stricter validation and custom error responses Event type selection logic In Select Event Type: Change the fallback selection rule (e.g. pick the longest or shortest duration event) Add logic to map a custom field (like event_key) to specific event type URIs Link parameters & tracking In Build Personalized Link: Add additional query parameters (e.g. 
utm_campaign, source, segment) Remove or rename existing parameters if needed If you don’t want prefilled name/email, remove those query parameters and just keep tracking fields Google Sheets logging Extend the Log to Google Sheets mapping to include: utm_source or other marketing attributes Sales owner, campaign name, or pipeline stage Any additional fields you compute in previous nodes Slack notification formatting In Notify via Slack: Adjust the message text to your team’s tone Add emojis or @mentions for certain event types Include utm_source or other metadata for debugging and tracking Key features Single-use Calendly links** – each generated link is limited to one booking (or expires after ~90 days) Prefilled recipient details** – name and email are embedded in the URL, making it frictionless to book Webhook-first design** – easily call this from CRMs, outreach tools, or any external system Central link logging** – every link is stored in Google Sheets for auditing and reporting Optional Slack alerts** – keep sales/support teams notified when new links are generated Safe error handling** – HTTP nodes are configured with continueRegularOutput to avoid hard workflow failures Example scenarios Scenario 1: Sales outreach A CRM workflow triggers when a lead moves to “Meeting Requested”. It calls this n8n webhook with the lead’s name and email. The workflow generates a single-use Calendly link, logs it to Sheets, and posts to Slack. The CRM sends an email to the lead with the personalized booking link. Scenario 2: Automated follow-up link A support ticket is resolved and the system wants to offer a follow-up call. It calls the webhook with name, email, and a dedicated event_type_uri for “Follow-up Call”. The generated link is logged and returned via API, then included in an automated email. Scenario 3: Campaign tracking A marketing automation tool triggers this webhook for each contact in a campaign, passing utm_source (e.g. q1-outbound). The workflow adds utm_source to the link and logs it in Google Sheets. Later, you can analyze which campaigns generated the most completed bookings from single-use links. This template gives you a reliable, reusable Calendly link generation service that plugs into any part of your stack, while keeping tracking, logging, and team visibility fully automated.
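For reference, here is a minimal example of calling the webhook described above from an external system (Node 18+). It is a sketch only: the host in N8N_WEBHOOK_URL is a placeholder for the Production URL you copy from the Webhook Trigger node, the request fields match the documented payload, and the response fields shown are the ones listed under "API Response to Caller".

```javascript
// Minimal example of requesting a single-use link from the workflow's webhook.
// Replace N8N_WEBHOOK_URL with the Production URL copied from the Webhook Trigger node.
const N8N_WEBHOOK_URL = 'https://your-n8n-host/webhook/generate-calendly-link';

async function requestBookingLink() {
  const res = await fetch(N8N_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'John Doe',
      email: 'john@example.com',
      // event_type_uri: 'https://api.calendly.com/event_types/XXXX', // optional
      utm_source: 'crm-outreach', // optional, defaults to "n8n" in the workflow
    }),
  });

  if (!res.ok) throw new Error(`Webhook returned ${res.status}`);

  const data = await res.json();
  // Expected shape per the "Respond to Webhook" node:
  // { success, booking_url, base_url, recipient, event, created_at, expires }
  console.log('Personalized link:', data.booking_url);
  return data;
}

requestBookingLink().catch(console.error);
```

Any language or HTTP client works the same way; only the POST body matters.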
by Jitesh Dugar
Revolutionize university admissions with intelligent AI-driven application evaluation that analyzes student profiles, calculates eligibility scores, and automatically routes decisions - saving 2.5 hours per application and reducing decision time from weeks to hours. 🎯 What This Workflow Does Transforms your admissions process from manual application review to intelligent automation: 📝 Captures Applications - Jotform intake with student info, GPA, test scores, essay, extracurriculars 🤖 AI Holistic Evaluation - OpenAI analyzes academic strength, essay quality, extracurriculars, and fit 🎯 Intelligent Scoring - Evaluates students using 40% academics, 25% extracurriculars, 20% essay, 15% fit (0-100 scale) 🚦 Smart Routing - Automatically routes based on AI evaluation: Auto-Accept (95-100)**: Acceptance letter with scholarship details → Admin alert → Database Interview Required (70-94)**: Interview invitation with scheduling link → Admin alert → Database Reject (<70)**: Respectful rejection with improvement suggestions → Database 💰 Scholarship Automation - Calculates merit scholarships ($5k-$20k+) based on eligibility score 📊 Analytics Tracking - All applications logged to Google Sheets for admissions insights ✨ Key Features AI Holistic Evaluation: Comprehensive analysis weighing academics, extracurriculars, essays, and institutional fit Intelligent Scoring System: 0-100 eligibility score with automated categorization and scholarship determination Structured Output: Consistent JSON schema with academic strength, admission likelihood, and decision reasoning Automated Communication: Personalized acceptance, interview, and rejection letters for every applicant Fallback Scoring: Manual GPA/SAT scoring if AI fails - ensures zero downtime Admin Alerts: Instant email notifications for exceptional high-scoring applicants (95+) Comprehensive Analytics: Track acceptance rates, average scores, scholarship distribution, and applicant demographics Customizable Criteria: Easy prompt editing to match your institution's values and requirements 💼 Perfect For Universities & Colleges: Processing 500+ undergraduate applications per semester Graduate Programs: Screening master's and PhD applications with consistent evaluation Private Institutions: Scaling admissions without expanding admissions staff Community Colleges: Handling high-volume transfer and new student applications International Offices: Evaluating global applicants 24/7 across all timezones Scholarship Committees: Identifying merit scholarship candidates automatically 🔧 What You'll Need Required Integrations Jotform - Application form with student data collection (free tier works) Create your form for free on Jotform using this link Create your application form with fields: Name, Email, Phone, GPA, SAT Score, Major, Essay, Extracurriculars OpenAI API - GPT-4o-mini for cost-effective AI evaluation (~$0.01-0.05 per application) Gmail - Automated applicant communication (acceptance, interview, rejection letters) Google Sheets - Application database and admissions analytics Optional Integrations Slack - Real-time alerts for exceptional applicants Calendar APIs - Automated interview scheduling Student Information System (SIS) - Push accepted students to enrollment system Document Analysis Tools - OCR for transcript verification 🚀 Quick Start Import Template - Copy JSON and import into n8n (requires LangChain support) Create Jotform - Use provided field structure (Name, Email, GPA, SAT, Major, Essay, etc.) 
Add API Keys - OpenAI, Jotform, Gmail OAuth2, Google Sheets Customize AI Prompt - Edit admissions criteria with your university's specific requirements and values Set Score Thresholds - Adjust auto-accept (95+), interview (70-94), reject (<70) cutoffs if needed Personalize Emails - Update templates with your university branding, dates, and contact info Create Google Sheet - Set up columns: id, Name, Email, GPA, SAT Score, Major, Essay, Extracurriculars Test & Deploy - Submit test application with pinned data and verify all nodes execute correctly 🎨 Customization Options Adjust Evaluation Weights: Change academics (40%), extracurriculars (25%), essay (20%), fit (15%) percentages Multiple Programs: Clone workflow for different majors with unique evaluation criteria Add Document Analysis: Integrate OCR for transcript and recommendation letter verification Interview Scheduling: Connect Google Calendar or Calendly for automated booking SIS Integration: Push accepted students directly to Banner, Ellucian, or PeopleSoft Waitlist Management: Add conditional routing for borderline scores (65-69) Diversity Tracking: Include demographic fields and bias detection in AI evaluation Financial Aid Integration: Automatically calculate need-based aid eligibility alongside merit scholarships 📈 Expected Results 90% reduction in manual application review time (from 2.5 hours to 15 minutes per application) 24-48 hour decision turnaround time vs 4-6 weeks traditional process 40% higher yield rate - faster responses increase enrollment commitment 100% consistency - every applicant evaluated with identical criteria Zero missed applications - automated tracking ensures no application falls through cracks Data-driven admissions - comprehensive analytics on applicant pools and acceptance patterns Better applicant experience - professional, timely communication regardless of decision Defensible decisions - documented scoring criteria for accreditation and compliance 🏆 Use Cases Large Public Universities Screen 5,000+ applications per semester, identify top 20% for auto-admit, route borderline to committee review. Selective Private Colleges Evaluate 500+ highly competitive applications, calculate merit scholarships automatically, schedule interviews with top candidates. Graduate Programs Process master's and PhD applications with research experience weighting, flag candidates for faculty review, automate fellowship awards. Community Colleges Handle high-volume open enrollment while identifying honors program candidates and scholarship recipients instantly. International Admissions Evaluate global applicants 24/7, account for different GPA scales and testing systems, respond same-day regardless of timezone. Rolling Admissions Provide instant decisions for early applicants, fill classes strategically, optimize scholarship budget allocation. 
💡 Pro Tips Calibrate Your AI: After 100+ applications, refine evaluation criteria based on enrolled student success A/B Test Thresholds: Experiment with score cutoffs (e.g., 93 vs 95 for auto-admit) to optimize yield Build Waitlist Pipeline: Keep 70-84 score candidates engaged for spring enrollment or next year Track Source Effectiveness: Add UTM parameters to measure which recruiting channels deliver best students Committee Review: Route 85-94 scores to human admissions committee for final review Bias Audits: Quarterly review of AI decisions by demographic groups to ensure fairness Parent Communication: Add parent/guardian emails for admitted students under 18 Financial Aid Coordination: Sync scholarship awards with financial aid office for packaging 🎓 Learning Resources This workflow demonstrates: AI Agents with structured output** - LangChain integration for consistent JSON responses Multi-stage conditional routing** - IF nodes for three-tier decision logic Holistic evaluation** - Weighted scoring across multiple dimensions Automated communication** - HTML email templates with dynamic content Real-time notifications** - Admin alerts for high-value applicants Analytics and data logging** - Google Sheets integration for reporting Fallback mechanisms** - Manual scoring when AI unavailable Perfect for learning advanced n8n automation patterns in educational technology! 🔐 Compliance & Ethics FERPA Compliance: Protects student data with secure credential handling Fair Admissions: Documented criteria eliminate unconscious bias Human Oversight: Committee review option for borderline cases Transparency: Applicants can request evaluation criteria Appeals Process: Structured workflow for decision reconsideration Data Retention: Configurable Google Sheets retention policies 📊 What Gets Tracked Application submission date and time Complete student profile (GPA, test scores, major, essay, activities) AI eligibility score (0-100) and decision category Academic strength rating (excellent/strong/average) Scholarship eligibility and amount ($0-$20,000+) Admission likelihood (high/medium/low) Decision outcome (accepted/interview/rejected) Email delivery status and open rates Time from application to decision Ready to transform your admissions process? Import this template and start evaluating applications intelligently in under 1 hour. Questions or customization needs? The workflow includes detailed sticky notes explaining each section and comprehensive fallback logic for reliability.
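To make the scoring weights concrete, here is an illustrative Code-node sketch of the manual fallback described under Key Features (40% academics, 25% extracurriculars, 20% essay, 15% fit). The field names (gpa, sat, essayScore, and so on) are assumptions about the form data, and in the actual template the evaluation is produced by the AI agent; treat this purely as a reference for the weighting and three-tier routing.

```javascript
// Illustrative fallback scoring for an n8n Code node ("Run Once for All Items").
// The real evaluation comes from the AI agent; this only shows the 40/25/20/15 weighting.
// Field names (gpa, sat, ...) are assumptions about the form data.
return $input.all().map((item) => {
  const app = item.json;

  // Blend GPA (4.0 scale) and SAT (1600 max) into a 0-100 academics score
  const academics = (((app.gpa ?? 0) / 4.0) * 0.5 + ((app.sat ?? 0) / 1600) * 0.5) * 100;

  const eligibility =
    academics * 0.40 +
    (app.extracurricularScore ?? 50) * 0.25 +
    (app.essayScore ?? 50) * 0.20 +
    (app.fitScore ?? 50) * 0.15;

  const score = Math.round(Math.min(100, Math.max(0, eligibility)));

  // Same three-tier routing the IF nodes apply downstream
  const decision = score >= 95 ? 'auto_accept' : score >= 70 ? 'interview' : 'reject';

  return { json: { ...app, eligibilityScore: score, decision } };
});
```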
by Onur
🏠 Extract Zillow Property Data to Google Sheets with Scrape.do This template requires a self-hosted n8n instance to run. A complete n8n automation that extracts property listing data from Zillow URLs using Scrape.do web scraping API, parses key property information, and saves structured results into Google Sheets for real estate analysis, market research, and property tracking. 📋 Overview This workflow provides a lightweight real estate data extraction solution that pulls property details from Zillow listings and organizes them into a structured spreadsheet. Ideal for real estate professionals, investors, market analysts, and property managers who need automated property data collection without manual effort. Who is this for? Real estate investors tracking properties Market analysts conducting property research Real estate agents monitoring listings Property managers organizing data Data analysts building real estate databases What problem does this workflow solve? Eliminates manual copy-paste from Zillow Processes multiple property URLs in bulk Extracts structured data (price, address, zestimate, etc.) Automates saving results into Google Sheets Ensures repeatable & consistent data collection ⚙️ What this workflow does Manual Trigger → Starts the workflow manually Read Zillow URLs from Google Sheets → Reads property URLs from a Google Sheet Scrape Zillow URL via Scrape.do → Fetches full HTML from Zillow (bypasses PerimeterX protection) Parse Zillow Data → Extracts structured property information from HTML Write Results to Google Sheets → Saves parsed data into a results sheet 📊 Output Data Points | Field | Description | Example | |-------|-------------|---------| | URL | Original Zillow listing URL | https://www.zillow.com/homedetails/... | | Price | Property listing price | $300,000 | | Address | Street address | 8926 Silver City | | City | City name | San Antonio | | State | State abbreviation | TX | | Days on Zillow | How long listed | 5 | | Zestimate | Zillow's estimated value | $297,800 | | Scraped At | Timestamp of extraction | 2025-01-29T12:00:00.000Z | ⚙️ Setup Prerequisites n8n instance (self-hosted) Google account with Sheets access Scrape.do account with API token (Get 1000 free credits/month) Google Sheet Structure This workflow uses one Google Sheet with two tabs: Input Tab: "Sheet1" | Column | Type | Description | Example | |--------|------|-------------|---------| | URLs | URL | Zillow listing URL | https://www.zillow.com/homedetails/123... | Output Tab: "Results" | Column | Type | Description | Example | |--------|------|-------------|---------| | URL | URL | Original listing URL | https://www.zillow.com/homedetails/... 
| | Price | Text | Property price | $300,000 | | Address | Text | Street address | 8926 Silver City | | City | Text | City name | San Antonio | | State | Text | State code | TX | | Days on Zillow | Number | Days listed | 5 | | Zestimate | Text | Estimated value | $297,800 | | Scraped At | Timestamp | When scraped | 2025-01-29T12:00:00.000Z | 🛠 Step-by-Step Setup Import Workflow: Copy the JSON → n8n → Workflows → + Add → Import from JSON Configure Scrape.do API: Sign up at Scrape.do Dashboard Get your API token In HTTP Request node, replace YOUR_SCRAPE_DO_TOKEN with your actual token The workflow uses super=true for premium residential proxies (10 credits per request) Configure Google Sheets: Create a new Google Sheet Add two tabs: "Sheet1" (input) and "Results" (output) In Sheet1, add header "URLs" in cell A1 Add Zillow URLs starting from A2 Set up Google Sheets OAuth2 credentials in n8n Replace YOUR_SPREADSHEET_ID with your actual Google Sheet ID Replace YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with your credential ID Run & Test: Add 1-2 test Zillow URLs in Sheet1 Click "Execute workflow" Check results in Results tab 🧰 How to Customize Add more fields**: Extend parsing logic in "Parse Zillow Data" node to capture additional data (bedrooms, bathrooms, square footage) Filtering**: Add conditions to skip certain properties or price ranges Rate Limiting**: Insert a Wait node between requests if processing many URLs Error Handling**: Add error branches to handle failed scrapes gracefully Scheduling**: Replace Manual Trigger with Schedule Trigger for automated daily/weekly runs 📊 Use Cases Investment Analysis**: Track property prices and zestimates over time Market Research**: Analyze listing trends in specific neighborhoods Portfolio Management**: Monitor properties for sale in target areas Competitive Analysis**: Compare similar properties across locations Lead Generation**: Build databases of properties matching specific criteria 📈 Performance & Limits Single Property**: ~5-10 seconds per URL Batch of 10**: 1-2 minutes typical Large Sets (50+)**: 5-10 minutes depending on Scrape.do credits API Calls**: 1 Scrape.do request per URL (10 credits with super=true) Reliability**: 95%+ success rate with premium proxies 🧩 Troubleshooting | Problem | Solution | |---------|----------| | API error 400 | Check your Scrape.do token and credits | | URL showing "undefined" | Verify Google Sheet column name is "URLs" (capital U) | | No data parsed | Check if Zillow changed their HTML structure | | Permission denied | Re-authenticate Google Sheets OAuth2 in n8n | | 50000 character error | Verify Parse Zillow Data code is extracting fields, not returning raw HTML | | Price shows HTML/CSS | Update price extraction regex in Parse Zillow Data node | 🤝 Support & Community Scrape.do Documentation Scrape.do Dashboard Scrape.do Zillow Scraping Guide n8n Forum n8n Docs 🎯 Final Notes This workflow provides a repeatable foundation for extracting Zillow property data with Scrape.do and saving to Google Sheets. You can extend it with: Historical tracking (append timestamps) Price change alerts (compare with previous scrapes) Multi-platform scraping (Redfin, Realtor.com) Integration with CRM or reporting dashboards Important: Scrape.do handles all anti-bot bypassing (PerimeterX, CAPTCHAs) automatically with rotating residential proxies, so you only pay for successful requests. Always use super=true parameter for Zillow to ensure high success rates.
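As a starting point for the "Parse Zillow Data" node mentioned above, the sketch below shows one way the Code node could pull price, address, and Zestimate out of the raw HTML returned by Scrape.do. Zillow changes its markup regularly (see the troubleshooting table), so the regexes and field names here are assumptions to adapt, not the template's exact code.

```javascript
// Illustrative parsing for the "Parse Zillow Data" Code node ("Run Once for All Items").
// Zillow markup changes often, so treat these regexes as starting points, not guarantees.
const pick = (html, re) => {
  const m = html.match(re);
  return m ? m[1].trim() : '';
};

return $input.all().map((item) => {
  const html = item.json.data || item.json.body || '';

  const price = pick(html, /"price"\s*:\s*"?\$?([\d,]+)/);
  const zestimate = pick(html, /zestimate[^$0-9]{0,40}\$?([\d,]+)/i);
  const address = pick(html, /"streetAddress"\s*:\s*"([^"]+)"/);
  const city = pick(html, /"addressLocality"\s*:\s*"([^"]+)"/);
  const state = pick(html, /"addressRegion"\s*:\s*"([^"]+)"/);
  const daysOnZillow = pick(html, /(\d+)\s+days? on Zillow/i);

  return {
    json: {
      URL: item.json.url ?? '',
      Price: price ? `$${price}` : '',
      Address: address,
      City: city,
      State: state,
      'Days on Zillow': daysOnZillow,
      Zestimate: zestimate ? `$${zestimate}` : '',
      'Scraped At': new Date().toISOString(),
    },
  };
});
```

If a field comes back empty, inspect the page source for the current markup and adjust the corresponding regex, as suggested in the troubleshooting section.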
by Abdul Mir
Overview

Stop spending hours formatting proposals. This workflow turns a short post-call form into a high-converting, fully-personalized PandaDoc proposal—plus updates your CRM and drafts the follow-up email for you.

After a sales call, just fill out a 3-minute form summarizing key pain points, solutions pitched, and the price. The workflow uses AI to generate polished proposal copy, then builds a PandaDoc draft using dynamic data mapped into the JSON body (which you can fully customize per business). It also updates the lead record in ClickUp with the proposal link, company name, and quote—then creates an email draft in Gmail, ready to send.

Who's it for

- Freelancers and consultants sending service proposals
- Agencies closing deals over sales calls
- Sales reps who want to automate proposal follow-up
- Teams using ClickUp as their lightweight CRM

How it works

1. After a call, fill out a short form with client details, pitch notes, and price
2. AI generates professional proposal copy based on form input
3. Proposal is formatted and sent to PandaDoc via HTTP request
4. ClickUp lead is updated with: Company Name, Proposal URL, Quote/price
5. A Gmail draft is created using the proposal link and a thank-you message

Example use case

> You hop off a call, fill out:
> - Prospect: Shopify agency
> - Pain: No lead gen system
> - Solution: Automated cold outreach
> - Price: $2,500/month
>
> 3 minutes later: PandaDoc proposal is ready, CRM is updated, and your email draft is waiting to be sent.

How to set up

- Replace the form with your preferred tool (e.g. Tally, Typeform)
- Connect PandaDoc API and structure your proposal template
- Customize the JSON body inside the HTTP request to match your business
- Link your ClickUp space and custom fields
- Connect Gmail (or other email tool) for final follow-up draft

Requirements

- Form tool for capturing sales call notes
- OpenAI or LLM key for generating proposal copy
- PandaDoc API access
- ClickUp custom fields set up for lead tracking
- Gmail integration

How to customize

- Customize your PandaDoc proposal fields in the JSON body of the HTTP node
- Replace ClickUp with another CRM like HubSpot or Notion
- Adjust AI tone (casual, premium, corporate) for proposal writing
- Add Slack or Telegram alerts when the draft is ready
- Add PDF generation or auto-send email step
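To illustrate the PandaDoc step, here is a hedged sketch of the kind of request the HTTP Request node can send to create a draft document from a saved PandaDoc template (Node 18+). The template_uuid, token names, role, and proposal field names are placeholders; map them to your own PandaDoc template and check PandaDoc's current API documentation before relying on the exact payload shape.

```javascript
// Node 18+ sketch of a "create document from template" request to PandaDoc.
// template_uuid, token names, and role are placeholders - map them to your own template.
const proposal = {
  companyName: 'Acme Shopify Agency',
  painSummary: 'No lead gen system',
  solutionCopy: 'Automated cold outreach, built and managed for you.',
  price: '$2,500/month',
  clientEmail: 'owner@acme-agency.com',
};

const body = {
  name: `Proposal - ${proposal.companyName}`,
  template_uuid: 'YOUR_PANDADOC_TEMPLATE_UUID',
  recipients: [{ email: proposal.clientEmail, role: 'Client' }],
  tokens: [
    { name: 'Client.Company', value: proposal.companyName },
    { name: 'Proposal.PainPoints', value: proposal.painSummary },
    { name: 'Proposal.Solution', value: proposal.solutionCopy },
    { name: 'Proposal.Price', value: proposal.price },
  ],
};

const res = await fetch('https://api.pandadoc.com/public/v1/documents', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'API-Key YOUR_PANDADOC_API_KEY',
  },
  body: JSON.stringify(body),
});

console.log(await res.json()); // draft document id/status for the ClickUp and Gmail steps
```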
by Meak
Auto-Edit Google Drive Images with Nano Banana + Social Auto-Post Most businesses spend hours cleaning up photos and manually posting them to social media. This workflow does it all automatically: image enhancement, caption creation, and posting — directly from a simple Google Drive upload. Benefits Clean & enhance images instantly with Nano Banana Auto-generate catchy captions with GPT-5 Post directly to Instagram (or other social channels) Track everything in Google Sheets Save hours per week on repetitive content tasks How It Works Upload image to Google Drive Workflow sends image to Nano Banana (via Wavespeed API) Waits for enhanced version and logs URL in Google Sheets Uploads result to Postiz media library GPT-5 writes an engaging caption Publishes post instantly or schedules for later Who Is This For Real estate agents posting property photos E-commerce sellers updating product images Social media managers handling multiple accounts Setup Connect Google Drive (select upload folder) Add Wavespeed API key for Nano Banana Connect Google Sheets for logging Add Postiz API credentials & integration ID Enter OpenAI API key for GPT-5 captioning ROI & Monetization Save 5–10 hours per week of manual editing and posting Offer as a $1k–$3k/month content automation service for clients Scale to multi-platform posting (TikTok, LinkedIn) for premium retainers Strategy Insights In the full walkthrough, I show how to: Build this workflow step by step Pitch it as a “Done-For-You Social Posting System” Automate outreach to agencies and creators who need it Turn this into recurring revenue with retainers Check Out My Channel For more advanced AI automation systems that generate real business results, check out my YouTube channel where I share the exact strategies I use to build automation agencies, sell high-value services, and scale to $20k+ monthly revenue.
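The "waits for enhanced version" step is an asynchronous submit-then-poll pattern. The sketch below shows that pattern in plain Node 18+ JavaScript; the endpoint paths, body fields, and status values are hypothetical stand-ins, since this listing does not document the exact Wavespeed/Nano Banana API shape, so consult their docs before using anything here verbatim.

```javascript
// Generic "submit then poll until done" sketch for an async image-enhancement API.
// ENDPOINTS AND FIELD NAMES ARE PLACEHOLDERS - check the Wavespeed/Nano Banana docs for the real ones.
const API_BASE = 'https://api.example-wavespeed.com'; // placeholder host
const API_KEY = 'YOUR_WAVESPEED_API_KEY';

async function enhanceImage(imageUrl) {
  // 1. Submit the job (hypothetical endpoint and body)
  const submit = await fetch(`${API_BASE}/v1/jobs`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${API_KEY}` },
    body: JSON.stringify({ model: 'nano-banana', image_url: imageUrl }),
  });
  const { id } = await submit.json();

  // 2. Poll until the job finishes (this is what the Wait + re-check step does in n8n)
  for (let attempt = 0; attempt < 30; attempt++) {
    await new Promise((r) => setTimeout(r, 10_000)); // wait 10 seconds between checks
    const check = await fetch(`${API_BASE}/v1/jobs/${id}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const job = await check.json();
    if (job.status === 'completed') return job.output_url; // enhanced image URL to log in Sheets
    if (job.status === 'failed') throw new Error('Enhancement failed');
  }
  throw new Error('Timed out waiting for enhanced image');
}

console.log(await enhanceImage('https://drive.google.com/uc?id=FILE_ID'));
```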
by Abdullah Alshiekh
🧩 What Problem Does It Solve?

In real estate, inquiries come from many sources and often require immediate, personalized attention. Brokers waste significant time manually:

- **Qualifying leads:** Determining if a prospect's budget, neighborhood, and needs match available inventory.
- **Searching listings:** Cross-referencing customer criteria against a large, static database.
- **Data entry:** Moving contact details and search summaries into a CRM like Zoho.
- **Initial follow-up:** Sending an email to confirm the submission and schedule the next step.

🛠️ How to Configure It

Jotform & CRM Setup

- **Jotform Trigger:** Replace the placeholder with your specific Jotform ID.
- **Zoho CRM:** Replace the placeholder TEMPLATED_COMPANY_NAME with your actual company name.
- **Gmail:** Replace the placeholder Calendly link YOUR_CALENDLY_LINK in the Send a message node with your real estate consultant's booking link.

Database & AI Setup

- **Google Sheets:** Replace YOUR_GOOGLE_SHEET_DOCUMENT_ID and YOUR_SHEET_GID_OR_NAME in both Google Sheets nodes. Your listings must be structured with columns matching the AI prompt (e.g., bedrooms, rent, neighborhoods).
- **AI Models:** Ensure your Google Gemini API key is linked to the Google Gemini Chat Model node.
- **AI Agent Prompt:** The included prompt contains the exact matching and scoring rules for the AI. You can edit this prompt to refine how the AI prioritizes factors like supplier_rating or neighborhood proximity.

🧠 Use Case Examples

- **Small Startups:** Collect High-Quality Leads: New inquiries must be quickly logged for sales follow-up, but manual entry is slow.
- **B2B Sales:** High-Value Lead Enrichment: Need to prioritize leads that match specific product requirements and budget tiers.
- **Travel/Hospitality:** Personalized Itinerary Matching: Quickly match customer preferences (e.g., dates, group size, activity level) to available packages.
- **E-commerce:** Manual Product Recommendation: Sales teams manually recommend expensive, configurable items (e.g., furniture, specialized equipment).

If you need any help, Get in Touch.
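In this template the matching and scoring rules live inside the AI Agent prompt. When tuning that prompt, it can help to have a deterministic baseline to compare against; the sketch below scores sheet listings against an inquiry using simple budget, bedroom, neighborhood, and supplier_rating rules. The field names and weights are assumptions for illustration, not the prompt's actual rules.

```javascript
// Illustrative scoring of sheet listings against an inquiry - the template itself does this
// inside the AI Agent prompt; field/column names here are assumptions based on the setup notes.
const inquiry = { budget: 2500, bedrooms: 2, neighborhoods: ['Downtown', 'Riverside'] };

const listings = [
  { id: 'A1', rent: 2300, bedrooms: 2, neighborhood: 'Downtown', supplier_rating: 4.6 },
  { id: 'B7', rent: 2700, bedrooms: 3, neighborhood: 'Hillcrest', supplier_rating: 4.9 },
];

function scoreListing(l) {
  let score = 0;
  if (l.rent <= inquiry.budget) score += 40;                       // within budget
  else if (l.rent <= inquiry.budget * 1.1) score += 20;            // slightly over budget
  if (l.bedrooms === inquiry.bedrooms) score += 30;                // exact bedroom match
  if (inquiry.neighborhoods.includes(l.neighborhood)) score += 20; // preferred area
  score += Math.min(10, (l.supplier_rating ?? 0) * 2);             // small boost for rating
  return score;
}

const ranked = listings
  .map((l) => ({ ...l, matchScore: scoreListing(l) }))
  .sort((a, b) => b.matchScore - a.matchScore);

console.log(ranked);
```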
by Bakir Ali
Automated BBB Lead Generation with BrowserAct

🚀 Overview

This workflow automates business data extraction, duplicate checking, and email outreach using BrowserAct, Google Sheets, Gmail, and Google Gemini AI — all inside n8n. It's designed for marketers, lead generation specialists, or automation developers who want to build a fully autonomous AI agent that finds businesses online, filters duplicates, and automatically sends personalized outreach emails.

🧩 Key Features

- 🌐 **BrowserAct Integration** — Scrapes business data (name, phone, email, website, rating) from any target site.
- 🤖 **AI Data Extraction Agent** — Uses Google Gemini AI to clean, structure, and validate scraped data into standardized JSON.
- 📊 **Google Sheets Sync** — Reads all existing records, checks for duplicates, and appends new rows automatically.
- ✉️ **Automated Gmail Outreach** — Validates email addresses, sends outreach emails to valid leads, and logs each status (e.g., Successful, Duplicate, Pending - Invalid Email).
- ⏳ **Smart Delay Control** — Uses a Wait node to pause execution and respect email sending limits (max 2 emails per run).

🛠️ Included Nodes

| Node | Function |
| ---- | -------- |
| 🕓 Schedule Trigger | Runs the workflow automatically on schedule |
| 🌍 BrowserAct | Scrapes or extracts business data |
| ⚙️ If Node | Checks scraping results before processing |
| 🧠 AI Agent (Gemini) | Extracts structured business info |
| 💻 Code (JavaScript) | Cleans and parses AI output into usable JSON |
| 📩 AI Agent 2 (Gemini) | Handles decision-making for email + sheet updates |
| 📊 Google Sheets Tools | Reads, appends, and manages lead data |
| 📨 Gmail Node | Sends automated outreach emails |
| ⏱️ Wait Node | Adds delay to control workflow speed |

🧾 How It Works

1. Schedule Trigger starts the automation.
2. BrowserAct fetches business listings based on defined keywords and location.
3. AI Agent (Gemini) extracts business details (business_name, website_url, phone_number, email_address, rating).
4. A JavaScript Code node parses the AI's JSON response.
5. AI Agent 2 (Gemini) decides:
   - If duplicate → sends a notification to your email address stating that duplicate data was found
   - If invalid email → marks the lead as "Pending - Invalid Email"
   - If valid email → sends the outreach email via Gmail and updates the Google Sheet
6. The final output returns structured statuses for each processed business.

🖼️ Workflow Diagram

- Schedule Trigger
- BrowserAct
- AI Agent (Gemini)
- JavaScript Code
- Gmail & Google Sheets tools

⚙️ Setup Instructions

1. Connect your BrowserAct, Google Sheets, Gmail, and Google Gemini API credentials.
2. Define search keywords and locations inside the BrowserAct node.
3. Set your Google Sheet ID in the relevant nodes.
4. Customize the Gmail message if needed.
5. Activate the workflow and schedule it.

📤 Output Example

[
  { "business_name": "ABC Restaurant", "email_sent": "Successful" },
  { "business_name": "XYZ Foods", "email_sent": "Duplicate - Already Exists" },
  { "business_name": "Fresh Eats", "email_sent": "Pending - Invalid Email" }
]

👨‍💻 Created by Bakir Ali — Automation & AI Workflow Creator, specialized in BrowserAct, Google AI (Gemini), and n8n-based automation systems.
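For reference, here is a minimal sketch of what the "Code (JavaScript)" node that cleans and parses the Gemini output might look like. It assumes the agent's reply arrives as text on the first input item (the output property name is an assumption) and that the JSON may be wrapped in a markdown code fence.

```javascript
// Sketch of the "clean and parse AI output" Code node.
// Assumes the Gemini reply arrives as text on the first item, possibly wrapped in a markdown code fence.
const first = $input.first().json;
const raw = first.output ?? first.text ?? '';

// Strip markdown code fences and surrounding whitespace
const cleaned = raw.replace(/`{3}\s*json/gi, '').replace(/`{3}/g, '').trim();

let businesses;
try {
  const parsed = JSON.parse(cleaned);
  businesses = Array.isArray(parsed) ? parsed : [parsed];
} catch (err) {
  // Pass the problem downstream instead of failing the whole run
  return [{ json: { error: 'Could not parse AI output', raw } }];
}

// One n8n item per business so the Sheets/Gmail steps can loop over them
return businesses.map((b) => ({
  json: {
    business_name: b.business_name ?? '',
    website_url: b.website_url ?? '',
    phone_number: b.phone_number ?? '',
    email_address: b.email_address ?? '',
    rating: b.rating ?? '',
  },
}));
```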
by Adam Goodyer
Video Digestion Workflow — n8n Template Description How it works This workflow takes any YouTube video URL and automatically extracts a rich, structured analysis — including transcript, key visual moments, video metadata, SEO keywords, and content section breakdowns. It's designed as the foundation layer for content repurposing, feeding its output into downstream workflows for creating Shorts, LinkedIn posts, Twitter threads, blog articles, email newsletters, and more. The pipeline: YouTube URL Input — A simple form trigger accepts any YouTube video URL. Video Download (Apify) — Downloads the video file at 720p via the Apify YouTube Video Downloader actor. Transcript Extraction (Apify) — Pulls the full transcript with timestamps from YouTube using the Apify YouTube Video Transcript actor. No audio processing needed — fast and reliable. Data Consolidation — A Code node merges both Apify outputs into a single structured object containing: video URL, transcript text, timestamped segments, video metadata (title, description, duration, channel info, like/comment counts, thumbnail, publish date). Visual Analysis (Google Gemini Pro) — Sends the actual video to Gemini's video analysis endpoint, which watches the entire video and identifies key B-roll moments with precise timestamps, app detection, and webcam overlay awareness. It categorises clips as clean screen recordings vs. webcam overlays vs. talking head segments. Key Action Parsing — Filters and categorises the Gemini output into usable clips, removing talking-head-only segments and incomplete data. Outputs chronologically sorted clips with cropping metadata for downstream video editing. AI Section Analysis (OpenAI) — Sends the transcript + key moments to OpenAI with structured output (JSON schema) to generate: video summary, one-liner, main argument, target audience, content style, tone, key takeaways, problems addressed, tools mentioned, frameworks explained, suggested titles, and SEO keywords. Output — The final structured payload is ready to pass to any downstream workflow (e.g., Shorts creation, social media posting, blog generation). Setup guide Required accounts & API keys You'll need API credentials for the following services: | Service | What it does | Sign up | |---------|-------------|---------| | Apify | YouTube video downloading + transcript extraction | https://apify.com | | Google AI Studio (Gemini) | Video analysis — watches the video and detects key visual moments | https://aistudio.google.com | | OpenAI | Structured content analysis with JSON schema output | https://platform.openai.com | Required Apify actors You need to add these two Apify actors to your account: YouTube Video Downloader by epctex — https://apify.com/epctex/youtube-video-downloader YouTube Video Transcript by starvibe — https://apify.com/starvibe/youtube-video-transcript n8n credentials to configure Apify API** — Add your Apify API token in n8n credentials Google Gemini** — Add your Google AI Studio API key in n8n credentials OpenAI** — Add your OpenAI API key in n8n credentials Steps Import the workflow into n8n Configure all three credential sets (Apify, Gemini, OpenAI) Ensure both Apify actors are added to your Apify account Activate the workflow Open the form trigger URL and paste any YouTube video URL The workflow outputs a comprehensive JSON payload ready for downstream workflows What you can build with the output The structured output from this workflow is designed to be piped into other workflows. 
Some ideas:

- **YouTube Shorts creation** — Use the key moments + timestamps to auto-clip and render short-form content
- **LinkedIn carousel posts** — Pull key takeaways and section summaries
- **Twitter/X threads** — Convert section breakdowns into threaded posts
- **Blog articles** — Use the full transcript + structure as a draft foundation
- **Email newsletters** — Summarise the video for your subscriber list
- **SEO-optimised descriptions** — Auto-generate YouTube descriptions with keywords

Nodes used

- Form Trigger (n8n built-in)
- Apify (x2 — video download + transcript)
- Code (x2 — data consolidation + key action parsing)
- Google Gemini (video analysis)
- OpenAI (structured content analysis with JSON schema)
- Edit Fields (data mapping)
- Execute Workflow (optional — calls downstream Shorts creation workflow)

Built by @adamfreelances — The Anti-Guru Technical Educator. Real workflows, real implementation, no fluff.
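As a reference for the Data Consolidation step, here is a minimal sketch of a Code node that merges the two Apify results into the single structured object described above. It assumes the downloader item and the transcript item arrive together (for example via a Merge node), and the Apify field names are assumptions; inspect each actor's real output and adjust.

```javascript
// Sketch of the data-consolidation Code node: merge the two Apify results into one object.
// Field names on the Apify items are assumptions - inspect each actor's real output and adjust.
const items = $input.all();

// Assume item 0 = video downloader output, item 1 = transcript actor output
const download = items[0]?.json ?? {};
const transcriptRun = items[1]?.json ?? {};

const rawSegments = transcriptRun.transcript ?? transcriptRun.segments ?? [];
const segments = Array.isArray(rawSegments) ? rawSegments : [];

return [{
  json: {
    videoUrl: download.url ?? download.videoUrl ?? '',
    metadata: {
      title: download.title ?? '',
      description: download.description ?? '',
      durationSeconds: download.duration ?? null,
      channel: download.channelName ?? '',
      likeCount: download.likeCount ?? null,
      commentCount: download.commentCount ?? null,
      thumbnail: download.thumbnail ?? '',
      publishedAt: download.uploadDate ?? '',
    },
    transcriptText: segments.map((s) => s.text).join(' '),
    transcriptSegments: segments, // keep timestamps for the Gemini/key-moment steps
  },
}];
```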
by Intuz
This n8n template from Intuz provides a complete solution to automate on-demand lead generation. It acts as a powerful scraping agent that takes a simple chat query, scours both Google Search and Google Maps for relevant businesses, scrapes their websites for contact details, and compiles an enriched lead list directly in Google Sheets.

Who's this workflow for?

- Sales Development Representatives (SDRs)
- Local Marketing Agencies
- Business Development Teams
- Freelancers & Consultants
- Market Researchers

How it works

1. Start with a Chat Query: The user initiates the workflow by typing a search query (e.g., "dentists in New York") into a chat interface.
2. Multi-Source Search: The workflow queries both the Google Custom Search API (for web results across multiple pages) and scrapes Google Maps (for local businesses) to gather a broad list of potential leads.
3. Deep Dive Website Scraping: For each unique business website found, the workflow visits the URL to scrape the raw HTML content of the page.
4. Intelligent Contact Extraction: Using custom code, it parses the scraped website content to find and extract valuable contact information such as email addresses, phone numbers, and social media links.
5. Deduplicate and Log to Sheets: Before saving, the workflow checks your Google Sheet to ensure the lead doesn't already exist. All unique, newly enriched leads are then appended as clean rows to your sheet, along with the original search query for tracking.

Key Requirements to Use This Template

1. n8n Instance & Required Nodes: An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain) for the chat trigger. If you are using a self-hosted version of n8n, please ensure this package is installed.
2. Google Custom Search API: A Google Cloud Project with the "Custom Search API" enabled. You will need an API Key for this service. You must also create a Programmable Search Engine and get its Search engine ID (cx). This tells Google what to search (e.g., the whole web).
3. Google Sheets Account: A Google account and a pre-made Google Sheet with columns for Business Name, Primary Email, Contact Number, URL, Description, Socials, and Search Query.

Setup Instructions

1. Configure the Chat Trigger: In the "When chat message received" node, you can find the Direct URL or Embed code to use the chat interface.
2. Set Up Google Custom Search API (Crucial Step): Go to the "Custom Google Search API" (HTTP Request) node. Under "Query Parameters", you must replace the placeholder values for key (with your API Key) and cx (with your Search Engine ID).
3. Configure Google Sheets: In all Google Sheets nodes (Append row in sheet, Get row(s) in sheet, etc.), connect your Google Sheets credentials. Select your target spreadsheet (Document ID) and the specific sheet (Sheet Name) where you want to store the leads.
4. Activate the Workflow: Save the workflow and toggle the "Active" switch to ON. Open the chat URL and enter a search query to start generating leads.

Connect with us

- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz

For Custom Workflow Automation, click here: Get Started
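To show what step 4 ("Intelligent Contact Extraction") can look like in practice, here is a hedged sketch of a Code node that pulls emails, phone numbers, and social links out of the scraped HTML. The input property names and the output column names are assumptions aligned with the sheet columns listed above; the regexes are intentionally simple and will miss obfuscated emails and some international phone formats.

```javascript
// Sketch of the contact-extraction Code node: pull emails, phones, and social links from scraped HTML.
// Regexes are deliberately simple - obfuscated emails and unusual phone formats need more handling.
return $input.all().map((item) => {
  const html = item.json.html ?? item.json.data ?? '';

  const emails = [...new Set(html.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) ?? [])]
    .filter((e) => !/\.(png|jpe?g|gif|svg|webp)$/i.test(e)); // drop image filenames that look like emails

  const phones = [...new Set(html.match(/\+?\d[\d\s().-]{7,}\d/g) ?? [])];

  const socials = [...new Set(
    html.match(/https?:\/\/(?:www\.)?(?:facebook|instagram|linkedin|twitter|x)\.com\/[^\s"'<>]+/gi) ?? []
  )];

  return {
    json: {
      URL: item.json.url ?? '',
      'Primary Email': emails[0] ?? '',
      'Contact Number': phones[0] ?? '',
      Socials: socials.join(', '),
    },
  };
});
```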
by Onur
🔍 Extract Competitor SERP Rankings from Google Search to Sheets with Scrape.do This template requires a self-hosted n8n instance to run. A complete n8n automation that extracts competitor data from Google search results for specific keywords and target countries using Scrape.do SERP API, and saves structured results into Google Sheets for SEO, competitive analysis, and market research. 📋 Overview This workflow provides a lightweight competitor analysis solution that identifies ranking websites for chosen keywords across different countries. Ideal for SEO specialists, content strategists, and digital marketers who need structured SERP insights without manual effort. Who is this for? SEO professionals tracking keyword competitors Digital marketers conducting market analysis Content strategists planning based on SERP insights Business analysts researching competitor positioning Agencies automating SEO reporting What problem does this workflow solve? Eliminates manual SERP scraping Processes multiple keywords across countries Extracts structured data (position, title, URL, description) Automates saving results into Google Sheets Ensures repeatable & consistent methodology ⚙️ What this workflow does Manual Trigger → Starts the workflow manually Get Keywords from Sheet → Reads keywords + target countries from a Google Sheet URL Encode Keywords → Converts keywords into URL-safe format Process Keywords in Batches → Handles multiple keywords sequentially to avoid rate limits Fetch Google Search Results → Calls Scrape.do SERP API to retrieve raw HTML of Google SERPs Extract Competitor Data from HTML → Parses HTML into structured competitor data (top 10 results) Append Results to Sheet → Writes structured SERP results into a Google Sheet 📊 Output Data Points | Field | Description | Example | |--------------------|------------------------------------------|-------------------------------------------| | Keyword | Original search term | digital marketing services | | Target Country | 2-letter ISO code of target region | US | | position | Ranking position in search results | 1 | | websiteTitle | Page title from SERP result | Digital Marketing Software & Tools | | websiteUrl | Extracted website URL | https://www.hubspot.com/marketing | | websiteDescription | Snippet/description from search results | Grow your business with HubSpot’s tools… | ⚙️ Setup Prerequisites n8n instance (self-hosted) Google account with Sheets access Scrape.do* account with *SERP API token** Google Sheet Structure This workflow uses one Google Sheet with two tabs: Input Tab: "Keywords" | Column | Type | Description | Example | |----------|------|-------------|---------| | Keyword | Text | Search query | digital marketing | | Target Country | Text | 2-letter ISO code | US | Output Tab: "Results" | Column | Type | Description | Example | |--------------------|-------|-------------|---------| | Keyword | Text | Original search term | digital marketing | | position | Number| SERP ranking | 1 | | websiteTitle | Text | Title of the page | Digital Marketing Software & Tools | | websiteUrl | URL | Website/page URL | https://www.hubspot.com/marketing | | websiteDescription | Text | Snippet text | Grow your business with HubSpot’s tools | 🛠 Step-by-Step Setup Import Workflow: Copy the JSON → n8n → Workflows → + Add → Import from JSON Configure **Scrape.do API**: Endpoint: https://api.scrape.do/ Parameter: token=YOUR_SCRAPEDO_TOKEN Add render=true for full HTML rendering Configure Google Sheets: Create a sheet with two tabs: Keywords (input), 
Results (output) Set up Google Sheets OAuth2 credentials in n8n Replace placeholders: YOUR_GOOGLE_SHEET_ID and YOUR_GOOGLE_SHEETS_CREDENTIAL_ID Run & Test: Add test data in Keywords tab Execute workflow → Check results in Results tab 🧰 How to Customize Add more fields**: Extend HTML parsing logic in the “Extract Competitor Data” node to capture extra data (e.g., domain, sitelinks). Filtering**: Exclude domains or results with custom rules. Batch Size**: Adjust “Process Keywords in Batches” for speed vs. rate-limits. Rate Limiting: Insert a **Wait node (e.g., 10–30 seconds) if API rate limits apply. Multi-Sheet Output**: Save per-country or per-keyword results into separate tabs. 📊 Use Cases SEO Competitor Analysis**: Identify top-ranking sites for target keywords Market Research**: See how SERPs differ by region Content Strategy**: Analyze titles & descriptions of competitor pages Agency Reporting**: Automate competitor SERP snapshots for clients 📈 Performance & Limits Single Keyword: ~10–20 seconds (depends on **Scrape.do response) Batch of 10**: 3–5 minutes typical Large Sets (50+)**: 20–40 minutes depending on API credits & batching API Calls: 1 **Scrape.do request per keyword Reliability**: 95%+ extraction success, 98%+ data accuracy 🧩 Troubleshooting API error** → Check YOUR_SCRAPEDO_TOKEN and API credits No keywords loaded** → Verify Google Sheet ID & tab name = Keywords Permission denied** → Re-authenticate Google Sheets OAuth2 in n8n Empty results** → Check parsing logic and verify search term validity Workflow stops early** → Ensure batching loop (SplitInBatches) is properly connected 🤝 Support & Community n8n Forum: https://community.n8n.io n8n Docs: https://docs.n8n.io Scrape.do Dashboard: https://dashboard.scrape.do 🎯 Final Notes This workflow provides a repeatable foundation for extracting competitor SERP rankings with Scrape.do and saving them to Google Sheets. You can extend it with filtering, richer parsing, or integration with reporting dashboards to create a fully automated SEO intelligence pipeline.
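For reference, the sketch below (Node 18+) shows how one Scrape.do SERP request can be assembled from a keyword and a two-letter country code, matching the token and render=true parameters described above. The gl/hl/num Google query parameters are assumptions about country targeting; the workflow's own nodes may build the URL differently, and the HTML parsing itself happens in the "Extract Competitor Data from HTML" node.

```javascript
// Sketch of building one Scrape.do SERP request from a keyword + target country.
// The gl/hl/num Google parameters are assumptions; the workflow's own nodes may differ.
const keyword = 'digital marketing services';
const country = 'US';

const googleUrl =
  'https://www.google.com/search?q=' + encodeURIComponent(keyword) +
  '&num=10&hl=en&gl=' + country.toLowerCase();

const scrapeDoUrl =
  'https://api.scrape.do/?token=YOUR_SCRAPEDO_TOKEN' +
  '&render=true' +
  '&url=' + encodeURIComponent(googleUrl);

const res = await fetch(scrapeDoUrl);
const html = await res.text();

// The "Extract Competitor Data from HTML" node then parses `html` into
// { position, websiteTitle, websiteUrl, websiteDescription } for the top 10 organic results.
console.log(html.length, 'bytes of SERP HTML for', keyword, country);
```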
by IranServer.com
Automate IP geolocation and HTTP port scanning with Google Sheets trigger

This n8n template automatically enriches IP addresses with geolocation data and performs HTTP port scanning when new IPs are added to a Google Sheets document. Perfect for network monitoring, security research, or maintaining an IP intelligence database.

Who's it for

Network administrators, security researchers, and IT professionals who need to:

- Track IP geolocation information automatically
- Monitor HTTP service availability across multiple ports
- Maintain centralized IP intelligence in spreadsheets
- Automate repetitive network reconnaissance tasks

How it works

The workflow triggers whenever a new row containing an IP address is added to your Google Sheet. It then:

1. Fetches geolocation data using the ip-api.com service to get country, city, coordinates, ISP, and organization information
2. Updates the spreadsheet with the geolocation details
3. Scans common HTTP ports (80, 443, 8080, 8000, 3000) to check service availability
4. Records port status back to the same spreadsheet row, showing which services are accessible

The workflow handles both successful connections and various error conditions, providing a comprehensive view of each IP's network profile.

Requirements

- **Google Sheets API access** - for reading triggers and updating data
- **Google Sheets document** with at least an "IP" column header

How to set up

1. Create a Google Sheet with columns: IP, Country, City, Lat, Lon, ISP, Org, Port_80, Port_443, Port_8000, Port_8080, Port_3000
2. Configure Google Sheets credentials in both the trigger and update nodes
3. Update the document ID in the Google Sheets Trigger and both Update nodes to point to your spreadsheet
4. Test the workflow by adding an IP address to your sheet and verifying the automation runs

How to customize the workflow

- **Modify port list**: Edit the "Edit Fields" node to scan different ports by changing the ports array
- **Add more geolocation fields**: The ip-api.com response includes additional fields like timezone, zip code, and AS number
- **Change trigger frequency**: Adjust the polling interval in the Google Sheets Trigger for faster or slower monitoring
- **Add notifications**: Insert Slack, email, or webhook nodes to alert when specific conditions are detected
- **Filter results**: Add IF nodes to process only certain IP ranges or geolocation criteria
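To make the two enrichment steps concrete, here is a small Node 18+ sketch of the same logic the workflow performs: a geolocation lookup against ip-api.com followed by HTTP probes of ports 80, 443, 8080, 8000, and 3000. Note that "open" below only means an HTTP(S) response arrived before the timeout; a silent service, a non-HTTP service, or a TLS certificate that does not match the bare IP will still register as closed.

```javascript
// Node 18+ sketch of the two enrichment steps: geolocation lookup + HTTP port probing.
const ip = '8.8.8.8';
const ports = [80, 443, 8080, 8000, 3000];

// 1. Geolocation from ip-api.com (free endpoint, no key required)
const geo = await (await fetch(`http://ip-api.com/json/${ip}`)).json();
console.log(geo.country, geo.city, geo.lat, geo.lon, geo.isp, geo.org);

// 2. Probe each port: "open" here only means an HTTP(S) response arrived before the timeout.
//    HTTPS probes against a bare IP often fail certificate checks even when the port is open.
async function probe(port) {
  const scheme = port === 443 ? 'https' : 'http';
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000);
  try {
    const res = await fetch(`${scheme}://${ip}:${port}/`, { signal: controller.signal });
    return `open (HTTP ${res.status})`;
  } catch {
    return 'closed / no HTTP response';
  } finally {
    clearTimeout(timer);
  }
}

for (const port of ports) {
  console.log(`Port_${port}:`, await probe(port));
}
```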
by vinci-king-01
Daily Stock Regulatory News Aggregator with Compliance Alerts and Google Sheets Tracking 🎯 Target Audience Compliance officers and regulatory teams Financial services firms monitoring regulatory updates Investment advisors tracking regulatory changes Risk management professionals Corporate legal departments Stock traders and analysts monitoring regulatory news 🚀 Problem Statement Manually monitoring regulatory updates from multiple agencies (SEC, FINRA, ESMA) is time-consuming and error-prone. This template automates daily regulatory news monitoring, aggregates updates from major regulatory bodies, filters for recent announcements, and instantly alerts compliance teams to critical regulatory changes, enabling timely responses and maintaining regulatory compliance. 🔧 How it Works This workflow automatically monitors regulatory news daily, scrapes the latest updates from major regulatory agencies using AI-powered web scraping, filters for updates from the last 24 hours, and sends Slack alerts while logging all updates to Google Sheets for historical tracking. Key Components Daily Schedule Trigger - Automatically runs the workflow every 24 hours to check for regulatory updates Regulatory Sources Configuration - Defines the list of regulatory agencies and their URLs to monitor (SEC, FINRA, ESMA) Batch Processing - Iterates through regulatory sources one at a time for reliable processing AI-Powered Scraping - Uses ScrapeGraphAI to intelligently extract regulatory updates including title, summary, date, agency, and source URL Data Flattening - Transforms scraped data structure into individual update records Time Filtering - Filters updates to keep only those from the last 24 hours Historical Tracking - Logs all filtered updates to Google Sheets for compliance records Compliance Alerts - Sends Slack notifications to compliance teams when new regulatory updates are detected 💰 Key Features Automated Regulatory Monitoring Daily Execution**: Runs automatically every 24 hours without manual intervention Multi-Agency Support**: Monitors SEC, FINRA, and ESMA simultaneously Error Handling**: Gracefully handles scraping errors and continues processing other sources Smart Filtering Time-Based Filtering**: Automatically filters updates to show only those from the last 24 hours Date Validation**: Discards updates with unreadable or invalid dates Recent Updates Focus**: Ensures compliance teams only receive actionable, timely information Alert System Compliance Alerts**: Instant Slack notifications for new regulatory updates Structured Data**: Alerts include title, summary, date, agency, and source URL Dedicated Channel**: Posts to designated compliance alerts channel for team visibility 📊 Output Specifications The workflow generates and stores structured data including: | Output Type | Format | Description | Example | |-------------|--------|-------------|---------| | Regulatory Updates | JSON Object | Extracted regulatory update information | {"title": "SEC Announces New Rule", "date": "2024-01-15", "agency": "SEC"} | | Update History | Google Sheets | Historical regulatory update records with timestamps | Columns: Title, Summary, Date, Agency, Source URL, Scraped At | | Slack Alerts | Messages | Compliance notifications for new updates | "📢 New SEC update: [Title] - [Summary]" | | Error Logs | System Logs | Scraping error notifications | "❌ Error scraping FINRA updates" | 🛠️ Setup Instructions Estimated setup time: 15-20 minutes Prerequisites n8n instance with community nodes enabled ScrapeGraphAI API 
account and credentials Google Sheets API access (OAuth2) Slack workspace with API access Google Sheets spreadsheet for regulatory update tracking Step-by-Step Configuration 1. Install Community Nodes Install ScrapeGraphAI community node npm install n8n-nodes-scrapegraphai 2. Configure ScrapeGraphAI Credentials Navigate to Credentials in your n8n instance Add new ScrapeGraphAI API credentials Enter your API key from ScrapeGraphAI dashboard Test the connection to ensure it's working 3. Set up Google Sheets Connection Add Google Sheets OAuth2 credentials Authorize access to your Google account Create or identify the spreadsheet for regulatory update tracking Note the spreadsheet ID and sheet name (default: "RegUpdates") 4. Configure Slack Integration Add Slack API credentials to your n8n instance Create or identify Slack channel: #compliance-alerts Test Slack connection with a sample message Ensure the bot has permission to post messages 5. Customize Regulatory Sources Open the "Regulatory Sources" Code node Update the urls array with additional regulatory sources if needed: const urls = [ 'https://www.sec.gov/news/pressreleases', 'https://www.finra.org/rules-guidance/notices', 'https://www.esma.europa.eu/press-news', // Add more URLs as needed ]; 6. Configure Google Sheets Update documentId in "Log to Google Sheets" node with your spreadsheet ID Update sheetName to match your sheet name (default: "RegUpdates") Ensure the sheet has columns: Title, Summary, Date, Agency, Source URL, Scraped At Create the sheet with proper column headers if starting fresh 7. Customize Slack Channel Open "Send Compliance Alert" Slack node Update the channel name (default: "#compliance-alerts") Customize the message format if needed Test with a sample message 8. Adjust Schedule Open "Daily Regulatory Poll" Schedule Trigger Modify hoursInterval to change frequency (default: 24 hours) Set specific times if needed for daily execution 9. Customize Scraping Prompt Open "Scrape Regulatory Updates" ScrapeGraphAI node Adjust the userPrompt to extract different or additional fields Modify the JSON schema in the prompt if needed Change the number of updates extracted (default: 5 most recent) 10. 
Test and Validate Run the workflow manually to verify all connections Check Google Sheets for data structure and format Verify Slack alerts are working correctly Test error handling with invalid URLs Validate date filtering is working properly 🔄 Workflow Customization Options Modify Monitoring Frequency Change hoursInterval in Schedule Trigger for different frequencies Switch to multiple times per day for critical monitoring Add multiple schedule triggers for different agency checks Extend Data Collection Modify ScrapeGraphAI prompt to extract additional fields (documents, categories, impact level) Add data enrichment nodes for risk assessment Integrate with regulatory databases for more comprehensive tracking Add sentiment analysis for regulatory updates Enhance Alert System Add email notifications alongside Slack alerts Create different alert channels for different agencies Add priority-based alerting based on update keywords Integrate with SMS or push notification services Add webhook integrations for other compliance tools Advanced Analytics Add data visualization nodes for regulatory trend analysis Create automated compliance reports with summaries Integrate with business intelligence tools Add machine learning for update categorization Track regulatory themes and topics over time Multi-Source Support Add support for additional regulatory agencies Implement agency-specific scraping strategies Add regional regulatory sources (FCA, BaFin, etc.) Include state-level regulatory updates 📈 Use Cases Compliance Monitoring**: Automatically track regulatory updates to ensure timely compliance responses Risk Management**: Monitor regulatory changes that may impact business operations or investments Regulatory Intelligence**: Build historical databases of regulatory announcements for trend analysis Client Communication**: Stay informed to provide timely updates to clients about regulatory changes Legal Research**: Track regulatory developments for legal research and case preparation Investment Strategy**: Monitor regulatory changes that may affect investment decisions 🚨 Important Notes Respect website terms of service and rate limits when scraping regulatory sites Monitor ScrapeGraphAI API usage to manage costs Ensure Google Sheets has proper column structure before first run Set up Slack channel before running the workflow Consider implementing rate limiting for multiple regulatory sources Keep credentials secure and rotate them regularly Test with one regulatory source first before adding multiple sources Verify date formats are consistent across different regulatory agencies Be aware that some regulatory sites may have anti-scraping measures 🔧 Troubleshooting Common Issues: ScrapeGraphAI connection errors: Verify API key and account status Google Sheets logging failures: Check spreadsheet ID, sheet name, and column structure Slack notification failures: Verify channel name exists and bot has permissions Date filtering issues: Ensure dates from scraped content are in a parseable format Validation errors: Check that scraped data matches expected schema Empty results: Verify regulatory sites are accessible and haven't changed structure Optimization Tips: Start with one regulatory source to test the workflow Monitor API usage and costs regularly Use batch processing to avoid overwhelming scraping services Implement retry logic for failed scraping attempts Consider caching mechanisms for frequently checked sources Adjust the number of updates extracted based on typical volume Support Resources: 
- ScrapeGraphAI documentation and API reference
- Google Sheets API documentation
- Slack API documentation for webhooks
- n8n community forums for workflow assistance
- n8n documentation for node configuration
- SEC, FINRA, and ESMA official websites for source verification
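Finally, to illustrate the time-filtering step described under "How it Works", here is a minimal Code-node sketch that keeps only updates dated within the last 24 hours and discards unreadable dates. The item field names (title, date, agency, and so on) are assumptions about the scraping output.

```javascript
// Sketch of the 24-hour filter Code node: keep only updates dated within the last day.
// Assumes each item has { title, summary, date, agency, url } from the scraping step.
const DAY_MS = 24 * 60 * 60 * 1000;
const cutoff = Date.now() - DAY_MS;

const recent = [];
for (const item of $input.all()) {
  const update = item.json;
  const parsed = Date.parse(update.date);
  if (Number.isNaN(parsed)) continue;   // discard unreadable dates, as the workflow does
  if (parsed < cutoff) continue;        // older than 24 hours
  recent.push({
    json: { ...update, scrapedAt: new Date().toISOString() },
  });
}

return recent;
```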