by Davide
This workflow automates the process of collecting, analyzing, and storing Facebook post comments, with AI-powered sentiment analysis, for your own Facebook Page.

## Typical Use Cases
- Social media sentiment monitoring
- Brand reputation analysis
- Campaign performance evaluation
- Community management and moderation insights
- Reporting and analytics for marketing teams

## Key Advantages
- ✅ **Full Automation** – Eliminates manual work by automatically collecting and analyzing Facebook comments end-to-end.
- ✅ **AI-Powered Sentiment Analysis** – Uses Google Gemini to accurately classify user sentiment, enabling deeper insights into audience perception and engagement.
- ✅ **Structured Data Storage** – Saves results directly into Google Sheets, making the data easy to analyze, share, and visualize with dashboards or reports.
- ✅ **Duplicate-Safe Updates** – The "append or update" logic ensures comments are not duplicated and can be refreshed if the sentiment analysis changes.
- ✅ **Scalable and Robust** – Pagination handling, batch processing, and wait nodes allow the workflow to scale to large volumes of comments without hitting API limits.
- ✅ **Modular Architecture** – Sub-workflows make the solution reusable and easy to integrate into larger automation pipelines (e.g. monitoring multiple posts or pages).
- ✅ **Flexible Triggering** – Can be run manually for testing or automatically as part of a broader workflow ecosystem.

## How it works
This workflow fetches Facebook post comments, performs sentiment analysis on each comment, and stores the results in a Google Sheet. It operates in two modes:

- **Manual execution mode**: Starts with a Manual Trigger, where the user enters a Facebook Post ID. The workflow fetches the post details, then retrieves all comments (including pagination). It calls a separate "Facebook" workflow (via the Call 'Facebook' node) to process each comment batch through sentiment analysis and save the results to Google Sheets.
- **Triggered execution mode**: Activated via the "When Executed by Another Workflow" trigger, receiving comment data directly. It splits and batches the incoming comments, processes each through the sentiment analysis model (Google Gemini), and appends/updates records in Google Sheets.

## Set up steps
1. **Configure Facebook Graph API credentials**: Add your Facebook Graph API credentials to both the "Get Fb Post" and "Get Fb comments" nodes.
2. **Set up Google Gemini API credentials**: Configure the "Google Gemini Chat Model" node with valid Google PaLM/Gemini API credentials.
3. **Prepare the Google Sheet**: Ensure the Google Sheet exists and is accessible via the Google Sheets OAuth2 credentials. The sheet should have (or will automatically create) the columns: POST ID, COMMENT ID, COMMENT, SENTIMENT.
4. **Configure the sub-workflow call**: Ensure the "Call 'Facebook'" node points to a valid, existing workflow that can process comment data.
5. **Optional – adjust batch and wait settings**: Modify the "Loop Over Items" node if different batch sizes are needed, and adjust the "Wait" node delay if required to avoid rate limits.
6. **Activate the workflow**: Toggle the workflow to active if scheduled or webhook execution is desired. Test using the Manual Trigger with a sample Facebook Post ID.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
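As a rough illustration of the duplicate-safe logging step described above, a Code node placed before the Google Sheets "append or update" node might shape each analyzed comment into the expected columns like the sketch below. The input field names (postId, commentId, message, sentiment) are assumptions for illustration only; the actual template may map these values directly inside the Google Sheets node instead.

```javascript
// Hypothetical n8n Code node (Run Once for All Items):
// map each analyzed comment onto the sheet columns, matching on COMMENT ID
// so the "append or update" operation never duplicates a comment.
return $input.all().map(item => {
  const { postId, commentId, message, sentiment } = item.json;
  return {
    json: {
      'POST ID': postId,
      'COMMENT ID': commentId,                      // matching column for upserts
      'COMMENT': message,
      'SENTIMENT': (sentiment || 'neutral').toLowerCase(),
    },
  };
});
```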
by WeblineIndia
# (Retail) Auto-Tag High-Risk SKUs

This workflow automatically monitors product sales in your WooCommerce store, detects fast-selling items, applies risk tags and sends a clear alert to Slack, so you never miss products that need attention.

It checks your WooCommerce store every day, reviews product sales from the last 14 days and calculates how fast each product is selling. Based on sales volume, it assigns a risk level (OK, Watchlist, High-Risk or Critical), updates product tags in WooCommerce and sends a single, easy-to-read Slack alert for products that need attention.

You receive:
- **Daily automated sales analysis**
- **Automatic risk tagging inside WooCommerce**
- **One clean Slack alert** with product name, units sold and risk level

Ideal for store owners and operations teams who want proactive inventory control without manual reports.

## Quick Start – Implementation Steps
1. Connect your WooCommerce API credentials.
2. Connect your Slack workspace and choose an alert channel.
3. Adjust sales thresholds if needed (optional).
4. Activate the workflow; daily monitoring starts automatically.

## What It Does
This workflow automates inventory risk detection:
1. Runs automatically on a daily schedule.
2. Fetches completed WooCommerce orders from the last 14 days.
3. Fetches product details from WooCommerce.
4. Counts how many units of each product were sold.
5. Assigns a risk level: OK, Watchlist, High-Risk or Critical.
6. Updates product tags in WooCommerce based on risk.
7. Combines all risky products into one list.
8. Sends a single Slack alert summarizing product name, units sold and risk level.

This prevents stock issues and highlights fast-selling products early.

## Who's It For
- WooCommerce store owners
- E-commerce operations teams
- Inventory & supply chain managers
- Marketing teams tracking fast-selling products
- Businesses managing limited or high-demand stock
- Anyone who wants automated inventory visibility

## Requirements to Use This Workflow
- **n8n instance** (cloud or self-hosted)
- **WooCommerce store** with REST API access
- **WooCommerce API keys** (Read + Write)
- **Slack workspace** with API access
- Basic understanding of WooCommerce products & orders

## How It Works
1. **Daily Trigger** – Workflow runs at a scheduled time.
2. **Fetch Orders** – Gets completed orders from the last 14 days.
3. **Fetch Products** – Retrieves product details.
4. **Calculate Sales & Risk** – Counts sold units and assigns a risk level.
5. **Split by Risk** – Routes products based on risk category.
6. **Update Product Tags** – Applies the correct WooCommerce tags.
7. **Merge Results** – Combines all risky products.
8. **Build Alert Message** – Creates a readable Slack message.
9. **Send Slack Alert** – Sends one summary alert to your team.

## Setup Steps
1. Import the workflow JSON into n8n.
2. Configure WooCommerce credentials in all WooCommerce nodes.
3. Ensure the risk tags exist in WooCommerce: Watchlist, High-Risk, Critical.
4. Connect your Slack API credentials and select the Slack channel for alerts.
5. Review or adjust the sales thresholds in the risk calculation node.
6. Activate the workflow.

## How To Customize Nodes
**Customize Risk Thresholds** – Update the Calculate Risk code node to change when products move into Watchlist, High-Risk or Critical (see the sketch below).

**Customize WooCommerce Tags** – Replace the tag IDs in the Update Product nodes with your own tag IDs.
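A minimal sketch of the kind of threshold logic the Calculate Risk code node might contain, assuming a `unitsSold` field computed from the last 14 days of orders; the actual thresholds and field names in the template may differ, so treat this purely as a starting point for your own tuning.

```javascript
// Hypothetical thresholds: tune these to your own 14-day sales volumes.
const WATCHLIST = 10;   // units sold
const HIGH_RISK = 25;
const CRITICAL  = 50;

return $input.all().map(item => {
  const unitsSold = item.json.unitsSold || 0;
  let risk = 'OK';
  if (unitsSold >= CRITICAL) risk = 'Critical';
  else if (unitsSold >= HIGH_RISK) risk = 'High-Risk';
  else if (unitsSold >= WATCHLIST) risk = 'Watchlist';
  return { json: { ...item.json, risk } };
});
```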
**Customize Slack Alerts** – You can add:
- Emojis
- Mentions (@channel, @team)
- Product links
- Stock status or category info

## Add-Ons (Optional Enhancements)
You can extend this workflow to:
- Include stock quantity checks
- Send separate alerts per risk level
- Create weekly or monthly summaries
- Store alerts in Google Sheets or Airtable
- Add email or SMS notifications
- Predict out-of-stock dates
- Add AI-based sales trend insights

## Use Case Examples
1. **Inventory Risk Monitoring** – Detect products that may go out of stock soon.
2. **Sales Trend Tracking** – Identify fast-selling products automatically.
3. **Operations Alerts** – Notify teams before stock issues occur.
4. **Marketing Signals** – Spot trending products for promotions.
5. **Daily Store Health Check** – Get a quick snapshot of product risk every day.

## Troubleshooting Guide
| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No Slack alert | No risky products | Check thresholds |
| Tags not updated | Wrong tag ID | Verify WooCommerce tag IDs |
| Units sold = 0 | Orders not completed | Check order status filter |
| Workflow not running | Schedule disabled | Enable Schedule Trigger |
| Slack error | Invalid credentials | Reconnect Slack account |

## Need Help?
If you need help customizing, scaling or extending this workflow, such as adding forecasting, dashboards or multi-store support, the WeblineIndia team can help you build production-ready e-commerce automation.
by Kendra McClanahan
# Champion Migration Tracker

Automatically detect when your champion contacts change companies and respond with intelligent, personalized AI outreach before your competitors do.

## THE PROBLEM
When champions move to new companies, sales teams lose track and miss high-value opportunities. Manual LinkedIn monitoring doesn't scale, and by the time you notice, the relationship has gone cold.

## THE SOLUTION
This workflow automates champion migration tracking end-to-end, combining Explorium's data intelligence with Claude AI agents to maintain relationships and prioritize opportunities.

## HOW IT WORKS
1. **Automated Job Change Detection**
   - Uses Explorium person enrichment to detect when champions move companies
   - Eliminates manual LinkedIn monitoring
   - Triggers immediately when employment changes
2. **Intelligent Company Enrichment**
   - Enriches new companies with Explorium data: firmographics, funding, tech stack, hiring velocity
   - Checks if the company already exists in your CRM (Customer vs Prospect)
   - Identifies open opportunities and account owners
3. **Multi-Dimensional Opportunity Scoring (0-100)**
   - **ICP Fit (40%)**: Company size, funding stage, revenue, tech stack alignment
   - **Relationship Strength (40%)**: Past deals influenced, relationship warmth, CRM status
   - **Timing (20%)**: Days at new company, recent funding/acquisition signals
   - Results in a Hot/Warm/Cold priority classification
4. **Smart Routing by Context**
   - **Customers**: Notify the account manager with a congratulations message
   - **Hot Prospects (75+ score)**: Draft detailed strategic outreach for rep review
5. **AI-Powered Personalization**
   - Claude AI agents generate contextually relevant emails
   - References the past relationship, deals influenced, and company intelligence
   - Adapts tone and content based on opportunity priority and CRM status

## DEMO SETUP (Google Sheets)
This demo uses Google Sheets for simplicity. For production use, replace it with your actual CRM: Salesforce, HubSpot, Pipedrive, or any CRM with an n8n integration.

Important fields to consider:

**Champions**: champion_id, name, email, company, title, last_checked_date, relationship_strength (Hot/Warm/Cold), last_contact_date, deals_influenced, relationship_notes, isChampion (TRUE/FALSE), linkedin_url, explorium_prospect_id

**Companies**: company_ID, companyName, domain, relationship_type (Customer/Prospect/None), open_opportunity (TRUE/FALSE), opportunity_stage, account_owner, account_owner_email, contractValue, notes, ExploriumBusinessID

## REQUIRED CREDENTIALS
- **Anthropic API Key** – Powers the Claude AI agents for email generation
- **Explorium API Key** – Provides person and company enrichment data
- **Google Sheets or your CRM (production)** – Data source and logging

## SETUP INSTRUCTIONS
1. Connect credentials in n8n Settings → Credentials.
2. Update data sources: replace the Google Sheets nodes with your CRM nodes (or create demo sheets with the structure above).
3. Configure scoring: adjust the ICP scoring criteria in the "Score Company" node to match your ideal customer profile.
4. Test with sample data: run with 2-3 test champions to verify routing and email generation.
5. Schedule the trigger: set it to run daily or weekly based on your needs.

## CUSTOMIZATION TIPS
- **Scoring Weights**: Adjust the 40/40/20 weighting in the scoring node to prioritize what matters most to your business
- **Tech Stack Matching**: Update the relevantTech array with tools your champions likely use
- **Email Tone**: Modify the Claude prompts to match your brand voice (formal, casual, technical, etc.)
- **Routing Logic**: Add additional branches for specific scenarios (e.g. churned customers, enterprise accounts)
- **Agentic Experience**: Consider adding an agent that sends the email for Cold prospects automatically
- **Integrations**: Add Slack notifications, CRM updates, or calendar booking links to the output

## BUSINESS VALUE
- **Prevent Revenue Leakage**: Never lose track of champion relationships
- **Prioritize Intelligently**: Focus on opportunities with the highest potential
- **Scale Relationship Building**: Automate what used to require manual effort
- **Act Before Competitors**: Reach out while champions are still settling into new roles
- **Data-Driven Decisions**: Quantifiable scores replace gut feelings

## USE CASES
- **Sales Teams**: Re-engage champions at new prospect companies
- **Customer Success**: Track champions who move to existing accounts
- **Account-Based Marketing**: Identify high-fit accounts through champion networks
- **Revenue Operations**: Automate champion tracking at scale

## NOTES
- **Production Recommendation**: Replace Google Sheets with your production CRM for real-time data
- **Privacy**: All API keys are credential-referenced (not hardcoded) for security
- **Explorium Credits**: Person + company enrichment uses ~2-3 credits per champion
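To make the 40/40/20 scoring concrete, here is a minimal, hypothetical sketch of how a scoring Code node could combine the three dimensions into a 0-100 score and a priority label. The input sub-score fields and the Warm/Cold cut-off are assumptions for illustration; only the 75+ "Hot" threshold and the weights come from the description above, and the template's "Score Company" node may structure this differently.

```javascript
// Hypothetical scoring node: each dimension is expected as a 0-100 sub-score.
const WEIGHTS = { icpFit: 0.4, relationship: 0.4, timing: 0.2 };

return $input.all().map(item => {
  const { icpFit = 0, relationship = 0, timing = 0 } = item.json;
  const score = Math.round(
    icpFit * WEIGHTS.icpFit +
    relationship * WEIGHTS.relationship +
    timing * WEIGHTS.timing
  );
  // 75+ routes as a Hot prospect per the workflow description; the Warm cut-off is illustrative.
  const priority = score >= 75 ? 'Hot' : score >= 50 ? 'Warm' : 'Cold';
  return { json: { ...item.json, score, priority } };
});
```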
by Pinecone
# Vacation rental property manager with multiple Assistants

🛠️ Read about how multi-domain RAG works and other use cases by working through this tutorial on the n8n Blog here.

This workflow shows how a vacation rental property manager can manage multiple properties, each with different information, using Pinecone Assistant. Guests can ask questions about their property and get a personalized answer back.

## What is Pinecone Assistant?
Pinecone Assistant allows you to build production-grade chat and agent-based applications quickly. It abstracts the complexities of implementing retrieval-augmented generation (RAG) systems by managing the chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you. Try it out.

## Prerequisites
- A Pinecone account
- A GCP project with the Google Drive API enabled and configured
- An OpenAI account and API key

## Setup
1. Create three Pinecone Assistants in the Pinecone Console here.
2. Name your Assistants n8n-vacation-rental-property-lakeside, n8n-vacation-rental-property-birchwood, and n8n-vacation-rental-property-hillcrest.
3. Use the Connect to Pinecone button to authenticate to Pinecone, or if you self-host n8n, create a Pinecone credential and add your Pinecone API key directly.
4. Set up your Google Drive OAuth2 API and OpenAI credentials in n8n.
5. Select your Assistant Name in each of the respective Pinecone Assistant nodes.
6. Ask Claude or ChatGPT to generate fictional data in markdown file(s) for each property. You'll use this data in the next step. Use this prompt:

> Generate fictional data in markdown format in multiple files for three fictional vacation rental properties in a fictional city. The rental property names are:
> - **Hillcrest Haven** – cozy hillside cottage
> - **Birchwood Retreat** – wooded cabin
> - **Lakeside Loft** – modern loft near the water
>
> Include house manual and rules, wifi codes, local restaurant, coffee shop, outdoor recreation, and entertainment recommendations, and fictional appliance manuals for the air fryer, coffee pot, tv, and washer and dryer. All addresses, cities, names, phone numbers should be fictional. Each set of files should be named based on their property name like "hillcrest_haven_house_manual.md".

Note: If you don't want to generate your own files, you can use these files.

7. Add the files to three separate Drive folders named lakeside, birchwood, and hillcrest.
8. Activate the workflow to upload the documents to Pinecone.
9. Once the data is uploaded, ask questions in the chat about a property, for example:
- I need help with the coffee maker
- The air fryer isn't working at the Lakeside property

## Ideas for customizing this workflow
- This workflow uses one Assistant per property. You could also use a single Assistant and separate the data by setting a metadata field, property, to the name of the property each file is for.
- Use your own data and customize to your use case with multiple store locations, restaurants, teams, etc.

## Need help?
You can find help by asking in the Pinecone Discord community or filing an issue on this repo.
by vinci-king-01
# Job Posting Aggregator with Email and GitHub

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically aggregates certification-related job-posting requirements from multiple industry sources, compares them against last year's data stored in GitHub, and emails a concise change log to subscribed professionals. It streamlines annual requirement checks and renewal reminders, ensuring users never miss an update.

## Pre-conditions/Requirements

### Prerequisites
- n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Git installed (for optional local testing of the repo)
- Working SMTP server or another Email credential supported by n8n

### Required Credentials
- **ScrapeGraphAI API Key** – Enables web scraping of certification pages
- **GitHub Personal Access Token** – Allows the workflow to read/write files in the repo
- **Email / SMTP Credentials** – Sends the summary email to end-users

### Specific Setup Requirements
| Resource | Purpose | Example |
|----------|---------|---------|
| GitHub Repository | Stores certification_requirements.json versioned annually | https://github.com/<you>/cert-requirements.git |
| Watch List File | List of page URLs & selectors to scrape | Saved in the repo under /config/watchList.json |
| Email List | Semicolon-separated list of recipients | me@company.com;team@company.com |

## How it works
Key steps:
1. **Manual Trigger**: Starts the workflow on demand or via a scheduled cron.
2. **Load Watch List (Code Node)**: Reads the list of certification URLs and CSS selectors.
3. **Split In Batches**: Iterates through each URL to avoid rate limits.
4. **ScrapeGraphAI**: Scrapes requirement details from each page.
5. **Merge (Wait)**: Reassembles individual scrape results into a single JSON array.
6. **GitHub (Read File)**: Retrieves last year's certification_requirements.json.
7. **IF (Change Detector)**: Compares the current vs. previous JSON and decides whether changes exist.
8. **Email Send**: Composes and sends a formatted summary of changes.
9. **GitHub (Upsert File)**: Commits the new JSON file back to the repo for future comparisons.

## Set up steps
Setup Time: 15-25 minutes

1. **Install the community node**: From the n8n UI → Settings → Community Nodes → search for and install "ScrapeGraphAI".
2. **Create/clone the GitHub repo**: Add an empty certification_requirements.json ( {} ) and a config/watchList.json with an array of objects like: [ { "url": "https://cert-body.org/requirements", "selector": "#requirements" } ]
3. **Generate a GitHub PAT**: Scope repo, store it in n8n Credentials as "GitHub API".
4. **Add the ScrapeGraphAI credential**: Paste your API key into n8n Credentials.
5. **Configure email credentials**: E.g. SMTP with username/password or OAuth2.
6. **Open the workflow**: Import the template JSON into n8n.
7. **Update environment variables** (in the Code node or via n8n variables): GITHUB_REPO (e.g. user/cert-requirements) and EMAIL_RECIPIENTS.
8. **Test run**: Trigger manually. Verify the email content and GitHub commit.
9. **Schedule**: Add a Cron node (optional) for yearly or quarterly automatic runs.
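As a rough illustration of what the IF (Change Detector) step has to decide, a Code node placed just before it could diff the freshly scraped data against last year's file along these lines. The field names mirror the data output format shown further below (current, previous, changes), but the exact comparison in the template may be hash-based instead.

```javascript
// Hypothetical diff between the new scrape and last year's snapshot.
const { current = {}, previous = {} } = $input.first().json;

const changes = {};
for (const cert of Object.keys(current)) {
  if (current[cert] !== previous[cert]) {
    changes[cert] = `Was: ${previous[cert] ?? 'not tracked'} | Now: ${current[cert]}`;
  }
}

return [{
  json: { current, previous, changes, changesDetected: Object.keys(changes).length > 0 },
}];
```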
## Node Descriptions
Core workflow nodes:
- **Manual Trigger** – Initiates the workflow manually or via an external schedule.
- **Code (Load Watch List)** – Reads and parses watchList.json from GitHub or static input.
- **SplitInBatches** – Controls request concurrency to avoid scraping bans.
- **ScrapeGraphAI** – Extracts requirement text using the provided CSS selectors or XPath.
- **Merge (Combine)** – Waits for all batches and merges them into one dataset.
- **GitHub (Read/Write File)** – Handles version-controlled storage of the JSON data.
- **IF (Change Detector)** – Compares hashes/JSON diffs to detect updates.
- **EmailSend** – Sends the change log, including renewal reminders and a diff summary.
- **Sticky Note** – Provides in-workflow documentation for future editors.

Data flow:
- Manual Trigger → Code (Load Watch List) → SplitInBatches
- SplitInBatches → ScrapeGraphAI → Merge
- Merge → GitHub (Read File) → IF (Change Detector)
- IF (True) → Email Send → GitHub (Upsert File)

## Customization Examples

Adjusting the scraper configuration:

```
// Inside the Watch List JSON object
{
  "url": "https://new-association.com/cert-update",
  "selector": ".content article:nth-of-type(1) ul"
}
```

Custom email template:

```
// In Email Send node → HTML Content
📋 Certification Updates – {{ $json.date }}
The following certifications have new requirements:
{{ $json.diffHtml }}
For full details visit our GitHub repo.
```

## Data Output Format
The workflow outputs structured JSON data:

```json
{
  "timestamp": "2024-09-01T12:00:00Z",
  "source": "watchList.json",
  "current": {
    "AWS-SAA": "Version 3.0, requires renewed proctored exam",
    "PMP": "60 PDUs every 3 years"
  },
  "previous": {
    "AWS-SAA": "Version 2.0",
    "PMP": "60 PDUs every 3 years"
  },
  "changes": {
    "AWS-SAA": "Updated to Version 3.0; exam format changed."
  }
}
```

## Troubleshooting
Common issues:
- **ScrapeGraphAI returns empty data** – Check the CSS/XPath selectors and ensure the page is publicly accessible.
- **GitHub authentication fails** – Verify the PAT scope includes repo and that the credential is linked in both GitHub nodes.

Performance tips:
- Limit the SplitInBatches size to 3-5 URLs when sources are heavy, to avoid timeouts.
- Enable the n8n "Queue" execution mode for long-running scrapes.

Pro tips:
- Store selector samples in comments next to each watch list entry for future maintenance.
- Use a Cron node set to "0 0 1 1 *" for an annual run exactly on Jan 1st.
- Add a Telegram node after Email Send for instant mobile notifications.
by vinci-king-01
# Certification Requirement Tracker with Rocket.Chat and GitLab

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically monitors the websites of certification bodies and industry associations, detects changes in certification requirements, commits the updated information to a GitLab repository, and notifies a Rocket.Chat channel. Ideal for professionals and compliance teams who must stay ahead of annual updates and renewal deadlines.

## Pre-conditions/Requirements

### Prerequisites
- Running n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed and active
- Rocket.Chat workspace (self-hosted or cloud)
- GitLab account and repository for documentation
- Publicly reachable URL for incoming webhooks (use n8n tunnel, Ngrok, or a reverse proxy)

### Required Credentials
- **ScrapeGraphAI API Key** – Enables scraping of certification pages
- **Rocket.Chat Access Token & Server URL** – To post update messages
- **GitLab Personal Access Token** – With api and write_repository scopes

### Specific Setup Requirements
| Item | Example Value | Notes |
|------|---------------|-------|
| GitLab Repo | gitlab.com/company/cert-tracker | Markdown files will be committed here |
| Rocket.Chat Channel | #certification-updates | Receives update alerts |
| Certification Source URLs file | /data/sourceList.json in the repository | List of URLs to scrape |

## How it works
Key steps:
1. **Webhook Trigger**: Fires on a scheduled HTTP call (e.g. via cron) or a manual trigger.
2. **Code (Prepare Source List)**: Reads/constructs the list of certification URLs to scrape.
3. **ScrapeGraphAI**: Fetches HTML content and extracts the requirement sections.
4. **Merge**: Combines the newly scraped data with the last committed snapshot.
5. **IF Node**: Determines whether a change occurred (hash/length comparison).
6. **GitLab**: Creates a branch, commits the updated Markdown/JSON files, and opens an MR (optional).
7. **Rocket.Chat**: Posts a message summarizing the changes and linking to the GitLab diff.
8. **Respond to Webhook**: Returns a JSON summary to the requester (useful for monitoring or chained automations).

## Set up steps
Setup Time: 20-30 minutes

1. **Install the community node**: In the n8n UI, go to Settings → Community Nodes and install @n8n/community-node-scrapegraphai.
2. **Create credentials**:
   a. ScrapeGraphAI – paste your API key.
   b. Rocket.Chat – create a personal access token (Personal Access Tokens → New Token) and configure the credentials.
   c. GitLab – create a PAT with api + write_repository scopes and add it to n8n.
3. **Clone the template**: Import this workflow JSON into your n8n instance.
4. **Edit the Sticky Note**: Replace the placeholder URLs with actual certification-source URLs, or point to a repo file.
5. **Configure the GitLab node**: Set your repository, default branch, and commit message template.
6. **Configure the Rocket.Chat node**: Select the credential, channel, and message template (markdown supported).
7. **Expose the webhook**: If self-hosting, enable the n8n tunnel or configure a reverse proxy to make the webhook public.
8. **Test run**: Trigger the workflow manually; verify the GitLab commit/MR and the Rocket.Chat notification.
9. **Automate**: Schedule an external cron (or n8n Cron node) to POST to the webhook yearly, quarterly, or monthly as needed.

## Node Descriptions
Core workflow nodes:
- **stickyNote** – Human-readable instructions/documentation embedded in the flow.
- **webhook** – Entry point; accepts POST /cert-tracker requests.
- **code (Prepare Source List)** – Generates an array of URLs; can pull from GitLab or an environment variable.
- **scrapegraphAi** – Scrapes each URL and extracts certification requirement sections using CSS/XPath selectors.
- **merge (by key)** – Joins new data with the previous snapshot for change detection.
- **if (Changes?)** – Branches the logic based on whether differences exist.
- **gitlab** – Creates/updates files and opens merge requests containing the new requirements.
- **rocketchat** – Sends a formatted update to the designated channel.
- **respondToWebhook** – Returns 200 OK with a JSON summary.

Data flow:
- webhook → code → scrapegraphAi → merge → if
- if (true) → gitlab → rocketchat
- if (false) → respondToWebhook

## Customization Examples

Change the scraping frequency:

```
// Replace external cron with n8n Cron node
{
  "nodes": [
    {
      "name": "Cron",
      "type": "n8n-nodes-base.cron",
      "parameters": {
        "schedule": { "hour": "0", "minute": "0", "dayOfMonth": "1" }
      }
    }
  ]
}
```

Extend the notification message:

```javascript
// Rocket.Chat node → Message field
const diffUrl = $json["gitlab_diff_url"];
const count = $json["changes_count"];
return `:bell: ${count} Certification Requirement Update(s)\n\nView diff: ${diffUrl}`;
```

## Data Output Format
The workflow outputs structured JSON data:

```json
{
  "timestamp": "2024-05-15T12:00:00Z",
  "changesDetected": true,
  "changesCount": 3,
  "gitlab_commit_sha": "a1b2c3d4",
  "gitlab_diff_url": "https://gitlab.com/company/cert-tracker/-/merge_requests/42",
  "notifiedChannel": "#certification-updates"
}
```

## Troubleshooting
Common issues:
- **ScrapeGraphAI returns empty results** – Verify your CSS/XPath selectors and API key quota.
- **GitLab commit fails (401 Unauthorized)** – Ensure the PAT has the api and write_repository scopes and is not expired.

Performance tips:
- Limit the number of pages scraped per run to avoid API rate limits.
- Cache the last-scraped HTML in an S3 bucket or database to reduce redundant requests.

Pro tips:
- Use GitLab CI to auto-deploy a documentation site whenever new certification files are merged.
- Enable Rocket.Chat threading to keep discussions organized per update.
- Tag stakeholders in Rocket.Chat messages with @cert-team for instant visibility.
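The "if (Changes?)" branch relies on comparing the fresh scrape with the previous snapshot. Below is a minimal, hypothetical way to do a hash comparison in a preceding Code node; the template itself may use a length or field-by-field comparison instead, and the field names (scraped, snapshot) are assumptions.

```javascript
// Hypothetical hash-based change check.
// On self-hosted n8n, built-in modules may need NODE_FUNCTION_ALLOW_BUILTIN=crypto.
const crypto = require('crypto');
const hashOf = (value) =>
  crypto.createHash('sha256').update(JSON.stringify(value)).digest('hex');

const item = $input.first().json;
const currentHash = hashOf(item.scraped);    // newly scraped requirement sections
const previousHash = hashOf(item.snapshot);  // last committed snapshot from GitLab

return [{ json: { ...item, changesDetected: currentHash !== previousHash } }];
```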
by Samir Saci
Tags: Image Compression, Tinify API, TinyPNG, SEO Optimisation, E-commerce, Marketing

## Context
Hi! I'm Samir Saci, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen.

I built this workflow for an agency specialising in e-commerce to automate the daily compression of the images stored in a Google Drive folder. This is particularly useful when managing large libraries of product photos, website assets or marketing visuals that need to stay lightweight for SEO, website performance or storage optimisation.

> Test this workflow with the free tier of the API!

📬 For business inquiries, you can find me on LinkedIn.

## Who is this template for?
This template is designed for:
- **E-commerce managers** who need to keep product images optimised
- **Marketing teams** handling large volumes of visuals
- **Website owners** wanting automatic image compression for SEO
- **Anyone using Google Drive** to store images that gradually become too heavy

## What does this workflow do?
This workflow acts as an automated image compressor and reporting system using Tinify, Google Drive, and Gmail:
1. Runs every day at 08:00 using a Schedule Trigger
2. Fetches all images from the Google Drive Input folder
3. Downloads each file and sends it to the Tinify API for compression
4. Downloads the optimised image and saves it to the Compressed folder
5. Moves the original file to the Original Images archive
6. Logs fileName, originalSize, compressedSize, imageId, outputUrl and processingId into a Data Table
7. After processing, retrieves all logs for the current batch
8. Generates a clean HTML report summarising the compression results
9. Sends the report via Gmail, including the total space saved

Here is an example from my personal folder:

Here is the report generated for these images:

P.S.: You can customise the report to match your company branding or visual identity.

## 🎥 Tutorial
A complete tutorial (with explanations of every node) is available on YouTube.

## Next Steps
Before running the workflow, follow the sticky notes and configure the following:
1. Get your Tinify API key for the free tier: Get your key
2. Replace the Google Drive folder IDs in: Input, Compressed, and Original Images
3. Replace the Data Table reference with your own (required fields: fileName, originalSize, compressedSize, imageId, outputUrl, processingId)
4. Add your Tinify API key in the HTTP Basic Auth credentials
5. Set up your Gmail credentials and recipient email
6. (Optional) Customise the HTML report in the Generate Report Code node
7. (Optional) Adjust the daily schedule to your preferred time

Submitted: 18 November 2025
Template designed with n8n version 1.116.2
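For reference, the compression step described above boils down to one authenticated call to Tinify's shrink endpoint. The sketch below shows the equivalent request from a Code node, assuming the binary image is available on the incoming item; in the template this is handled by an HTTP Request node with Basic Auth (username `api`, password = your Tinify key), so treat this only as an illustration of the API shape.

```javascript
// Sketch only: the template performs this with an HTTP Request node.
const apiKey = 'YOUR_TINIFY_API_KEY'; // stored as an n8n credential in the template
const imageBuffer = await this.helpers.getBinaryDataBuffer(0, 'data'); // binary from the Drive download

const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.tinify.com/shrink',
  body: imageBuffer,
  headers: {
    Authorization: 'Basic ' + Buffer.from(`api:${apiKey}`).toString('base64'),
  },
});

// Tinify returns JSON such as { input: {...}, output: { url, size, ... } }
const result = typeof response === 'string' ? JSON.parse(response) : response;
return [{ json: { outputUrl: result.output.url, compressedSize: result.output.size } }];
```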
by Omer Fayyaz
This n8n template implements a Calendly Booking Link Generator that creates single-use, personalized booking links, logs them to Google Sheets, and optionally notifies a Slack channel.

## Who's it for
This template is designed for teams and businesses that send Calendly links proactively and want to generate trackable, single-use booking links on demand. It's perfect for:
- **Sales and SDR teams** sending 1:1 outreach and needing unique booking links per prospect
- **Customer success and support teams** who want prefilled, one-click rescheduling or follow-up links
- **Marketing and growth teams** that want UTM-tagged booking links for campaigns
- **Ops/RevOps** who need a central log of every generated link for tracking and reporting

## How it works / What it does
This workflow turns a simple HTTP request into a fully configured single-use Calendly booking link:

1. **Webhook Trigger (POST)** – Receives a JSON payload with recipient details: name, email, optional event_type_uri, optional utm_source.
2. **Configuration & Input Normalization** – Set Configuration extracts and normalizes recipient_name, recipient_email, requested_event_type (can be empty), and utm_source (defaults to "n8n" if not provided).
3. **Calendly API – User & Event Types**
   - Get Current User calls GET /users/me using Calendly OAuth2 to get the current user URI
   - Extract User stores user_uri and user_name
   - Get Event Types calls GET /event_types?user={user_uri}&active=true to fetch active event types
   - Select Event Type uses requested_event_type if provided, otherwise selects the first active event type, and stores the event type URI, name, and duration (minutes)
4. **Create Calendly Single-Use Scheduling Link** – Create Single-Use Link calls POST /scheduling_links with owner (the selected event type URI), owner_type: "EventType", and max_event_count: 1 (single use).
5. **Build Personalized Booking URL** – Build Personalized Link reads the base booking_url from Calendly and appends query parameters to prefill name (encoded), email (encoded), and utm_source. It stores base_booking_url, personalized_booking_url, recipient_name, recipient_email, event_type_name, event_duration, and link_created_at (ISO timestamp).
6. **Optional Logging and Notifications**
   - Log to Google Sheets (optional but preconfigured): appends each generated link to a "Generated Links" sheet with the columns Recipient Name, Recipient Email, Event Type, Duration (min), Booking URL, Created At, Status
   - Notify via Slack (optional): posts a nicely formatted Slack message with the recipient name & email, event name & duration, and a clickable booking link
7. **API Response to Caller** – Respond to Webhook returns a structured JSON response: success, booking_url (personalized), base_url, recipient object, event object (name + duration), created_at, and an expires explanation ("Single-use or 90 days").

The result is an API-style service you can call from any system to generate trackable, single-use Calendly links.

## How to set up

### 1. Calendly OAuth2 setup
- Go to calendly.com/integrations or developer.calendly.com
- Create an OAuth2 application (or use an existing one)
- In n8n, create Calendly OAuth2 credentials: add the client ID, client secret, and redirect URL as required by Calendly, then connect your Calendly user account
- In the workflow, make sure all Calendly HTTP Request nodes use your Calendly OAuth2 credential
### 2. Webhook Trigger configuration
- Open the Webhook Trigger node and confirm: HTTP Method POST, Path generate-calendly-link, Response Mode "Response Node" (pointing to Respond to Webhook)
- Copy the Production URL from the node once the workflow is active
- Use this URL as the endpoint for your CRM, outbound tool, or any system that needs to request links

Expected request body:

```json
{
  "name": "John Doe",
  "email": "john@example.com",
  "event_type_uri": "optional",
  "utm_source": "optional"
}
```

If event_type_uri is not provided, the workflow automatically uses the first active event type for the current Calendly user.

### 3. Google Sheets setup (optional but recommended)
- Create a Google Sheet for tracking links and add a sheet/tab named e.g. "Generated Links"
- Set the header row to: Recipient Name, Recipient Email, Event Type, Duration (min), Booking URL, Created At, Status
- In n8n: create Google Sheets OAuth2 credentials, open the Log to Google Sheets node, and update documentId → your spreadsheet ID and sheetName → your tab name (e.g. "Generated Links")

### 4. Slack notification setup (optional)
- Create a Slack app at api.slack.com
- Add Bot Token scopes (for basic posting): chat:write and channels:read (or groups:read if posting to private channels)
- Install the app to your workspace and get the Bot User OAuth Token
- In n8n: create a Slack API credential using the bot token, open the Notify via Slack node, select your credential, and set select: channel and channelId to your desired channel (e.g. #sales or #booking-links)

### 5. Test the workflow end-to-end
- Activate the workflow
- Use Postman, curl, or another system to POST to the webhook URL, e.g.:

```json
{
  "name": "Test User",
  "email": "test@example.com"
}
```

- Verify that the HTTP response contains a valid booking_url, a new row is added to your Google Sheet (if configured), and a Slack notification is posted (if configured)

## Requirements
- **Calendly account** with at least one **active event type**
- **n8n instance** (cloud or self-hosted) with public access for the webhook
- **Calendly OAuth2 credentials** configured in n8n
- (Optional) Google Sheets account and OAuth2 credentials
- (Optional) Slack workspace with permissions to install a bot and post to channels

## How to customize the workflow

**Input & validation** – Update the Set Configuration node to enforce required fields (e.g. fail if email is missing) or add more optional parameters (e.g. utm_campaign, utm_medium, language). Add an IF node after the Webhook Trigger for stricter validation and custom error responses.

**Event type selection logic** – In Select Event Type, change the fallback selection rule (e.g. pick the longest or shortest duration event), or add logic to map a custom field (like event_key) to specific event type URIs.

**Link parameters & tracking** – In Build Personalized Link:
- Add additional query parameters (e.g. utm_campaign, source, segment)
- Remove or rename existing parameters if needed
- If you don't want prefilled name/email, remove those query parameters and just keep the tracking fields

**Google Sheets logging** – Extend the Log to Google Sheets mapping to include utm_source or other marketing attributes; the sales owner, campaign name, or pipeline stage; and any additional fields you compute in previous nodes.

**Slack notification formatting** – In Notify via Slack, adjust the message text to your team's tone, add emojis or @mentions for certain event types, and include utm_source or other metadata for debugging and tracking.

## Key features
- **Single-use Calendly links** – each generated link is limited to one booking (or expires after ~90 days)
- **Prefilled recipient details** – name and email are embedded in the URL, making it frictionless to book
- **Webhook-first design** – easily call this from CRMs, outreach tools, or any external system
- **Central link logging** – every link is stored in Google Sheets for auditing and reporting
- **Optional Slack alerts** – keep sales/support teams notified when new links are generated
- **Safe error handling** – HTTP nodes are configured with continueRegularOutput to avoid hard workflow failures

## Example scenarios

**Scenario 1: Sales outreach** – A CRM workflow triggers when a lead moves to "Meeting Requested". It calls this n8n webhook with the lead's name and email. The workflow generates a single-use Calendly link, logs it to Sheets, and posts to Slack. The CRM sends an email to the lead with the personalized booking link.

**Scenario 2: Automated follow-up link** – A support ticket is resolved and the system wants to offer a follow-up call. It calls the webhook with name, email, and a dedicated event_type_uri for "Follow-up Call". The generated link is logged and returned via API, then included in an automated email.

**Scenario 3: Campaign tracking** – A marketing automation tool triggers this webhook for each contact in a campaign, passing utm_source (e.g. q1-outbound). The workflow adds utm_source to the link and logs it in Google Sheets. Later, you can analyze which campaigns generated the most completed bookings from single-use links.

This template gives you a reliable, reusable Calendly link generation service that plugs into any part of your stack, while keeping tracking, logging, and team visibility fully automated.
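To illustrate the Build Personalized Link step, a Code node could assemble the prefilled URL roughly as follows. The input field names mirror the ones described above (booking_url, recipient_name, recipient_email, utm_source) but are assumptions about the exact node output; the actual template may do this with expressions in a Set node instead.

```javascript
// Hypothetical sketch: append prefill + tracking parameters to the single-use link.
const cfg = $input.first().json;
const base = cfg.booking_url; // returned by POST /scheduling_links

const qs = [
  `name=${encodeURIComponent(cfg.recipient_name)}`,
  `email=${encodeURIComponent(cfg.recipient_email)}`,
  `utm_source=${encodeURIComponent(cfg.utm_source || 'n8n')}`,
].join('&');

return [{
  json: {
    base_booking_url: base,
    personalized_booking_url: `${base}?${qs}`,
    link_created_at: new Date().toISOString(),
  },
}];
```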
by Jitesh Dugar
Revolutionize university admissions with intelligent, AI-driven application evaluation that analyzes student profiles, calculates eligibility scores, and automatically routes decisions, saving 2.5 hours per application and reducing decision time from weeks to hours.

## 🎯 What This Workflow Does
Transforms your admissions process from manual application review to intelligent automation:
1. 📝 **Captures Applications** – Jotform intake with student info, GPA, test scores, essay, extracurriculars
2. 🤖 **AI Holistic Evaluation** – OpenAI analyzes academic strength, essay quality, extracurriculars, and fit
3. 🎯 **Intelligent Scoring** – Evaluates students using 40% academics, 25% extracurriculars, 20% essay, 15% fit (0-100 scale)
4. 🚦 **Smart Routing** – Automatically routes based on the AI evaluation:
   - **Auto-Accept (95-100)**: Acceptance letter with scholarship details → admin alert → database
   - **Interview Required (70-94)**: Interview invitation with scheduling link → admin alert → database
   - **Reject (<70)**: Respectful rejection with improvement suggestions → database
5. 💰 **Scholarship Automation** – Calculates merit scholarships ($5k-$20k+) based on the eligibility score
6. 📊 **Analytics Tracking** – All applications are logged to Google Sheets for admissions insights

## ✨ Key Features
- **AI Holistic Evaluation**: Comprehensive analysis weighing academics, extracurriculars, essays, and institutional fit
- **Intelligent Scoring System**: 0-100 eligibility score with automated categorization and scholarship determination
- **Structured Output**: Consistent JSON schema with academic strength, admission likelihood, and decision reasoning
- **Automated Communication**: Personalized acceptance, interview, and rejection letters for every applicant
- **Fallback Scoring**: Manual GPA/SAT scoring if the AI fails, ensuring zero downtime
- **Admin Alerts**: Instant email notifications for exceptional high-scoring applicants (95+)
- **Comprehensive Analytics**: Track acceptance rates, average scores, scholarship distribution, and applicant demographics
- **Customizable Criteria**: Easy prompt editing to match your institution's values and requirements

## 💼 Perfect For
- **Universities & Colleges**: Processing 500+ undergraduate applications per semester
- **Graduate Programs**: Screening master's and PhD applications with consistent evaluation
- **Private Institutions**: Scaling admissions without expanding admissions staff
- **Community Colleges**: Handling high-volume transfer and new student applications
- **International Offices**: Evaluating global applicants 24/7 across all timezones
- **Scholarship Committees**: Identifying merit scholarship candidates automatically

## 🔧 What You'll Need

### Required Integrations
1. **Jotform** – Application form with student data collection (free tier works). Create your form for free on Jotform using this link, with fields: Name, Email, Phone, GPA, SAT Score, Major, Essay, Extracurriculars
2. **OpenAI API** – GPT-4o-mini for cost-effective AI evaluation (~$0.01-0.05 per application)
3. **Gmail** – Automated applicant communication (acceptance, interview, rejection letters)
4. **Google Sheets** – Application database and admissions analytics

### Optional Integrations
- **Slack** – Real-time alerts for exceptional applicants
- **Calendar APIs** – Automated interview scheduling
- **Student Information System (SIS)** – Push accepted students to the enrollment system
- **Document Analysis Tools** – OCR for transcript verification

## 🚀 Quick Start
1. **Import Template** – Copy the JSON and import it into n8n (requires LangChain support)
2. **Create Jotform** – Use the provided field structure (Name, Email, GPA, SAT, Major, Essay, etc.)
3. **Add API Keys** – OpenAI, Jotform, Gmail OAuth2, Google Sheets
4. **Customize AI Prompt** – Edit the admissions criteria with your university's specific requirements and values
5. **Set Score Thresholds** – Adjust the auto-accept (95+), interview (70-94), and reject (<70) cutoffs if needed
6. **Personalize Emails** – Update the templates with your university branding, dates, and contact info
7. **Create Google Sheet** – Set up the columns: id, Name, Email, GPA, SAT Score, Major, Essay, Extracurriculars
8. **Test & Deploy** – Submit a test application with pinned data and verify all nodes execute correctly

## 🎨 Customization Options
- **Adjust Evaluation Weights**: Change the academics (40%), extracurriculars (25%), essay (20%), and fit (15%) percentages
- **Multiple Programs**: Clone the workflow for different majors with unique evaluation criteria
- **Add Document Analysis**: Integrate OCR for transcript and recommendation letter verification
- **Interview Scheduling**: Connect Google Calendar or Calendly for automated booking
- **SIS Integration**: Push accepted students directly to Banner, Ellucian, or PeopleSoft
- **Waitlist Management**: Add conditional routing for borderline scores (65-69)
- **Diversity Tracking**: Include demographic fields and bias detection in the AI evaluation
- **Financial Aid Integration**: Automatically calculate need-based aid eligibility alongside merit scholarships

## 📈 Expected Results
- 90% reduction in manual application review time (from 2.5 hours to 15 minutes per application)
- 24-48 hour decision turnaround time vs. the 4-6 week traditional process
- 40% higher yield rate – faster responses increase enrollment commitment
- 100% consistency – every applicant evaluated with identical criteria
- Zero missed applications – automated tracking ensures no application falls through the cracks
- Data-driven admissions – comprehensive analytics on applicant pools and acceptance patterns
- Better applicant experience – professional, timely communication regardless of decision
- Defensible decisions – documented scoring criteria for accreditation and compliance

## 🏆 Use Cases
- **Large Public Universities** – Screen 5,000+ applications per semester, identify the top 20% for auto-admit, route borderline cases to committee review.
- **Selective Private Colleges** – Evaluate 500+ highly competitive applications, calculate merit scholarships automatically, schedule interviews with top candidates.
- **Graduate Programs** – Process master's and PhD applications with research experience weighting, flag candidates for faculty review, automate fellowship awards.
- **Community Colleges** – Handle high-volume open enrollment while identifying honors program candidates and scholarship recipients instantly.
- **International Admissions** – Evaluate global applicants 24/7, account for different GPA scales and testing systems, respond same-day regardless of timezone.
- **Rolling Admissions** – Provide instant decisions for early applicants, fill classes strategically, optimize scholarship budget allocation.
## 💡 Pro Tips
- **Calibrate Your AI**: After 100+ applications, refine the evaluation criteria based on enrolled student success
- **A/B Test Thresholds**: Experiment with score cutoffs (e.g. 93 vs 95 for auto-admit) to optimize yield
- **Build a Waitlist Pipeline**: Keep 70-84 score candidates engaged for spring enrollment or next year
- **Track Source Effectiveness**: Add UTM parameters to measure which recruiting channels deliver the best students
- **Committee Review**: Route 85-94 scores to a human admissions committee for final review
- **Bias Audits**: Quarterly review of AI decisions by demographic group to ensure fairness
- **Parent Communication**: Add parent/guardian emails for admitted students under 18
- **Financial Aid Coordination**: Sync scholarship awards with the financial aid office for packaging

## 🎓 Learning Resources
This workflow demonstrates:
- **AI Agents with structured output** – LangChain integration for consistent JSON responses
- **Multi-stage conditional routing** – IF nodes for three-tier decision logic
- **Holistic evaluation** – Weighted scoring across multiple dimensions
- **Automated communication** – HTML email templates with dynamic content
- **Real-time notifications** – Admin alerts for high-value applicants
- **Analytics and data logging** – Google Sheets integration for reporting
- **Fallback mechanisms** – Manual scoring when the AI is unavailable

Perfect for learning advanced n8n automation patterns in educational technology!

## 🔐 Compliance & Ethics
- **FERPA Compliance**: Protects student data with secure credential handling
- **Fair Admissions**: Documented criteria eliminate unconscious bias
- **Human Oversight**: Committee review option for borderline cases
- **Transparency**: Applicants can request the evaluation criteria
- **Appeals Process**: Structured workflow for decision reconsideration
- **Data Retention**: Configurable Google Sheets retention policies

## 📊 What Gets Tracked
- Application submission date and time
- Complete student profile (GPA, test scores, major, essay, activities)
- AI eligibility score (0-100) and decision category
- Academic strength rating (excellent/strong/average)
- Scholarship eligibility and amount ($0-$20,000+)
- Admission likelihood (high/medium/low)
- Decision outcome (accepted/interview/rejected)
- Email delivery status and open rates
- Time from application to decision

Ready to transform your admissions process? Import this template and start evaluating applications intelligently in under 1 hour. Questions or customization needs? The workflow includes detailed sticky notes explaining each section and comprehensive fallback logic for reliability.
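The fallback scoring path mentioned above (used when the AI call fails) can be as simple as a rule-based Code node. The sketch below is illustrative only: the GPA/SAT normalization, the 60/40 split, and the field names are assumptions, not the template's exact logic; only the 95/70 routing thresholds come from the description.

```javascript
// Hypothetical fallback: derive a 0-100 score from GPA and SAT alone when the AI step fails.
const a = $input.first().json;
const gpa = Number(a.gpa) || 0;       // assumes a 4.0 scale
const sat = Number(a.satScore) || 0;  // assumes a 1600 scale

const score = Math.round((gpa / 4.0) * 60 + (sat / 1600) * 40);

// Same routing thresholds as the AI path: 95+ auto-accept, 70-94 interview, below 70 reject.
const decision = score >= 95 ? 'accept' : score >= 70 ? 'interview' : 'reject';

return [{ json: { ...a, eligibilityScore: score, decision, scoredBy: 'fallback' } }];
```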
by Adam Goodyer
# Video Digestion Workflow — n8n Template

## How it works
This workflow takes any YouTube video URL and automatically extracts a rich, structured analysis — including transcript, key visual moments, video metadata, SEO keywords, and content section breakdowns. It's designed as the foundation layer for content repurposing, feeding its output into downstream workflows for creating Shorts, LinkedIn posts, Twitter threads, blog articles, email newsletters, and more.

The pipeline:
1. **YouTube URL Input** — A simple form trigger accepts any YouTube video URL.
2. **Video Download (Apify)** — Downloads the video file at 720p via the Apify YouTube Video Downloader actor.
3. **Transcript Extraction (Apify)** — Pulls the full transcript with timestamps from YouTube using the Apify YouTube Video Transcript actor. No audio processing needed — fast and reliable.
4. **Data Consolidation** — A Code node merges both Apify outputs into a single structured object containing: video URL, transcript text, timestamped segments, and video metadata (title, description, duration, channel info, like/comment counts, thumbnail, publish date).
5. **Visual Analysis (Google Gemini Pro)** — Sends the actual video to Gemini's video analysis endpoint, which watches the entire video and identifies key B-roll moments with precise timestamps, app detection, and webcam overlay awareness. It categorises clips as clean screen recordings vs. webcam overlays vs. talking head segments.
6. **Key Action Parsing** — Filters and categorises the Gemini output into usable clips, removing talking-head-only segments and incomplete data. Outputs chronologically sorted clips with cropping metadata for downstream video editing.
7. **AI Section Analysis (OpenAI)** — Sends the transcript + key moments to OpenAI with structured output (JSON schema) to generate: video summary, one-liner, main argument, target audience, content style, tone, key takeaways, problems addressed, tools mentioned, frameworks explained, suggested titles, and SEO keywords.
8. **Output** — The final structured payload is ready to pass to any downstream workflow (e.g. Shorts creation, social media posting, blog generation).

## Setup guide

### Required accounts & API keys
You'll need API credentials for the following services:

| Service | What it does | Sign up |
|---------|--------------|---------|
| Apify | YouTube video downloading + transcript extraction | https://apify.com |
| Google AI Studio (Gemini) | Video analysis — watches the video and detects key visual moments | https://aistudio.google.com |
| OpenAI | Structured content analysis with JSON schema output | https://platform.openai.com |

### Required Apify actors
You need to add these two Apify actors to your account:
- YouTube Video Downloader by epctex — https://apify.com/epctex/youtube-video-downloader
- YouTube Video Transcript by starvibe — https://apify.com/starvibe/youtube-video-transcript

### n8n credentials to configure
- **Apify API** — Add your Apify API token in n8n credentials
- **Google Gemini** — Add your Google AI Studio API key in n8n credentials
- **OpenAI** — Add your OpenAI API key in n8n credentials

### Steps
1. Import the workflow into n8n
2. Configure all three credential sets (Apify, Gemini, OpenAI)
3. Ensure both Apify actors are added to your Apify account
4. Activate the workflow
5. Open the form trigger URL and paste any YouTube video URL
6. The workflow outputs a comprehensive JSON payload ready for downstream workflows

## What you can build with the output
The structured output from this workflow is designed to be piped into other workflows.
Some ideas:
- **YouTube Shorts creation** — Use the key moments + timestamps to auto-clip and render short-form content
- **LinkedIn carousel posts** — Pull key takeaways and section summaries
- **Twitter/X threads** — Convert section breakdowns into threaded posts
- **Blog articles** — Use the full transcript + structure as a draft foundation
- **Email newsletters** — Summarise the video for your subscriber list
- **SEO-optimised descriptions** — Auto-generate YouTube descriptions with keywords

## Nodes used
- Form Trigger (n8n built-in)
- Apify (x2 — video download + transcript)
- Code (x2 — data consolidation + key action parsing)
- Google Gemini (video analysis)
- OpenAI (structured content analysis with JSON schema)
- Edit Fields (data mapping)
- Execute Workflow (optional — calls the downstream Shorts creation workflow)

Built by @adamfreelances — The Anti-Guru Technical Educator. Real workflows, real implementation, no fluff.
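As an illustration of the Data Consolidation step described above, the Code node that merges the two Apify outputs might look roughly like this. The item ordering and property names are assumptions; the actual node in the template may reference the Apify nodes by name and use different keys.

```javascript
// Hypothetical consolidation of the two Apify results into one structured object.
// Assumes the downloader output and the transcript output arrive as separate items.
const download = $input.all()[0].json;   // video file URL + metadata
const transcript = $input.all()[1].json; // transcript text + timestamped segments

return [{
  json: {
    videoUrl: download.videoUrl,
    metadata: {
      title: download.title,
      description: download.description,
      durationSeconds: download.duration,
      channel: download.channelName,
      publishedAt: download.publishDate,
    },
    transcriptText: transcript.text,
    segments: transcript.segments, // e.g. [{ start, end, text }, ...]
  },
}];
```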
by Onur
# 🏠 Extract Zillow Property Data to Google Sheets with Scrape.do

This template requires a self-hosted n8n instance to run.

A complete n8n automation that extracts property listing data from Zillow URLs using the Scrape.do web scraping API, parses key property information, and saves structured results into Google Sheets for real estate analysis, market research, and property tracking.

## 📋 Overview
This workflow provides a lightweight real estate data extraction solution that pulls property details from Zillow listings and organizes them into a structured spreadsheet. Ideal for real estate professionals, investors, market analysts, and property managers who need automated property data collection without manual effort.

### Who is this for?
- Real estate investors tracking properties
- Market analysts conducting property research
- Real estate agents monitoring listings
- Property managers organizing data
- Data analysts building real estate databases

### What problem does this workflow solve?
- Eliminates manual copy-paste from Zillow
- Processes multiple property URLs in bulk
- Extracts structured data (price, address, zestimate, etc.)
- Automates saving results into Google Sheets
- Ensures repeatable & consistent data collection

## ⚙️ What this workflow does
1. **Manual Trigger** → Starts the workflow manually
2. **Read Zillow URLs from Google Sheets** → Reads property URLs from a Google Sheet
3. **Scrape Zillow URL via Scrape.do** → Fetches the full HTML from Zillow (bypasses PerimeterX protection)
4. **Parse Zillow Data** → Extracts structured property information from the HTML
5. **Write Results to Google Sheets** → Saves the parsed data into a results sheet

## 📊 Output Data Points
| Field | Description | Example |
|-------|-------------|---------|
| URL | Original Zillow listing URL | https://www.zillow.com/homedetails/... |
| Price | Property listing price | $300,000 |
| Address | Street address | 8926 Silver City |
| City | City name | San Antonio |
| State | State abbreviation | TX |
| Days on Zillow | How long listed | 5 |
| Zestimate | Zillow's estimated value | $297,800 |
| Scraped At | Timestamp of extraction | 2025-01-29T12:00:00.000Z |

## ⚙️ Setup

### Prerequisites
- n8n instance (self-hosted)
- Google account with Sheets access
- Scrape.do account with API token (1000 free credits/month)

### Google Sheet Structure
This workflow uses one Google Sheet with two tabs.

Input tab: "Sheet1"
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| URLs | URL | Zillow listing URL | https://www.zillow.com/homedetails/123... |

Output tab: "Results"
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| URL | URL | Original listing URL | https://www.zillow.com/homedetails/... |
| Price | Text | Property price | $300,000 |
| Address | Text | Street address | 8926 Silver City |
| City | Text | City name | San Antonio |
| State | Text | State code | TX |
| Days on Zillow | Number | Days listed | 5 |
| Zestimate | Text | Estimated value | $297,800 |
| Scraped At | Timestamp | When scraped | 2025-01-29T12:00:00.000Z |

## 🛠 Step-by-Step Setup
1. **Import the workflow**: Copy the JSON → n8n → Workflows → + Add → Import from JSON
2. **Configure the Scrape.do API**:
   - Sign up at the Scrape.do Dashboard and get your API token
   - In the HTTP Request node, replace YOUR_SCRAPE_DO_TOKEN with your actual token
   - The workflow uses super=true for premium residential proxies (10 credits per request)
3. **Configure Google Sheets**:
   - Create a new Google Sheet with two tabs: "Sheet1" (input) and "Results" (output)
   - In Sheet1, add the header "URLs" in cell A1 and add Zillow URLs starting from A2
   - Set up Google Sheets OAuth2 credentials in n8n
   - Replace YOUR_SPREADSHEET_ID with your actual Google Sheet ID and YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with your credential ID
4. **Run & test**: Add 1-2 test Zillow URLs in Sheet1, click "Execute workflow", and check the results in the Results tab

## 🧰 How to Customize
- **Add more fields**: Extend the parsing logic in the "Parse Zillow Data" node to capture additional data (bedrooms, bathrooms, square footage)
- **Filtering**: Add conditions to skip certain properties or price ranges
- **Rate limiting**: Insert a Wait node between requests if processing many URLs
- **Error handling**: Add error branches to handle failed scrapes gracefully
- **Scheduling**: Replace the Manual Trigger with a Schedule Trigger for automated daily/weekly runs

## 📊 Use Cases
- **Investment Analysis**: Track property prices and zestimates over time
- **Market Research**: Analyze listing trends in specific neighborhoods
- **Portfolio Management**: Monitor properties for sale in target areas
- **Competitive Analysis**: Compare similar properties across locations
- **Lead Generation**: Build databases of properties matching specific criteria

## 📈 Performance & Limits
- **Single property**: ~5-10 seconds per URL
- **Batch of 10**: 1-2 minutes typical
- **Large sets (50+)**: 5-10 minutes depending on Scrape.do credits
- **API calls**: 1 Scrape.do request per URL (10 credits with super=true)
- **Reliability**: 95%+ success rate with premium proxies

## 🧩 Troubleshooting
| Problem | Solution |
|---------|----------|
| API error 400 | Check your Scrape.do token and credits |
| URL showing "undefined" | Verify the Google Sheet column name is "URLs" (capital U) |
| No data parsed | Check whether Zillow changed their HTML structure |
| Permission denied | Re-authenticate Google Sheets OAuth2 in n8n |
| 50000 character error | Verify the Parse Zillow Data code is extracting fields, not returning raw HTML |
| Price shows HTML/CSS | Update the price extraction regex in the Parse Zillow Data node |

## 🤝 Support & Community
- Scrape.do Documentation
- Scrape.do Dashboard
- Scrape.do Zillow Scraping Guide
- n8n Forum
- n8n Docs

## 🎯 Final Notes
This workflow provides a repeatable foundation for extracting Zillow property data with Scrape.do and saving it to Google Sheets. You can extend it with:
- Historical tracking (append timestamps)
- Price change alerts (compare with previous scrapes)
- Multi-platform scraping (Redfin, Realtor.com)
- Integration with a CRM or reporting dashboards

Important: Scrape.do handles all anti-bot bypassing (PerimeterX, CAPTCHAs) automatically with rotating residential proxies, so you only pay for successful requests. Always use the super=true parameter for Zillow to ensure high success rates.
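For orientation, the "Parse Zillow Data" step is essentially regex/string extraction over the returned HTML. The sketch below shows the general idea with simplified, hypothetical patterns; Zillow's real markup changes frequently, so the template's actual expressions will differ and need maintenance.

```javascript
// Hypothetical extraction sketch; the real regexes in the template differ and
// must be updated whenever Zillow changes its page structure.
const { html = '', url } = $input.first().json;

const firstMatch = (re) => {
  const m = html.match(re);
  return m ? m[1].trim() : '';
};

return [{
  json: {
    url,
    price: firstMatch(/"price"\s*:\s*"?(\$?[\d,]+)/),
    address: firstMatch(/"streetAddress"\s*:\s*"([^"]+)"/),
    city: firstMatch(/"addressLocality"\s*:\s*"([^"]+)"/),
    state: firstMatch(/"addressRegion"\s*:\s*"([^"]+)"/),
    zestimate: firstMatch(/"zestimate"\s*:\s*"?(\$?[\d,]+)/),
    scrapedAt: new Date().toISOString(),
  },
}];
```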
by Abdul Mir
## Overview
Stop spending hours formatting proposals. This workflow turns a short post-call form into a high-converting, fully-personalized PandaDoc proposal, plus updates your CRM and drafts the follow-up email for you.

After a sales call, just fill out a 3-minute form summarizing key pain points, solutions pitched, and the price. The workflow uses AI to generate polished proposal copy, then builds a PandaDoc draft using dynamic data mapped into the JSON body (which you can fully customize per business). It also updates the lead record in ClickUp with the proposal link, company name, and quote, then creates an email draft in Gmail, ready to send.

## Who's it for
- Freelancers and consultants sending service proposals
- Agencies closing deals over sales calls
- Sales reps who want to automate proposal follow-up
- Teams using ClickUp as their lightweight CRM

## How it works
1. After a call, fill out a short form with client details, pitch notes, and price
2. AI generates professional proposal copy based on the form input
3. The proposal is formatted and sent to PandaDoc via an HTTP request
4. The ClickUp lead is updated with: Company Name, Proposal URL, Quote/price
5. A Gmail draft is created using the proposal link and a thank-you message

## Example use case
> You hop off a call, fill out:
> - Prospect: Shopify agency
> - Pain: No lead gen system
> - Solution: Automated cold outreach
> - Price: $2,500/month
>
> 3 minutes later: the PandaDoc proposal is ready, the CRM is updated, and your email draft is waiting to be sent.

## How to set up
1. Replace the form with your preferred tool (e.g. Tally, Typeform)
2. Connect the PandaDoc API and structure your proposal template
3. Customize the JSON body inside the HTTP request to match your business
4. Link your ClickUp space and custom fields
5. Connect Gmail (or another email tool) for the final follow-up draft

## Requirements
- Form tool for capturing sales call notes
- OpenAI or another LLM key for generating proposal copy
- PandaDoc API access
- ClickUp custom fields set up for lead tracking
- Gmail integration

## How to customize
- Customize your PandaDoc proposal fields in the JSON body of the HTTP node
- Replace ClickUp with another CRM like HubSpot or Notion
- Adjust the AI tone (casual, premium, corporate) for proposal writing
- Add Slack or Telegram alerts when the draft is ready
- Add a PDF generation or auto-send email step
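If you are adapting the PandaDoc HTTP request yourself, its JSON body typically follows PandaDoc's create-document-from-template shape, roughly as sketched below. The template_uuid, token names, recipient role, and the {{ }} placeholders are all illustrative and should be replaced with values from your own PandaDoc template and the preceding n8n nodes; check PandaDoc's API documentation for the exact fields your template requires.

```json
{
  "name": "Proposal – {{company_name}}",
  "template_uuid": "YOUR_PANDADOC_TEMPLATE_UUID",
  "recipients": [
    { "email": "{{client_email}}", "first_name": "{{client_first_name}}", "role": "Client" }
  ],
  "tokens": [
    { "name": "pain_points", "value": "{{ai_generated_pain_summary}}" },
    { "name": "proposed_solution", "value": "{{ai_generated_solution_copy}}" },
    { "name": "price", "value": "$2,500/month" }
  ]
}
```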