by Onur
🏠 Extract Zillow Property Data to Google Sheets with Scrape.do This template requires a self-hosted n8n instance to run. A complete n8n automation that extracts property listing data from Zillow URLs using Scrape.do web scraping API, parses key property information, and saves structured results into Google Sheets for real estate analysis, market research, and property tracking. 📋 Overview This workflow provides a lightweight real estate data extraction solution that pulls property details from Zillow listings and organizes them into a structured spreadsheet. Ideal for real estate professionals, investors, market analysts, and property managers who need automated property data collection without manual effort. Who is this for? Real estate investors tracking properties Market analysts conducting property research Real estate agents monitoring listings Property managers organizing data Data analysts building real estate databases What problem does this workflow solve? Eliminates manual copy-paste from Zillow Processes multiple property URLs in bulk Extracts structured data (price, address, zestimate, etc.) Automates saving results into Google Sheets Ensures repeatable & consistent data collection ⚙️ What this workflow does Manual Trigger → Starts the workflow manually Read Zillow URLs from Google Sheets → Reads property URLs from a Google Sheet Scrape Zillow URL via Scrape.do → Fetches full HTML from Zillow (bypasses PerimeterX protection) Parse Zillow Data → Extracts structured property information from HTML Write Results to Google Sheets → Saves parsed data into a results sheet 📊 Output Data Points | Field | Description | Example | |-------|-------------|---------| | URL | Original Zillow listing URL | https://www.zillow.com/homedetails/... | | Price | Property listing price | $300,000 | | Address | Street address | 8926 Silver City | | City | City name | San Antonio | | State | State abbreviation | TX | | Days on Zillow | How long listed | 5 | | Zestimate | Zillow's estimated value | $297,800 | | Scraped At | Timestamp of extraction | 2025-01-29T12:00:00.000Z | ⚙️ Setup Prerequisites n8n instance (self-hosted) Google account with Sheets access Scrape.do account with API token (Get 1000 free credits/month) Google Sheet Structure This workflow uses one Google Sheet with two tabs: Input Tab: "Sheet1" | Column | Type | Description | Example | |--------|------|-------------|---------| | URLs | URL | Zillow listing URL | https://www.zillow.com/homedetails/123... | Output Tab: "Results" | Column | Type | Description | Example | |--------|------|-------------|---------| | URL | URL | Original listing URL | https://www.zillow.com/homedetails/... 
| | Price | Text | Property price | $300,000 | | Address | Text | Street address | 8926 Silver City | | City | Text | City name | San Antonio | | State | Text | State code | TX | | Days on Zillow | Number | Days listed | 5 | | Zestimate | Text | Estimated value | $297,800 | | Scraped At | Timestamp | When scraped | 2025-01-29T12:00:00.000Z | 🛠 Step-by-Step Setup Import Workflow: Copy the JSON → n8n → Workflows → + Add → Import from JSON Configure Scrape.do API: Sign up at Scrape.do Dashboard Get your API token In HTTP Request node, replace YOUR_SCRAPE_DO_TOKEN with your actual token The workflow uses super=true for premium residential proxies (10 credits per request) Configure Google Sheets: Create a new Google Sheet Add two tabs: "Sheet1" (input) and "Results" (output) In Sheet1, add header "URLs" in cell A1 Add Zillow URLs starting from A2 Set up Google Sheets OAuth2 credentials in n8n Replace YOUR_SPREADSHEET_ID with your actual Google Sheet ID Replace YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with your credential ID Run & Test: Add 1-2 test Zillow URLs in Sheet1 Click "Execute workflow" Check results in Results tab 🧰 How to Customize Add more fields**: Extend parsing logic in "Parse Zillow Data" node to capture additional data (bedrooms, bathrooms, square footage) Filtering**: Add conditions to skip certain properties or price ranges Rate Limiting**: Insert a Wait node between requests if processing many URLs Error Handling**: Add error branches to handle failed scrapes gracefully Scheduling**: Replace Manual Trigger with Schedule Trigger for automated daily/weekly runs 📊 Use Cases Investment Analysis**: Track property prices and zestimates over time Market Research**: Analyze listing trends in specific neighborhoods Portfolio Management**: Monitor properties for sale in target areas Competitive Analysis**: Compare similar properties across locations Lead Generation**: Build databases of properties matching specific criteria 📈 Performance & Limits Single Property**: ~5-10 seconds per URL Batch of 10**: 1-2 minutes typical Large Sets (50+)**: 5-10 minutes depending on Scrape.do credits API Calls**: 1 Scrape.do request per URL (10 credits with super=true) Reliability**: 95%+ success rate with premium proxies 🧩 Troubleshooting | Problem | Solution | |---------|----------| | API error 400 | Check your Scrape.do token and credits | | URL showing "undefined" | Verify Google Sheet column name is "URLs" (capital U) | | No data parsed | Check if Zillow changed their HTML structure | | Permission denied | Re-authenticate Google Sheets OAuth2 in n8n | | 50000 character error | Verify Parse Zillow Data code is extracting fields, not returning raw HTML | | Price shows HTML/CSS | Update price extraction regex in Parse Zillow Data node | 🤝 Support & Community Scrape.do Documentation Scrape.do Dashboard Scrape.do Zillow Scraping Guide n8n Forum n8n Docs 🎯 Final Notes This workflow provides a repeatable foundation for extracting Zillow property data with Scrape.do and saving to Google Sheets. You can extend it with: Historical tracking (append timestamps) Price change alerts (compare with previous scrapes) Multi-platform scraping (Redfin, Realtor.com) Integration with CRM or reporting dashboards Important: Scrape.do handles all anti-bot bypassing (PerimeterX, CAPTCHAs) automatically with rotating residential proxies, so you only pay for successful requests. Always use super=true parameter for Zillow to ensure high success rates.
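For reference, the kind of logic the "Parse Zillow Data" Code node needs might look like the sketch below. The regexes and field names are illustrative assumptions (Zillow's markup changes often, which is why the troubleshooting table mentions updating the price regex), so treat this as a starting point rather than the template's actual code.

```javascript
// Hypothetical sketch of a "Parse Zillow Data" style extractor (not the template's actual code).
function parseZillowHtml(html, url) {
  const pick = (re) => {
    const m = html.match(re);
    return m ? m[1].trim() : null;
  };

  // Zillow exposes much of this in embedded JSON and meta tags; fall back to visible text.
  const price = pick(/"price"\s*:\s*"?\$?([\d,]+)/i);
  const address = pick(/<meta[^>]+property="og:title"[^>]+content="([^"]+)"/i);
  const zestimate = pick(/zestimate[^0-9$]*\$?([\d,]+)/i);
  const daysOnZillow = pick(/(\d+)\s+days?\s+on\s+Zillow/i);

  // Return small structured fields only, never the raw HTML --
  // pushing full HTML into Sheets is what causes the "50000 character" error noted above.
  return {
    url,
    price: price ? `$${price}` : null,
    address,
    zestimate: zestimate ? `$${zestimate}` : null,
    daysOnZillow: daysOnZillow ? Number(daysOnZillow) : null,
    scrapedAt: new Date().toISOString(),
  };
}
```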
by Alejandro Scuncia
An extendable RAG template to build powerful, explainable AI assistants — with query understanding, semantic metadata, and support for free-tier tools like Gemini, Gemma and Supabase. Description This workflow helps you build smart, production-ready RAG agents that go far beyond basic document Q&A. It includes: ✅ File ingestion and chunking ✅ Asynchronous LLM-powered enrichment ✅ Filterable metadata-based search ✅ Gemma-based query understanding and generation ✅ Cohere re-ranking ✅ Memory persistence via Postgres Everything is modular, low-cost, and designed to run even with free-tier LLMs and vector databases. Whether you want to build a chatbot, internal knowledge assistant, documentation search engine, or a filtered content explorer — this is your foundation. ⚙️ How It Works This workflow is divided into 3 pipelines: 📥 Ingestion Upload a PDF via form Extract text and chunk it for embedding Store in Supabase vector store using Google Gemini embeddings 🧠 Enrichment (Async) Scheduled task fetches new chunks Each chunk is enriched with LLM metadata (topics, use_case, risks, audience level, summary, etc.) Metadata is added to the vector DB for improved retrieval and filtering 🤖 Agent Chat A user question triggers the RAG agent Query Builder transforms it into keywords and filters Vector DB is queried and reranked The final answer is generated using only retrieved evidence, with references Chat memory is managed via Postgres 🌟 Key Features **Asynchronous enrichment** → Save tokens, batch process with free-tier LLMs like Gemma **Metadata-aware** → Improved filtering and reranking **Explainable answers** → Agent cites sources and sections **Chat memory** → Persistent context with Postgres **Modular design** → Swap LLMs, rerankers, vector DBs, and even enrichment schema **Free to run** → Built with Gemini, Gemma, Cohere, Supabase (free-tier compatible) 🔐 Required Credentials |Tool|Use| |-|-| |Supabase w/ PostgreSQL|Vector DB + storage| |Google Gemini/Gemma|Embeddings & LLM| |Cohere API|Re-ranking| |PostgreSQL|Chat memory| 🧰 Customization Tips Swap extractFromFile with Notion/Google Drive integrations Extend Metadata Obtention prompt to fit your domain (e.g., financial, legal) Replace LLMs with OpenAI, Mistral, or Ollama Replace Postgres Chat Memory with Simple Memory or any other Use a webhook instead of a form to automate ingestion Connect to Telegram/Slack UI with a few extra nodes 💡 Use Cases Company knowledge base bot (internal docs, SOPs) Educational assistant with smart filtering (by topic or level) Legal or policy assistant that cites source sections Product documentation Q&A with multi-language support Training material assistant that highlights risks/examples Content Generation 🧠 Who It’s For Indie developers building smart chatbots AI consultants prototyping Q&A assistants Teams looking for an internal knowledge agent Anyone building affordable, explainable AI tools 🚀 Try It Out! Deploy a modular RAG assistant using n8n, Supabase, and Gemini — fully customizable and almost free to run. 1. 📁 Prepare Your PDFs Use any internal documents, manuals, or reports in *PDF* format. Optional: Add Google Drive integration to automate ingestion. 2. 🧩 Set Up Supabase Create a free Supabase project Use the table creation queries included in the workflow to set up your schema. Add your *supabaseUrl* and *supabaseKey* in your n8n credentials. > 💡 Pro Tip: Make sure you match the embedding dimensions to your model.
This workflow uses Gemini text-embedding-004 (768-dim) — if switching to OpenAI, change your table vector size to 1536. 3. 🧠 Connect Gemini & Gemma Use Gemini/Gemma for embeddings and optional metadata enrichment. Or deploy locally for lightweight async LLM processing (via Ollama/HuggingFace). 4. ⚙️ Import the Workflow in n8n Open n8n (self-hosted or cloud). Import the workflow file and paste your credentials. You’re ready to ingest, enrich, and query your document base. 💬 Have Feedback or Ideas? I’d Love to Hear This project is open, modular, and evolving — just like great workflows should be :). If you’ve tried it, built on top of it, or have suggestions for improvement, I’d genuinely love to hear from you. Let’s share ideas, collaborate, or just connect as part of the n8n builder community. 📧 ascuncia.es@gmail.com 🔗 LinkedIn
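As a rough illustration of the asynchronous enrichment step, a Code node could merge the LLM's metadata back onto each chunk before the vector store row is updated. The field names below (topics, use_case, risks, audience_level, summary) follow the description above, but the exact schema is an assumption; adjust it to your own enrichment prompt.

```javascript
// Illustrative sketch: attach LLM-generated metadata to a chunk record before updating
// the vector store. Field and column names are assumptions, not the workflow's exact schema.
function enrichChunk(chunk, llmMetadata) {
  return {
    id: chunk.id,
    content: chunk.content, // chunk text stays unchanged
    metadata: {
      ...chunk.metadata, // keep ingestion metadata (file name, page, etc.)
      topics: llmMetadata.topics ?? [],
      use_case: llmMetadata.use_case ?? null,
      risks: llmMetadata.risks ?? [],
      audience_level: llmMetadata.audience_level ?? "general",
      summary: llmMetadata.summary ?? "",
      enriched_at: new Date().toISOString(),
    },
  };
}
```

The agent's Query Builder can then filter retrieval on any of these metadata keys instead of relying on vector similarity alone.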
by Wolf Bishop
A reliable, no-frills web scraper that extracts content directly from websites using their sitemaps. Perfect for content audits, migrations, and research when you need straightforward HTML extraction without external dependencies. How It Works This streamlined workflow takes a practical approach to web scraping by leveraging XML sitemaps and direct HTTP requests. Here's how it delivers consistent results: Direct Sitemap Processing: The workflow starts by fetching your target website's XML sitemap and parsing it to extract all available page URLs. This eliminates guesswork and ensures comprehensive coverage of the site's content structure. Robust HTTP Scraping: Each page is scraped using direct HTTP requests with realistic browser headers that mimic legitimate web traffic. The scraper includes comprehensive error handling and timeout protection to handle various website configurations gracefully. Intelligent Content Extraction: The workflow uses sophisticated JavaScript parsing to extract meaningful content from raw HTML. It automatically identifies page titles through multiple methods (title tags, Open Graph metadata, H1 headers) and converts HTML structure into readable text format. Framework Detection: Built-in detection identifies whether sites use WordPress, Divi themes, or heavy JavaScript frameworks. This helps explain content extraction quality and provides valuable insights about the site's technical architecture. Rich Metadata Collection: Each scraped page includes detailed metadata like word count, HTML size, response codes, and technical indicators. This data is formatted into comprehensive markdown files with YAML frontmatter for easy analysis and organization. Respectful Rate Limiting: The workflow includes a 3-second delay between page requests to respect server resources and avoid overwhelming target websites. The processing is sequential and controlled to maintain ethical scraping practices. Detailed Success Reporting: Every scraped page generates a report showing extraction success, potential issues (like JavaScript dependencies), and technical details about the site's structure and framework. Setup Steps Configure Google Drive Integration Connect your Google Drive account in the "Save to Google Drive" node Replace YOUR_GOOGLE_DRIVE_CREDENTIAL_ID with your actual Google Drive credential ID Create a dedicated folder for your scraped content in Google Drive Copy the folder ID from the Google Drive URL (the long string after /folders/) Replace YOUR_GOOGLE_DRIVE_FOLDER_ID_HERE with your actual folder ID in both the folderId field and cachedResultUrl Update YOUR_FOLDER_NAME_HERE with your folder's actual name Set Your Target Website In the "Set Sitemap URL" node, replace https://yourwebsitehere.com/page-sitemap.xml with your target website's sitemap URL Common sitemap locations include /sitemap.xml, /page-sitemap.xml, or /sitemap_index.xml Tip: Not sure where your sitemap is? 
Use a free online tool like https://seomator.com/sitemap-finder Verify the sitemap URL loads correctly in your browser before running the workflow Update Workflow IDs (Automatic) When you import this workflow, n8n will automatically generate new IDs for YOUR_WORKFLOW_ID_HERE, YOUR_VERSION_ID_HERE, YOUR_INSTANCE_ID_HERE, and YOUR_WEBHOOK_ID_HERE No manual changes needed for these placeholders Adjust Processing Limits (Optional) The "Limit URLs (Optional)" node is currently disabled for full site scraping Enable this node and set a smaller number (like 5-10) for initial testing For large websites, consider running in batches to manage processing time and storage Customize Rate Limiting (Optional) The "Wait Between Pages" node is set to 3 seconds by default Increase the delay for more respectful scraping of busy sites Decrease only if you have permission and the target site can handle faster requests Test Your Configuration Enable the "Limit URLs (Optional)" node and set it to 3-5 pages for testing Click "Test workflow" to verify the setup works correctly Check your Google Drive folder to confirm files are being created with proper content Review the generated markdown files to assess content extraction quality Run Full Extraction Disable the "Limit URLs (Optional)" node for complete site scraping Execute the workflow and monitor the execution log for any errors Large websites may take considerable time to process completely (plan for several hours for sites with hundreds of pages) Review Results Each generated file includes technical metadata to help you assess extraction quality Look for indicators like "Limited Content" warnings for JavaScript-heavy pages Files include word counts and framework detection to help you understand the site's structure Framework Compatibility: This scraper is specifically designed to work well with WordPress sites, Divi themes, and many JavaScript-heavy frameworks. The intelligent content extraction handles dynamic content effectively and provides detailed feedback about framework detection. While some single-page applications (SPAs) that render entirely through JavaScript may have limited content extraction, most modern websites including those built with popular CMS platforms will work excellently with this scraper. Important Notes: Always ensure you have permission to scrape your target website and respect their robots.txt guidelines. The workflow includes respectful delays and error handling, but monitor your usage to maintain ethical scraping practices.
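To give a feel for the "Intelligent Content Extraction" step, a simplified version of that logic is sketched below: pick a title from several fallbacks, strip scripts and styles, and report a word count plus basic framework hints. It is a hedged approximation of what the workflow's Code node does, not a copy of it.

```javascript
// Simplified sketch of the content-extraction logic (illustrative, not the workflow's exact code).
function extractContent(html) {
  const first = (re) => {
    const m = html.match(re);
    return m ? m[1].trim() : null;
  };

  const title =
    first(/<title[^>]*>([^<]+)<\/title>/i) ||
    first(/<meta[^>]+property="og:title"[^>]+content="([^"]+)"/i) ||
    first(/<h1[^>]*>([\s\S]*?)<\/h1>/i) ||
    "Untitled";

  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();

  return {
    title,
    text,
    wordCount: text ? text.split(" ").length : 0,
    htmlSize: html.length,
    isWordPress: /wp-content|wp-includes/i.test(html),
    isDivi: /et_pb_|Divi/i.test(html),
  };
}
```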
by Maxim Osipovs
This n8n workflow template implements an intelligent research paper monitoring system that automatically tracks new publications in ArXiv's Artificial Intelligence category, filters them for relevance using LLM-based analysis, generates structured summaries, and delivers a formatted email digest. The system uses a three-stage pipeline architecture: automated paper retrieval from ArXiv's API AI-powered relevance filtering and analysis via Google Gemini Intelligent summarization with HTML formatting for clean email delivery This eliminates the need to manually browse ArXiv daily while ensuring you only receive summaries of papers genuinely relevant to your research interests. What This Template Does (Step-by-Step) Runs on a configurable schedule (Tuesday-Friday) to fetch new papers from ArXiv's cs.AI category via their API. Uses Google Gemini with structured output parsing to analyze each paper's relevance based on your defined criteria for "applicable AI research." Generates concise, structured summaries for the three selected papers using a separate LLM call Aggregates the three relevant papers' data and summaries into a single, readable digest Important Notes The workflow only runs Tuesday through Friday, as ArXiv typically doesn't publish new papers on weekends Customize the "Paper Relevance Analyzer" criteria to match your specific research interests Adjust the similarity threshold and selection logic to control how many papers are included in each digest Required Integrations: ArXiv API (no authentication required) Google Gemini API (for relevance analysis and summarization) Email service (SMTP or email provider like Gmail, SendGrid, etc.) Best For: 🎓 Academic researchers tracking AI developments in their field 💼 ML practitioners and data scientists staying current with new techniques 🧠 AI enthusiasts who want curated, digestible research updates 🏢 Technical teams needing regular competitive intelligence on emerging approaches Key Benefits: ✅ Automates daily ArXiv monitoring, saving 60+ minutes of manual research time ✅ Uses AI to pre-filter papers, reducing information overload by 80-90% ✅ Delivers structured, readable summaries instead of raw abstracts ✅ Fully customizable relevance criteria to match your specific interests ✅ Professional HTML formatting makes digests easy to scan and share ✅ Eliminates the risk of missing important papers in your field
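For orientation, fetching candidate papers typically means one call to ArXiv's public export API; a minimal sketch of that request (and the weekday gate described above) is shown below. The max_results value is an arbitrary example.

```javascript
// Minimal sketch: build the ArXiv export API query for recent cs.AI papers
// and apply the Tuesday-Friday gate described above.
const params = new URLSearchParams({
  search_query: "cat:cs.AI",
  sortBy: "submittedDate",
  sortOrder: "descending",
  max_results: "50", // example value; tune to how many papers you want to screen
});
const url = `http://export.arxiv.org/api/query?${params}`;

const day = new Date().getUTCDay(); // 0 = Sunday ... 6 = Saturday
const shouldRun = day >= 2 && day <= 5; // Tuesday through Friday

console.log(shouldRun ? `Fetching ${url}` : "Skipping: ArXiv rarely publishes on weekends");
```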
by IranServer.com
Automate IP geolocation and HTTP port scanning with Google Sheets trigger This n8n template automatically enriches IP addresses with geolocation data and performs HTTP port scanning when new IPs are added to a Google Sheets document. Perfect for network monitoring, security research, or maintaining an IP intelligence database. Who's it for Network administrators, security researchers, and IT professionals who need to: Track IP geolocation information automatically Monitor HTTP service availability across multiple ports Maintain centralized IP intelligence in spreadsheets Automate repetitive network reconnaissance tasks How it works The workflow triggers whenever a new row containing an IP address is added to your Google Sheet. It then: Fetches geolocation data using the ip-api.com service to get country, city, coordinates, ISP, and organization information Updates the spreadsheet with the geolocation details Scans common HTTP ports (80, 443, 8080, 8000, 3000) to check service availability Records port status back to the same spreadsheet row, showing which services are accessible The workflow handles both successful connections and various error conditions, providing a comprehensive view of each IP's network profile. Requirements Google Sheets API access** - for reading triggers and updating data Google Sheets document** with at least an "IP" column header How to set up Create a Google Sheet with columns: IP, Country, City, Lat, Lon, ISP, Org, Port_80, Port_443, Port_8000, Port_8080, Port_3000 Configure Google Sheets credentials in both the trigger and update nodes Update the document ID in the Google Sheets Trigger and both Update nodes to point to your spreadsheet Test the workflow by adding an IP address to your sheet and verifying the automation runs How to customize the workflow Modify port list**: Edit the "Edit Fields" node to scan different ports by changing the ports array Add more geolocation fields**: The ip-api.com response includes additional fields like timezone, zip code, and AS number Change trigger frequency**: Adjust the polling interval in the Google Sheets Trigger for faster or slower monitoring Add notifications**: Insert Slack, email, or webhook nodes to alert when specific conditions are detected Filter results**: Add IF nodes to process only certain IP ranges or geolocation criteria
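The sketch below shows, in plain Node.js rather than the workflow's own nodes, roughly what the two lookups amount to: one call to ip-api.com and a timed HTTP probe per port. The timeout value and the "open/closed" labels are illustrative, and TLS certificate mismatches on port 443 will show up as unreachable in this naive version.

```javascript
// Standalone Node.js sketch (Node 18+) of the geolocation lookup and HTTP port probe.
// Timeouts, labels, and error handling are illustrative, not the workflow's exact logic.
async function probeIp(ip, ports = [80, 443, 8080, 8000, 3000]) {
  const geo = await (await fetch(`http://ip-api.com/json/${ip}`)).json();

  const portStatus = {};
  for (const port of ports) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 3000); // 3 s per port
    try {
      const scheme = port === 443 ? "https" : "http";
      const res = await fetch(`${scheme}://${ip}:${port}/`, { signal: controller.signal });
      portStatus[`Port_${port}`] = `open (HTTP ${res.status})`;
    } catch {
      // Covers timeouts, refused connections, non-HTTP services and TLS certificate mismatches.
      portStatus[`Port_${port}`] = "closed/unreachable";
    } finally {
      clearTimeout(timer);
    }
  }

  return {
    IP: ip, Country: geo.country, City: geo.city, Lat: geo.lat, Lon: geo.lon,
    ISP: geo.isp, Org: geo.org, ...portStatus,
  };
}

probeIp("8.8.8.8").then(console.log);
```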
by WeblineIndia
Send daily applicant digest by role from Gmail to hiring managers with Google Gemini This workflow automatically collects all new job application emails from your Gmail labeled as applicants in the last 24 hours. Every day at 6:00 PM (Asia/Kolkata), it extracts structured details (name, email, phone, role, experience, skills, location, notice, summary) from each applicant (using Gemini AI or OpenAI). It then groups applicants by role and manager, compiles a neat HTML table digest for each manager and emails them a single summary — so hiring managers get everything they need, at a glance, in one place. Who’s It For Recruiters and hiring managers tired of digging through multiple application threads. Small HR teams / agencies not yet on a full applicant tracking system. Anyone wanting a consolidated, role-targeted applicant update each day. Teams that want to automate candidate triage using Google Workspace and AI. How It Works Schedule Trigger (6PM IST): Runs automatically at 18:00 India time. Fetch Applicant Emails: Reads Gmail for emails labeled 'applicants' from the past 24 hours. Prepare Email Text: Converts email content to plain text for reliable AI extraction. Extract Applicant Details: Gemini/OpenAI extracts applicant’s info in structured JSON. Assign Manager Emails: Routes each applicant to the correct manager via role→email mapping or fallback. Group & Build HTML Tables: Organizes applicants by manager and role, builds summary tables. Send Digest to Managers: Sends each manager one HTML summary email for their new applicants. How to Set Up Create/verify Gmail label applicants and set up filters to route job emails there. Import the workflow: Use your Google/Gmail and Gemini/OpenAI accounts as credentials. Configure connections: Gmail with OAuth2 (IMAP not required, uses Gmail API) Gemini or OpenAI API key for extraction Set role→manager mapping in the “Assign Manager Emails” node (just edit the map!). Adjust time / defaults: Edit schedule and fallback email if you wish. Test it: Send yourself a test application, label it, check workflow logs. Requirements Gmail account (with OAuth2 enabled and 'applicants' label set up) Gemini or OpenAI API key for structured AI extraction n8n instance (self-hosted or cloud) SMTP credentials (if using direct email instead of Gmail node) At least one valid hiring manager email mapped to a role How to Customize the Workflow Centralize config with a Set node (label name, fallback/manager email, model name, schedule). Add attachment-to-text conversion for applications with resume attachments. Normalize role names in the mapping code for more robust routing. Enable additional delivery: Slack, Teams, Google Sheets log, extra Cron for mid-day urgents. Refine AI extraction prompt for specific fields (add portfolio URL, etc.). Change schedule for daily, weekly or per-role timing. Add‑Ons / Extensions Resume Text Extraction:** Add PDF/DOCX to text parsing for attachment-only applications. ChatOps:** Send the summary to Slack or Teams channels along with/instead of email. Applicant Logging:** Auto-log every applicant/action into Google Sheets, Notion or Airtable. Multi-timezone:** Duplicate/modify the Cron trigger for different manager regions or urgency levels. Use Case Examples Tech Hiring:** Java, Python, Frontend candidates are automatically routed to their respective leads. Small Agency:** All applications summarized for reviewers, with per-role breakdowns. HR Operations:** Daily rollups sent before hiring sync, facilitating fast decision-making. 
Common Troubleshooting | Issue | Possible Cause | Solution | |-----------------------------------------|----------------------------------------------------------|-------------------------------------------------------------| | No emails processed | No 'applicants' label or wrong time window | Check Gmail filters and adjust search query in fetch node | | All digests go to fallback manager | Incorrect or missing role → manager mapping | Normalize role text in assignment node, expand map | | AI Extraction returns bad/missing JSON | Wrong prompt, high temperature or missing field names | Tighten prompt, lower temperature, check example response | | Duplicate/Old Emails appear | Date filter not correct | Use 'newer_than:1d' and keep 'mark as read' in email node | | SMTP/Gmail Send errors | Auth problem, quota or app password missing | Use OAuth2, check daily send caps and app password settings | | Blank or partially filled summary table | AI unable to parse poorly formatted/empty email | Improve sender email consistency, add fallback handling | | Attachments not processed | No attachment extraction node | Add attachment-to-text parsing before AI node | Need Help? If you get stuck, need help customizing a mapping or adding nodes or want to integrate extra steps (e.g., resume text, Slack), just ask! We're happy to guide you step by step, review your workflow, or help you troubleshoot any errors. Contact WeblineIndia — Your n8n Automation partner!
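To make the routing step concrete, here is a small sketch of how the role-to-manager mapping and the per-manager HTML table could be built. The addresses and role names are placeholders; swap in your own map exactly as the setup step above describes.

```javascript
// Illustrative sketch of the "Assign Manager Emails" and table-building steps.
// The role map and fallback address are placeholders -- edit them for your team.
const ROLE_TO_MANAGER = {
  "java developer": "java.lead@example.com",
  "python developer": "python.lead@example.com",
  "frontend developer": "frontend.lead@example.com",
};
const FALLBACK_MANAGER = "hr@example.com";

function groupByManager(applicants) {
  const digests = {};
  for (const a of applicants) {
    const role = (a.role || "").trim().toLowerCase();
    const manager = ROLE_TO_MANAGER[role] || FALLBACK_MANAGER;
    (digests[manager] ||= []).push(a);
  }
  return digests; // { "java.lead@example.com": [applicant, ...], ... }
}

function buildHtmlTable(applicants) {
  const rows = applicants
    .map((a) => `<tr><td>${a.name}</td><td>${a.role}</td><td>${a.experience}</td><td>${a.location}</td></tr>`)
    .join("");
  return `<table border="1" cellpadding="6">
    <tr><th>Name</th><th>Role</th><th>Experience</th><th>Location</th></tr>${rows}
  </table>`;
}
```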
by Davide
This workflow automates analyzing Gmail threads and drafting AI-powered replies with the new Anthropic Sonnet 4.5 model, examining the entire email thread to generate context-aware draft replies. Key Advantages ✅ Time-Saving – Automates repetitive email replies, reducing manual workload. ✅ Context-Aware Responses – Replies are generated using the entire email thread, not just the latest message. ✅ Smart Filtering – The classifier prevents unnecessary drafts for spam or promotional emails. ✅ Human-in-the-Loop – Drafts are created instead of being sent immediately, allowing manual review and corrections. ✅ Scalable & Flexible – Can be adapted to different accounts, reply styles, or workflows. ✅ Seamless Gmail Integration – Directly interacts with Gmail threads and drafts via OAuth. How it Works This workflow automates the process of analyzing incoming emails and generating context-aware draft replies by examining the entire email thread. Trigger & Initial Filtering: The workflow is automatically triggered every minute by the Gmail Trigger node, which detects new emails. For each new email, it immediately performs a crucial first step: it uses an AI Email Classifier to analyze the email snippet. The AI determines if the email is a legitimate message that warrants a reply (categorized as "ok") or if it's spam, a newsletter, or an advertisement. This prevents the system from generating replies for unwanted emails. Context Aggregation: If an email is classified as "ok," the workflow fetches the entire conversation thread from Gmail using the threadId. A Code Node then processes all the messages in the thread, structuring them into a consistent format that the AI can easily understand. AI-Powered Draft Generation: The structured conversation history is passed to the Replying email Agent with Sonnet 4.5. This agent, powered by a language model, analyzes the entire thread to understand the context and the latest inquiry. It then drafts a relevant and coherent HTML email reply. The system prompt instructs the AI not to invent information and to use placeholders for any missing details. Draft Creation: The final step takes the AI-generated reply and the original email's metadata (subject, recipient, threadId) and uses them to create a new draft email in Gmail. This draft is automatically placed in the correct email thread, ready for the user to review and send. Set up Steps To implement this automated email reply system, you need to configure the following: Configure Gmail, OpenAI & Anthropic Credentials: Ensure the following credentials are set up in your n8n instance: Gmail OAuth2 Credentials: The workflow uses the same Gmail account for the trigger, fetching threads, and creating drafts. Configure this in the "Gmail Trigger," "Get a thread," and "Create a draft" nodes. OpenAI API Credentials: Required for the "Email Classifier". Provide your API key in the respective OpenAI Chat Model nodes. Anthropic API Credentials: Required for the main "Replying email Agent." Provide your API key in the respective Anthropic Chat Model nodes. Review AI Classification & Prompting: Email Filtering: Check the categories in the Email Classifier node. The current setup marks only non-advertising, non-newsletter emails as "ok." You can modify these categories to fit your specific needs and reduce false positives. Reply Agent Instructions: Review the system message in the Replying email Agent.
You can customize the AI's persona, tone, and instructions (e.g., making it more formal, or instructing it to sign with a specific name) to better align with your communication style. Need help customizing? Contact me for consulting and support or add me on Linkedin.
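As a rough illustration of the Context Aggregation step, the Code node essentially flattens the Gmail thread into a uniform list the agent can read. The field paths below assume the standard Gmail API thread shape (messages, payload.headers, snippet); inspect the actual "Get a thread" output in your execution and adjust as needed.

```javascript
// Illustrative sketch of flattening a Gmail thread for the reply agent.
// Field paths assume the standard Gmail API shape; adjust to your node's actual output.
function structureThread(thread) {
  return (thread.messages || []).map((msg) => {
    const headers = Object.fromEntries(
      (msg.payload?.headers || []).map((h) => [h.name.toLowerCase(), h.value])
    );
    return {
      id: msg.id,
      from: headers.from || "",
      to: headers.to || "",
      date: headers.date || "",
      subject: headers.subject || "",
      text: msg.snippet || "", // or a decoded body part if you fetch full message bodies
    };
  });
}
```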
by n8n Automation Expert | Template Creator | 2+ Years Experience
🌤️ Automated Indonesian Weather Monitoring with Smart Notifications Stay ahead of weather changes with this comprehensive monitoring system that fetches real-time data from Indonesia's official meteorological agency (BMKG) and delivers beautiful, actionable weather reports directly to your Telegram. ⚡ What This Workflow Does This intelligent weather monitoring system automatically: Fetches Official Data**: Connects to BMKG's public weather API for accurate Indonesian forecasts Smart Processing**: Analyzes temperature, humidity, precipitation, and wind conditions Risk Assessment**: Generates contextual warnings for extreme weather conditions Automated Alerts**: Sends formatted weather reports to Telegram every 6 hours Error Handling**: Includes robust error detection and notification system 🎯 Perfect For Local Communities**: Keep neighborhoods informed about weather changes Business Operations**: Plan outdoor activities and logistics based on weather Emergency Preparedness**: Receive early warnings for extreme weather conditions Personal Planning**: Never get caught unprepared by sudden weather changes Agricultural Monitoring**: Track conditions affecting farming and outdoor work 🛠️ Key Features 🔄 Automated Scheduling**: Runs every 6 hours with manual trigger option 📊 Comprehensive Reports**: Current conditions + 6-hour detailed forecasts ⚠️ Smart Warnings**: Contextual alerts for temperature extremes and rain probability 🎨 Beautiful Formatting**: Rich Telegram messages with emojis and structured data 🔧 Error Recovery**: Automatic error handling with notification system 📍 Location-Aware**: Supports any Indonesian location via BMKG regional codes 📋 What You'll Get Each weather report includes: Current temperature, humidity, and weather conditions 6-hour detailed forecast with timestamps Wind speed and direction information Rain probability and visibility data Personalized warnings and recommendations Average daily statistics and trends 🚀 Setup Requirements Telegram Bot Token**: Create a bot via @BotFather Chat ID**: Your personal or group chat identifier BMKG Location Code**: Regional administrative code for your area 💡 Pro Tips Customize the location by changing the adm4 parameter in the HTTP request Adjust scheduling interval based on your monitoring needs Modify warning thresholds in the processing code Add multiple chat IDs for broader distribution Integrate with other n8n workflows for advanced automation 🌟 Why Choose This Template Production Ready**: Includes comprehensive error handling and logging Highly Customizable**: Easy to modify for different locations and preferences Official Data Source**: Uses Indonesia's trusted meteorological service User-Friendly Output**: Clean, readable reports perfect for daily use Scalable Design**: Easily extend for multiple locations or notification channels Transform your weather awareness with this professional-grade monitoring system that brings Indonesia's official weather data right to your fingertips! Keywords: weather monitoring, BMKG API, Telegram notifications, Indonesian weather, automated alerts, meteorological data, weather forecasting, n8n automation, weather API integration
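To illustrate the "Smart Warnings" idea, the snippet below shows one possible threshold check layered on top of a parsed forecast entry. The field names (t for temperature, hu for humidity, tp for rain probability) and the threshold values are assumptions; match them to the actual BMKG response for your adm4 code.

```javascript
// Illustrative warning logic on top of a parsed BMKG forecast entry.
// Field names (t, hu, tp) and thresholds are assumptions -- align them with the real API response.
function buildWarnings(entry) {
  const warnings = [];
  if (entry.t >= 35) warnings.push("🥵 Extreme heat expected, limit outdoor activity");
  if (entry.t <= 18) warnings.push("🥶 Unusually cold conditions");
  if (entry.hu >= 90) warnings.push("💧 Very high humidity");
  if (entry.tp >= 60) warnings.push("🌧️ High chance of rain, carry an umbrella");
  return warnings.length ? warnings : ["✅ No significant weather risks"];
}

console.log(buildWarnings({ t: 36, hu: 92, tp: 70 }));
```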
by Olivier
This template syncs prospects from ProspectPro into HubSpot. It checks if a company already exists in HubSpot (by ProspectPro ID or domain), then updates the record or creates a new one. Sync results are logged back in ProspectPro with tags to prevent duplicates and mark errors, ensuring reliable and repeatable integrations. ✨ Features Automatically sync ProspectPro prospects to HubSpot companies Smart search logic: match by ProspectPro ID first, then by domain Creates new HubSpot companies when no match is found Updates existing HubSpot companies with latest ProspectPro data Logs sync results back into ProspectPro with tags (HubspotSynced, HubspotSyncFailed) Extendable and modular: use as a trigger workflow or callable sub-flow ⚙ Requirements n8n instance or cloud workspace Install the ProspectPro Verified Community Node ProspectPro account & API credentials (14-day free trial) HubSpot account with OAuth2 app and API credentials 🔧 Setup Instructions Import the template and set your credentials (ProspectPro, HubSpot). Connect to a trigger (e.g., ProspectPro "New website visitor") or call as a sub-workflow. Add a property to HubSpot for the ProspectPro ID if you don't already have one Adjust sync logic in the "Continue?" node and HubSpot fields to match your setup. Optional: extend error handling, add Slack/CRM notifications, or sync back HubSpot data into ProspectPro. 🔐 Security Notes Prevents re-processing of failed syncs using the HubspotSyncFailed tag Error branches included for failed updates/creates Manual resolution required if sync errors persist 🧪 Testing Run with a ProspectPro ID of a company with a known domain Check HubSpot for creation or update of the company record Verify updated tags (HubspotSynced / HubspotSyncFailed) in ProspectPro 📌 About ProspectPro ProspectPro is a B2B Prospecting Platform for Dutch SMEs. It helps sales teams identify prospects, track website visitors, and streamline sales without a full CRM. Website: https://www.prospectpro.nl Platform: https://mijn.prospectpro.nl API docs: https://www.docs.bedrijfsdata.nl Support: https://www.prospectpro.nl/klantenservice Support hours: Monday–Friday, 09:00–17:00 CET 📌 About HubSpot HubSpot is a leading CRM platform offering marketing, sales, and customer service tools. It helps companies manage contacts, automate workflows, and grow their customer base. Website: https://www.hubspot.com Developer Docs: https://developers.hubspot.com
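The matching order described above ("ProspectPro ID first, then domain") boils down to a small decision function like the sketch below. The property name prospectpro_id is a placeholder for whatever custom HubSpot property you created in the setup step; domain is HubSpot's standard company property.

```javascript
// Illustrative sketch of the "match by ProspectPro ID first, then by domain" decision.
// "prospectpro_id" is a placeholder for your own custom HubSpot property name.
function decideAction(prospect, hubspotCompanies) {
  const byId = hubspotCompanies.find(
    (c) => c.properties?.prospectpro_id === String(prospect.id)
  );
  if (byId) return { action: "update", companyId: byId.id };

  const byDomain = hubspotCompanies.find(
    (c) => (c.properties?.domain || "").toLowerCase() === (prospect.domain || "").toLowerCase()
  );
  if (byDomain) return { action: "update", companyId: byDomain.id };

  return { action: "create" };
}
```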
by Rully Saputra
Automate Lighthouse report alerts to messenger and Google Sheets Who’s it for This workflow is ideal for developers, SEO specialists, performance engineers, and digital agencies who want to continuously monitor website performance using Core Web Vitals. It’s also perfect for product or infrastructure teams that need real-time alerts when a site underperforms and want a historical log of reports in Google Sheets. What it does This automation periodically fetches a Lighthouse report from the PageSpeed Insights API, checks whether any of the Core Web Vitals (CWV) scores fall below a defined threshold, and sends a notification to Telegram (or any other preferred messenger). It also logs the summarized report in a specific row within a Google Spreadsheet for long-term tracking and reporting. The CWV audit results are processed using JavaScript and passed through a summarization step using Gemini Chat, making the audit descriptions concise and actionable. How to set up Configure the Schedule Trigger node to run at your preferred frequency. Set your target URLs and API Key, then connect the HTTP Request node to Google PageSpeed Insights. Update the JavaScript Code node to filter and parse only CWV metrics. Define thresholds in the IF Node to trigger Telegram messages only when needed. Connect your Telegram (or other messenger) credentials. Set up the Google Sheets node by linking your account and choosing the sheet and range to log data. Requirements Google account with access to Google Sheets Telegram bot token or any preferred messenger API key for PageSpeed Insights Gemini Chat integration (optional for summarization, can be replaced or removed) How to customize the workflow Swap Telegram for Slack, Discord, or email by replacing the Send Notification node. Adjust the CWV thresholds or include other Lighthouse metrics by modifying the IF Node and JavaScript logic. Add multiple URLs to monitor by introducing a loop or extending the schedule with different endpoints. Replace the Gemini Chat model with OpenAI, Claude, or your custom summarizer if needed.
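For a concrete picture of the threshold check, the sketch below pulls a few Core Web Vitals out of a PageSpeed Insights response (the standard lighthouseResult.audits shape) and flags anything over budget. The thresholds shown are the common "good" targets, but treat the exact metric list and limits as assumptions to tune in your own IF node.

```javascript
// Illustrative sketch: extract CWV metrics from a PageSpeed Insights response
// and decide whether a notification should fire. Thresholds are examples.
function checkCwv(psi) {
  const audits = psi.lighthouseResult?.audits || {};
  const metrics = {
    lcpMs: audits["largest-contentful-paint"]?.numericValue,
    cls: audits["cumulative-layout-shift"]?.numericValue,
    tbtMs: audits["total-blocking-time"]?.numericValue,
    performance: (psi.lighthouseResult?.categories?.performance?.score ?? 0) * 100,
  };

  const alerts = [];
  if (metrics.lcpMs > 2500) alerts.push(`LCP ${Math.round(metrics.lcpMs)} ms (target <= 2500 ms)`);
  if (metrics.cls > 0.1) alerts.push(`CLS ${metrics.cls.toFixed(3)} (target <= 0.1)`);
  if (metrics.performance < 80) alerts.push(`Performance score ${Math.round(metrics.performance)}`);

  return { metrics, alerts, shouldNotify: alerts.length > 0 };
}
```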
by Rahul Joshi
📘 Description: This workflow automates the entire release note creation and announcement process whenever a task status changes in ClickUp. Using Azure OpenAI GPT-4o, Notion, Slack, Gmail, and Google Sheets, it converts technical task data into clear, structured, and branded release notes — ready for documentation and team broadcast. The flow captures task details, generates Markdown-formatted FAQs, documents them in Notion, formats professional Slack messages, and notifies the task owner via HTML email. Any failed payloads or validation errors are logged automatically to Google Sheets for full traceability. The result is a zero-touch release workflow that saves time, keeps communication consistent, and ensures every completed feature is clearly documented and shared. ⚙️ What This Workflow Does (Step-by-Step) 🟢 ClickUp Task Status Trigger Listens for task status updates (e.g., In Review → Complete) within the specified ClickUp team. Whenever a task reaches a completion state, this node starts the release note workflow automatically. 🔍 Validate ClickUp Payload (IF Node) Checks that the incoming ClickUp webhook contains a valid task_id. ✅ True Path: Proceeds to fetch task details. ❌ False Path: Logs the invalid payload to Google Sheets for review. 📋 Fetch Task Details from ClickUp Retrieves full information about the task using the task_id, including title, description, status, assignee, priority, and custom fields. Provides complete task context for AI processing. 🧩 Parse Task Details in JavaScript Cleans and standardizes task data into JSON format with fields like title, description, priority, owner, due date, and task URL. Also extracts optional links (e.g., GitHub references). Ensures consistent, structured input for the AI model. 🧠 Configure GPT-4o Model (Azure OpenAI) Initializes GPT-4o as the core reasoning engine for FAQ and release-note generation, ensuring context-aware and concise output. 🤖 Generate Release Notes FAQ (AI Agent) Transforms task details into a Markdown-formatted release note under four standardized sections: 1️⃣ What changed 2️⃣ Why 3️⃣ How to use 4️⃣ Known issues Each section is written clearly and briefly for internal and external readers. 📘 Save Release Notes to Notion Creates a new page in the Notion “Release Notes” database. Includes task URL, owner, status, priority, and the full AI-generated FAQ content. Serves as the single source of truth for changelogs and release documentation. 💬 Configure GPT-4o Model (Slack Formatting) Prepares another GPT-4o model instance for formatting Slack-ready announcements in a professional and brand-consistent tone. 🎨 Generate Slack Release Announcement (AI Agent) Converts the Notion release information into a polished Slack message. Adds emojis, bullet points, and a clickable task URL — optimized for quick team consumption. 📢 Announce Release in Slack Posts the AI-formatted message directly to the internal Slack channel, notifying the team of the latest feature release. Keeps everyone aligned without manual drafting or posting. 📨 Send Acknowledgment Email to Assignee (Gmail Node) Sends an automated HTML email to the task owner confirming that their release is live. Includes task name, status, priority, release date, quick links to Notion and ClickUp, and a preview of the AI-generated FAQ. Delivers a professional confirmation while closing the communication loop. 
🚨 Log Errors in Google Sheets Captures all payload validation errors, API failures, or processing exceptions into an “Error Log Sheet.” Ensures complete auditability and smooth maintenance of the workflow. 🧩 Prerequisites ClickUp API credentials (for task triggers & data fetch) Azure OpenAI (GPT-4o) credentials Notion API integration (for release documentation) Slack API connection (for announcements) Gmail API access (for acknowledgment emails) Google Sheets API access (for error logging) 💡 Key Benefits ✅ Converts completed tasks into professional release notes automatically ✅ Publishes directly to Notion with consistent documentation ✅ Broadcasts updates to Slack in clean, branded format ✅ Notifies assignees instantly via personalized HTML email ✅ Maintains transparent error tracking in Google Sheets 👥 Perfect For Product & Engineering Teams managing frequent feature releases SaaS companies automating changelog and release documentation Project managers maintaining internal knowledge bases Teams using ClickUp, Notion, Slack, and Gmail for daily operations
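To show what the "Parse Task Details in JavaScript" step boils down to, here is a hedged sketch that normalizes a ClickUp task payload into the fields the AI prompt expects. The exact field paths depend on your ClickUp response, so treat them as assumptions to verify against a real execution.

```javascript
// Illustrative sketch of normalizing a ClickUp task payload for the AI agent.
// Field paths (status.status, priority.priority, due_date, etc.) should be
// verified against your actual ClickUp API response.
function parseTask(task) {
  const description = task.description || task.text_content || "";
  const githubLinks = description.match(/https:\/\/github\.com\/\S+/g) || [];

  return {
    title: task.name,
    description,
    status: task.status?.status,
    priority: task.priority?.priority || "normal",
    owner: task.assignees?.[0]?.email || "unassigned",
    dueDate: task.due_date ? new Date(Number(task.due_date)).toISOString() : null,
    taskUrl: task.url,
    githubLinks,
  };
}
```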
by Samir Saci
Tags: Image Compression, Tinify API, TinyPNG, SEO Optimisation, E-commerce, Marketing Context Hi! I’m Samir — Supply Chain Engineer, Data Scientist based in Paris, and founder of LogiGreen. I built this workflow for an agency specialising in e-commerce to automate the daily compression of their images stored in a Google Drive folder. This is particularly useful when managing large libraries of product photos, website assets or marketing visuals that need to stay lightweight for SEO, website performance or storage optimisation. > Test this workflow with the free tier of the API! 📬 For business inquiries, you can find me on LinkedIn Who is this template for? This template is designed for: **E-commerce managers** who need to keep product images optimised **Marketing teams** handling large volumes of visuals **Website owners** wanting automatic image compression for SEO **Anyone using Google Drive** to store images that gradually become too heavy What does this workflow do? This workflow acts as an automated image compressor and reporting system using Tinify, Google Drive, and Gmail. Runs every day at 08:00 using a Schedule Trigger Fetches all images from the Google Drive Input folder Downloads each file and sends it to the Tinify API for compression Downloads the optimised image and saves it to the Compressed folder Moves the original file to the Original Images archive Logs: fileName, originalSize, compressedSize, imageId, outputUrl and processingId into a Data Table After processing, it retrieves all logs for the current batch Generates a clean HTML report summarising the compression results Sends the report via Gmail, including total space saved Here is an example from my personal folder: Here is the report generated for these images: P.S.: You can customise the report to match your company branding or visual identity. 🎥 Tutorial A complete tutorial (with explanations of every node) is available on YouTube: Next Steps Before running the workflow, follow the sticky notes and configure the following: Get your Tinify API key for the free tier Replace Google Drive folder IDs in: Input, Compressed, and Original Images Replace the Data Table reference with your own (fields required: fileName, originalSize, compressedSize, imageId, outputUrl, processingId) Add your Tinify API key in the HTTP Basic Auth credentials Set up your Gmail credentials and recipient email (Optional) Customise the HTML report in the Generate Report Code node (Optional) Adjust the daily schedule to your preferred time Submitted: 18 November 2025 Template designed with n8n version 1.116.2
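If you want to see roughly what the Generate Report Code node computes, the sketch below totals the savings from the batch's Data Table rows and renders a simple HTML table. The row fields follow the Data Table columns listed above; the markup itself is a plain example you would restyle to your branding.

```javascript
// Illustrative sketch of the report step: total savings plus a simple HTML table.
// Row fields match the Data Table columns listed above (fileName, originalSize, compressedSize).
function buildReport(rows) {
  const totalOriginal = rows.reduce((sum, r) => sum + Number(r.originalSize), 0);
  const totalCompressed = rows.reduce((sum, r) => sum + Number(r.compressedSize), 0);
  const savedPct = totalOriginal
    ? ((1 - totalCompressed / totalOriginal) * 100).toFixed(1)
    : "0";

  const lines = rows
    .map((r) => `<tr><td>${r.fileName}</td><td>${r.originalSize}</td><td>${r.compressedSize}</td></tr>`)
    .join("");

  return `
    <h2>Daily image compression report</h2>
    <p>${rows.length} images processed, ${savedPct}% total space saved.</p>
    <table border="1" cellpadding="6">
      <tr><th>File</th><th>Original (bytes)</th><th>Compressed (bytes)</th></tr>
      ${lines}
    </table>`;
}
```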