by Harshil Agrawal
This workflow allows you to create, update, and retrieve an item from Webflow.

Webflow node: Creates a new item in the Team Members collection in Webflow. If you want to create an item in a different collection, select that collection instead.
Webflow1 node: Updates the item created by the previous node.
Webflow2 node: Retrieves the information of the item created earlier.
by Harshil Agrawal
This workflow allows you to create, update, and retrieve a record from FileMaker.

FileMaker node: Creates a new record in FileMaker.
FileMaker1 node: Updates the record created by the previous node by setting an additional field value.
FileMaker2 node: Retrieves the information about the record created earlier.
by Harshil Agrawal
This workflow allows you to add, commit, and push changes to a Git repository.

Git node: Adds the README.md file to the staging area. If you want to add a different file, enter the path of that file instead.
Git1 node: Commits all the changes that were added to the staging area by the previous node.
Git2 node: Returns the commit log of your repository.
Git3 node: Pushes the changes to the remote repository.
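For readers who prefer to see the same sequence as code, here is the equivalent of the four nodes expressed with the simple-git npm package. This is an illustration of the underlying Git operations, not necessarily what the workflow itself executes; the repository path and commit message are placeholders.

```javascript
// Illustrative equivalent of the four Git nodes, using the simple-git
// npm package. The repository path and commit message are placeholders.
const simpleGit = require('simple-git');

async function run() {
  const git = simpleGit('/path/to/your/repo');

  await git.add('README.md');                 // Git node: stage the file
  await git.commit('Update README.md');       // Git1 node: commit staged changes
  const log = await git.log();                // Git2 node: read the commit log
  console.log(log.latest);
  await git.push();                           // Git3 node: push to the configured remote
}

run().catch(console.error);
```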
by Muhammad Sajid
TruePeopleSearch Scraper for Skip Tracers

Enrich any list of people with verified contact info using this workflow. This n8n automation scrapes TruePeopleSearch using Zyte's extraction API to safely bypass bot protection and extract detailed profiles. It's built for data brokers, skip tracers, and real estate professionals who need clean contact data (phone, email, address) from names alone — even when the main profile is empty.

If the original profile lacks a phone number, the workflow intelligently scrapes one of their listed relatives instead — giving you the best possible chance of finding a valid number.

What this workflow does

- Pulls lead data (first name, last name, and custom search URL) from a Google Sheet
- Sends the TruePeopleSearch search URL to Zyte's Scraping API to retrieve the search results HTML
- Parses the first matching profile link from the results (by full name > first name > last name; see the sketch at the end of this description)
- Visits that profile page and extracts:
  - Full Name
  - Age / Date of Birth
  - Primary Phone Number
  - Other Phone Numbers
  - Email Addresses
  - Current Address
- If no phone numbers are found:
  - Detects a relative's profile link
  - Scrapes the relative's profile for fallback contact data
- Writes all scraped information (or empty fields) back into the same row in Google Sheets

You'll need

- **n8n (self-hosted or cloud)** To run and automate the workflow
- **Zyte Scraping API** A Zyte account + API key to access their /extract endpoint (use HTTP Basic Auth in the HTTP Request node)
- **Google Sheets integration** Your own lead sheet with headers like:
  - row_number (used to write back to the correct row)
  - First Name
  - Last Name
  - SearchURL (Search by Address)
- **Basic JavaScript familiarity (optional)** To tweak the HTML parsing logic for profile structure changes

Example Google Sheet

Use this Google Sheet as a template for your inputs and outputs:
👉 TruePeopleSearch Lead Template (Google Sheet)

Disclaimer

- TruePeopleSearch may change its structure or block heavy scraping — always test at small scale first
- This workflow is built to simulate human behavior via Zyte's smart rendering — scraping is still subject to site limitations
- Use ethically and within your local data usage laws

Categories

Data Enrichment · Scraping Automation · Lead Generation · Skip Tracing

Feel free to drop me an email at sajid@marketingbyprof.com if you need help building a custom scraping automation for your business.
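For those who want to tweak the parsing logic, here is a minimal sketch of what the profile-link selection step could look like in an n8n Code node (run once per item). The href pattern and input field names are assumptions; match them to the actual TruePeopleSearch markup and your sheet's column headers.

```javascript
// Hypothetical "pick profile link" Code node (run once per item).
// The href pattern and field names are assumptions -- adjust to the
// real TruePeopleSearch markup and your sheet's column headers.
const html = $json.browserHtml || '';
const first = ($json['First Name'] || '').toLowerCase();
const last = ($json['Last Name'] || '').toLowerCase();

// Collect candidate profile links from the search-results HTML
const links = [...html.matchAll(/href="(\/find\/person\/[^"]+)"/g)].map(m => m[1]);

// Prefer full-name matches, then first name, then last name
const pick =
  links.find(l => l.includes(first) && l.includes(last)) ||
  links.find(l => l.includes(first)) ||
  links.find(l => l.includes(last)) ||
  null;

return { json: { ...$json, profileUrl: pick ? `https://www.truepeoplesearch.com${pick}` : null } };
```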
by WeblineIndia
⚙️ Advanced Equipment Health Monitor with MS Teams Integration (n8n | API | Google Sheets | MSTeams)

This n8n workflow automatically monitors equipment health by fetching real-time metrics like temperature, voltage and operational status. If any of these parameters cross critical thresholds, an alert is instantly sent to a Microsoft Teams channel and the event is logged in Google Sheets. The workflow runs every 15 minutes by default.

⚡ Quick Implementation Steps

1. Import the workflow JSON into your n8n instance.
2. Open the "Set Config" node and update:
   - API endpoint
   - Teams webhook URL
   - Threshold values
   - Google Sheet ID
3. Activate the workflow to start receiving alerts every 15 minutes.

🎯 Who's It For

- Renewable energy site operators (solar, wind)
- Plant maintenance and operations teams
- Remote infrastructure monitoring services
- IoT-integrated energy platforms
- Enterprise environments using Microsoft Teams

🛠 Requirements

| Tool | Purpose |
|------|---------|
| n8n Instance | To run and schedule automation |
| HTTP API | Access to your equipment or IoT platform health API |
| Microsoft Teams | Incoming Webhook URL configured |
| Google Sheets | Logging and analytics |
| SMTP (optional) | For email-based alternatives or expansions |

🧠 What It Does

- **Runs every 15 minutes** to check the latest equipment metrics.
- **Compares values** (temperature, voltage, status) against configured thresholds.
- **Triggers a Microsoft Teams message** when a threshold is breached.
- **Appends the alert data** to a Google Sheet for logging and review.

🧩 Workflow Components

- **Set Node:** Configures thresholds, endpoints, webhook URL and Sheet ID.
- **Cron Node:** Triggers the check every 15 minutes.
- **HTTP Request Node:** Pulls data from your equipment health monitoring API.
- **IF Node:** Evaluates whether conditions are within or outside defined limits (see the sketch below).
- **MS Teams Alert Node:** Sends structured alerts using a Teams incoming webhook.
- **Google Sheets Node:** Logs alert details for recordkeeping and analytics.

🔧 How To Set Up – Step-by-Step

1. Import Workflow: In n8n, click Import and upload the provided .json file.
2. Update Configurations: Open the Set Config node and replace the placeholder values:
   - apiEndpoint: URL to fetch equipment data.
   - teamsWebhookUrl: Your MS Teams channel webhook.
   - temperatureThreshold: Example = 80
   - voltageThreshold: Example = 400
   - googleSheetId: Google Sheet ID (must be shared with the n8n service account).
3. Check Webhook Integration: Ensure your MS Teams webhook is properly authorized and points to a live channel.
4. Run & Monitor: Enable the workflow, view logs/alerts, and adjust thresholds as needed.
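As a reference for extending the comparison logic (for example, when adding humidity or pressure), here is a rough sketch of the threshold check expressed as an n8n Code node. The template itself uses an IF node; this states the same comparison in code, and the metric field names are assumptions about your API's response shape.

```javascript
// Rough sketch of the threshold check as an n8n Code node. The template
// uses an IF node; this expresses the same comparison. The metric field
// names are assumptions -- match them to your equipment API's response.
const cfg = $('Set Config').first().json;   // thresholds from the Set node
const reading = $input.first().json;        // latest metrics from the HTTP Request node

const breaches = [];
if (reading.temperature > cfg.temperatureThreshold) {
  breaches.push(`Temperature ${reading.temperature} > ${cfg.temperatureThreshold}`);
}
if (reading.voltage > cfg.voltageThreshold) {
  breaches.push(`Voltage ${reading.voltage} > ${cfg.voltageThreshold}`);
}
if (reading.status && reading.status !== 'OK') {
  breaches.push(`Status is "${reading.status}"`);
}

return [{ json: { alert: breaches.length > 0, breaches } }];
```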
🧪 How To Customize

| Customization | How |
|---------------|-----|
| Add more parameters (humidity, pressure) | Extend the HTTP + IF node conditions |
| Change alert frequency | Edit the Cron node |
| Use Slack or Email instead of Teams | Replace the MS Teams node with a Slack or Email node |
| Add PDF report generation | Use an HTML → PDF node and email the report |
| Export to a database | Add a PostgreSQL or MySQL node instead of Google Sheets |

➕ Add‑ons (Advanced)

| Add-on | Description |
|--------|-------------|
| 📦 Auto-Ticketing | Auto-create issues in Jira, Trello or ClickUp for serious faults |
| 📊 Dashboard Sync | Send real-time logs to BigQuery or InfluxDB |
| 🧠 Predictive Alerts | Use machine learning APIs to flag anomalies |
| 🗂 Daily Digest | Compile all incidents into a daily summary email or Teams post |
| 📱 Mobile Alert | Integrate Twilio for SMS or WhatsApp notifications |

📈 Example Use Cases

- Monitor solar inverter health for overheating or voltage drops.
- Alert field engineers via Teams when a wind turbine sensor fails.
- Log and visualize hardware issues for weekly analytics.
- Automate SLA compliance tracking through timely notifications.
- Ensure distributed infrastructure (e.g., substations) is always within operational range.

🧯 Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No Teams alert | Incorrect webhook URL or formatting | Recheck the Teams webhook and payload |
| Workflow not triggering | Cron node misconfigured | Ensure it's set to run every 15 minutes and the workflow is active |
| Google Sheet not updating | Sheet ID is wrong or not shared | Share the Sheet with your n8n Google service account |
| No data from API | Endpoint URL is down or wrong | Test the endpoint manually with Postman or a browser |

📞 Need Assistance?

Need help tailoring this to your exact equipment type or expanding the workflow?
👉 Contact WeblineIndia – Expert automation partners for renewable energy, infrastructure and enterprise workflows.
by Nguyen Thieu Toan
🎬 TikTok Video Downloader (No Watermark) - Telegram Bot

> Download TikTok videos instantly without watermarks via Telegram
> Fast, reliable, and user-friendly automated workflow

✨ What This Workflow Does

This powerful automation turns your Telegram bot into a TikTok video downloader. Simply send any TikTok link, and the bot will:

- ✅ Validate the URL automatically
- ⚡ Extract the video without a watermark
- 📊 Display video statistics (views, likes, author)
- 🚀 Send the clean video file directly to you

No ads. No watermarks. Pure automation magic.

🎯 Key Features

| Feature | Description |
|---------|-------------|
| 🔍 Smart Validation | Automatically checks if the link is a valid TikTok URL |
| 💬 Real-time Feedback | Keeps users informed with status messages at every step |
| ⚠️ Error Handling | Catches and explains errors in user-friendly language |
| 📈 Video Analytics | Shows author name, view count, and likes |
| 🎥 High Quality | Downloads original video quality without the TikTok watermark |
| ⚡ Fast Processing | Optimized HTTP requests with proper headers and timeouts |

🔧 How It Works

Workflow Flow Diagram

📱 User sends TikTok link
↓
✅ URL Validation
├─ Valid → Continue
└─ Invalid → Send error message
↓
💬 Send "Processing..." status
↓
🌐 Fetch TikTok page HTML
↓
🔍 Extract video URL from page data
↓
⬇️ Download video file (no watermark)
↓
📤 Send video to user with stats

Technical Process

1. Trigger Reception: Telegram webhook receives the user message
2. URL Validation: IF node checks for tiktok.com or vm.tiktok.com domains
3. User Feedback: Bot sends an "uploading video..." chat action + status message
4. Variable Configuration: Stores chat ID and video URL for later use
5. HTML Fetching: HTTP request to TikTok with browser-like headers
6. Data Extraction: JavaScript code parses the UNIVERSAL_DATA_FOR_REHYDRATION JSON (a sketch appears after the setup steps below)
7. Video Download: HTTP request with proper cookies and referrer headers
8. Delivery: Telegram sends the video file with a formatted caption including stats

Error Handling Strategy

Each critical node (HTTP requests, code execution) has error output enabled:

- **On Success**: Continues to the next processing step
- **On Error**: Routes to the "Format Error" → "Send Error Message" path
- **User Experience**: Clear, actionable error messages instead of silent failures

🚀 Set Up Steps

Prerequisites

- ✅ n8n instance (v1.116.0 or higher)
- ✅ Telegram Bot Token (create via @BotFather)
- ✅ Basic understanding of n8n workflows

Step 1: Import Workflow

1. Copy the workflow JSON
2. In n8n, click "+ Add workflow" → "Import from JSON"
3. Paste the JSON and click "Import"

Step 2: Configure Telegram Credentials

1. Click on any Telegram node
2. Select "Create New Credential" in the Credentials dropdown
3. Enter your Bot Token from @BotFather
4. Click "Save"
5. All Telegram nodes will automatically use this credential

Step 3: Enable Error Handling ⚠️ CRITICAL

You MUST manually configure error outputs on these 3 nodes:

Node: "Get TikTok Page HTML"
1. Click the node → Settings tab
2. Find the "On Error" section
3. Select "Continue With Error Output"
4. Click Save

Node: "Extract Video URL"
1. Click the node → Settings tab
2. Set "On Error" to "Continue With Error Output"
3. Click Save

Node: "Download Video File"
1. Click the node → Settings tab
2. Set "On Error" to "Continue With Error Output"
3. Click Save

> 💡 Why? n8n cannot import error handling settings via JSON. This manual step ensures errors are caught instead of crashing the workflow.
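Before moving on, here is a condensed sketch of what the "Extract Video URL" node you just configured actually does. The property holding the HTML depends on your HTTP node's response settings, and the script-tag ID and JSON path reflect TikTok's page structure at the time of writing; these are the parts most likely to break when TikTok changes its markup.

```javascript
// Condensed sketch of the "Extract Video URL" Code node. The property
// holding the HTML depends on the HTTP node's response settings; the
// script-tag ID and JSON path below reflect TikTok's current markup
// and are the parts most likely to break when TikTok changes it.
const html = $input.first().json.data || '';

const match = html.match(
  /<script id="__UNIVERSAL_DATA_FOR_REHYDRATION__"[^>]*>(.*?)<\/script>/s
);
if (!match) {
  throw new Error('Video data not found - the page structure may have changed');
}

const state = JSON.parse(match[1]);
const item = state?.__DEFAULT_SCOPE__?.['webapp.video-detail']?.itemInfo?.itemStruct;
if (!item) {
  throw new Error('Video may be private or deleted');
}

return [{
  json: {
    videoUrl: item.video?.playAddr,     // direct play address (no watermark)
    author: item.author?.nickname,
    views: item.stats?.playCount,
    likes: item.stats?.diggCount,
  },
}];
```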
Step 4: Activate Workflow

1. Click the "Active" toggle in the top-right corner
2. The workflow is now listening for Telegram messages

Step 5: Test Your Bot

1. Open Telegram and find your bot
2. Send a TikTok link like: https://www.tiktok.com/@user/video/123456789
3. Watch the magic happen! 🎉

🧪 Testing Scenarios

| Test Case | Input | Expected Output |
|-----------|-------|-----------------|
| Valid Video | Working TikTok link | ✅ Video file + stats caption |
| Invalid URL | hello world | ❌ "Please send valid TikTok link" |
| Deleted Video | Link to deleted video | ❌ "Video data not found" error |
| Private Video | Private account video | ❌ "Video may be private" error |
| Short Link | https://vm.tiktok.com/abc | ✅ Resolves and downloads |

🎨 Customization Ideas

Change Language
Edit the text in the Telegram nodes to translate messages:
"⏳ Downloading video..." → "⏳ Đang tải video..."

Add Video Compression
Insert a Compress node between "Download Video File" and "Send Video to User" for smaller files.

Store Statistics
Add a Google Sheets node after "Extract Video URL" to log:
- Video URL
- Author
- Views/Likes
- Download timestamp

Multi-Platform Support
Duplicate the workflow and modify the URL validation + extraction logic for Instagram, YouTube Shorts, etc.

Rate Limiting
Add a Wait node (2 seconds) before "Get TikTok Page HTML" to avoid IP bans.

🐛 Troubleshooting

Problem: Bot doesn't respond
- ✅ Check if the workflow is Active
- ✅ Verify the Telegram credentials are correct
- ✅ Check the Executions tab for errors

Problem: "Video data not found" error
- ✅ TikTok may have changed their HTML structure
- ✅ Update the regex in the "Extract Video URL" node
- ✅ Check if the video is actually deleted/private

Problem: Download fails
- ✅ Ensure "On Error" is set to "Continue With Error Output"
- ✅ Check if your IP is blocked by TikTok (use a VPN)
- ✅ Verify the headers in the "Download Video File" node

Problem: Error messages not appearing
- ✅ Double-check the error output connections (red dots)
- ✅ Make sure the "Format Error" node references the correct variables
- ✅ Test by intentionally breaking a node (invalid URL)

📊 Performance Metrics

| Metric | Value |
|--------|-------|
| Average Processing Time | 5-10 seconds |
| Success Rate | ~95% (valid public videos) |
| Max Video Size | Limited by Telegram (50MB) |
| Concurrent Users | Unlimited (webhook-based) |

🔐 Privacy & Security

- ✅ No Data Storage: Videos are streamed directly to users, not stored
- ✅ No Logging: User IDs and links are processed in-memory only
- ✅ Secure Headers: Mimics browser requests to avoid detection
- ✅ Error Sanitization: Sensitive data is filtered from error messages

📚 Technical Stack

- **n8n Version**: 1.116.0+
- **Node Types Used**:
  - telegramTrigger (v1.2)
  - telegram (v1.2)
  - if (v2.2)
  - set (v3.4)
  - httpRequest (v4.2)
  - code (v2)
  - stickyNote (v1)
- **External APIs**: TikTok CDN, Telegram Bot API

🎓 Learning Resources

Want to understand the workflow better? Check these concepts:
- n8n Error Handling
- Telegram Bot API
- HTTP Request Headers
- JavaScript Code Node

🤝 Contributing

Found a bug? Have an improvement idea?
1. Test your changes thoroughly
2. Document any new nodes or logic
3. Share your enhanced workflow with the community
4. Credit the original author (see below)

👨‍💻 About the Author

Nguyen Thieu Toan
n8n Automation Specialist & Workflow Creator

- 🌐 Website: nguyenthieutoan.com
- 📧 Contact: Available on website
- 🎯 Specialty: Building production-ready n8n workflows for real-world automation

> "I create workflows that just work. No fluff, no complexity—just reliable automation that saves time and solves problems."
Other Workflows by Nguyen Thieu Toan

- 🎵 Spotify to YouTube Playlist Converter
- 📸 Instagram Media Downloader Bot
- 📊 Multi-Channel Social Media Scheduler
- 🔄 Automated Content Repurposing Pipeline

Visit nguyenthieutoan.com for more automation workflows and tutorials.

📝 License & Attribution

This workflow is provided free of charge for personal and commercial use.

Required Attribution:
- When sharing or modifying: Include the author name and website link
- When showcasing: Tag @nguyenthieutoan or link to nguyenthieutoan.com

Not Required But Appreciated:
- Star the workflow on the n8n community
- Share your success story
- Suggest improvements

🎉 Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.0 | 2025-10-22 | Added comprehensive error handling • Improved user feedback • Added video statistics • English language support • Enhanced documentation |
| 1.0 | 2025-10-21 | Initial release • Basic download functionality |

⭐ Support This Work

If this workflow saved you time:
- ⭐ Star it on the n8n community
- 📢 Share with fellow automation enthusiasts
- 💬 Leave feedback on nguyenthieutoan.com
- ☕ Buy me a coffee (link on website)

Happy Automating! 🚀

Last Updated: October 22, 2025
Workflow Name: TikTok Video Downloader (No Watermark) - Telegram Bot
Author: Nguyen Thieu Toan
by Ahmed Sherif
AI-Powered Lead Scraping Automation using APIFY Scraper and Gemini Filtering to Google Sheets

This is a fully automated, end-to-end pipeline designed to solve the challenge of inconsistent and low-quality lead data from large-scale scraping operations. The system programmatically fetches raw lead information from sources like Apollo or via Apify, processes it through an intelligent validation layer, and delivers a clean, deduplicated, ready-to-use dataset directly into Google Sheets. By integrating Google Gemini for data cleansing, it moves beyond simple presence checks to enforce data hygiene and standardization, ensuring that sales teams only engage with properly formatted and complete leads. This automation eliminates hours of manual data cleaning, accelerates the path from lead acquisition to outreach, and significantly improves the integrity of the sales pipeline.

Features

- **Batch Processing**: Systematically processes up to 1000 leads per batch and automatically loops through the entire dataset. This ensures stable, memory-efficient operation even with tens of thousands of scraped contacts.
- **AI Validation**: Google Gemini acts as a data quality gatekeeper. It validates the presence and plausible format of critical fields (e.g., First Name, Company Name) and cleanses data by correcting common formatting issues.
- **Smart Deduplication**: Before appending a new lead, the system cross-references its email address against the entire Google Sheet to prevent duplicate entries, ensuring a single source of truth.
- **Auto Lead IDs**: Generates a unique, sequential ID for every new lead in the format AP-DDMMYY-xxxx (see the sketch below). This provides a consistent reference key for tracking and CRM integration.
- **Data Quality Reports**: Delivers real-time operational visibility by sending a concise summary to a Telegram channel after each batch, detailing success, warning, and error counts.
- **Rate Limiting**: Incorporates a 30-second delay between batches to respect Google Sheets API limits, preventing throttling and ensuring reliable, uninterrupted execution.

How It Works

1. The workflow is initiated by an external trigger, such as a webhook, carrying the raw scraped data payload.
2. It authenticates and fetches the complete list of leads from the Apify or Apollo API endpoint.
3. The full list is automatically partitioned into manageable batches of 1000 leads for efficient processing.
4. Each lead is individually passed to the Gemini AI Agent, which validates that required fields like Name, Email, and Company are present and correctly formatted.
5. Validated leads are assigned a unique Lead ID, and all data fields are standardized for consistency.
6. The system performs a lookup in the target Google Sheet to confirm the lead's email does not already exist.
7. Clean, unique leads are appended as a new row to the designated spreadsheet.
8. A completion notice is sent via the Telegram Bot, summarizing the batch results with clear statistics.

Requirements

- Apify/Apollo API access credentials.
- A Google Cloud project with OAuth2 credentials for Google Sheets API access.
- A configured Telegram Bot with its API Token and a target Chat ID.
- A Google Gemini API Key for data validation and cleansing.

This system is ideal for sales and marketing operations teams managing high-volume lead generation campaigns, providing automated data quality assurance and accelerating pipeline development.
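A minimal sketch of the AP-DDMMYY-xxxx ID generation, written as an n8n Code node. How the template actually seeds the sequential counter is an assumption here; it may derive the counter from the sheet's current row count rather than the batch index used below.

```javascript
// Sketch of the AP-DDMMYY-xxxx lead ID generator as an n8n Code node.
// Seeding the counter from the batch index is an assumption; the
// template may instead derive it from the sheet's current row count.
function makeLeadId(seq, date = new Date()) {
  const dd = String(date.getDate()).padStart(2, '0');
  const mm = String(date.getMonth() + 1).padStart(2, '0');
  const yy = String(date.getFullYear()).slice(-2);
  return `AP-${dd}${mm}${yy}-${String(seq).padStart(4, '0')}`;
}

// e.g. makeLeadId(7) on 22 Oct 2025 -> "AP-221025-0007"
return $input.all().map((item, i) => ({
  json: { ...item.json, leadId: makeLeadId(i + 1) },
}));
```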
by Open Paws
🎯 Who's it for

ESG analysts, investors, procurement teams, activists and sustainability professionals who need comprehensive, objective assessments of companies' environmental impact and animal welfare policies.

Perfect for:
- Due diligence and investment screening
- Supplier evaluation and ethical sourcing
- Compliance reporting and ESG benchmarking
- Consumer guidance for ethical purchasing decisions

⚡ How it works

This workflow automates the entire research and analysis process for comprehensive sustainability and animal welfare assessment. Simply input a company name, and the system handles everything:

🔍 Multi-Source Research: Calls a specialized subworkflow that queries:
- The Open Paws database for animal welfare data
- Web scraping for sustainability reports
- Search engines for recent developments
- Social media monitoring for real-time insights

🤖 Parallel AI Analysis: Two specialized chains process data simultaneously:
- **Structured scoring** with percentages and letter grades (A+ to D; see the sketch below)
- **Detailed HTML reports** with narrative analysis and insights

📊 Complete Assessment: The final output combines both formats for actionable intelligence on:
- Environmental policies and carbon footprint
- Animal welfare practices and ethical sourcing
- Vegan accommodation and plant-based initiatives

📋 Requirements

Prerequisites:
- Download the research subworkflow **Multi-Tool Research Agent for Animal Advocacy with OpenRouter, Serper & Open Paws DB** and save it in your n8n instance
- An API key for OpenRouter or another AI service provider

🚀 How to set up

1. Install Research Subworkflow: First download the Multi-Tool Research Agent for Animal Advocacy with OpenRouter, Serper & Open Paws DB and import it into your n8n instance
2. Configure API Keys: Set up your AI service credentials in the LLM nodes
3. Link Subworkflow: Connect the Research Agent node to reference your installed research subworkflow
4. Test Connection: Verify the research tools and databases are accessible
5. Run Test: Input a well-known company name to validate the complete pipeline

🛠️ How to customize the workflow

- **Scoring Weights**: Adjust the percentage weightings for environmental impact, animal welfare, and vegan accommodation
- **Research Sources**: Modify the subworkflow to include additional databases or exclude certain sources
- **Output Format**: Customize the HTML report template or JSON schema structure
- **Grading Scale**: Change the letter grade thresholds (A+, A, B+, etc.) in the scoring logic
- **Assessment Focus**: Adapt the prompts to emphasize specific sustainability or animal welfare aspects for your industry
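Here is a minimal sketch of the percentage-to-letter-grade step as an n8n Code node. The exact thresholds are assumptions; align them with whatever grading scale you configure in the scoring chain.

```javascript
// Minimal sketch of the percentage-to-letter-grade mapping. The
// thresholds are assumptions; align them with the grading scale you
// configure in the scoring chain.
function letterGrade(pct) {
  if (pct >= 97) return 'A+';
  if (pct >= 90) return 'A';
  if (pct >= 85) return 'B+';
  if (pct >= 80) return 'B';
  if (pct >= 75) return 'C+';
  if (pct >= 70) return 'C';
  return 'D';
}

// e.g. { environment: 82, animalWelfare: 91, veganAccommodation: 68 }
const scores = $input.first().json;
return [{
  json: Object.fromEntries(
    Object.entries(scores).map(([k, v]) => [k, { score: v, grade: letterGrade(v) }])
  ),
}];
```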
by Oneclick AI Squad
This n8n workflow receives files sent in a Telegram chat, uploads them to Google Drive, extracts text using OCR (for images and PDFs), and stores the extracted content in Airtable for quick search and retrieval. Users can later search through documents using the Telegram /search command.

Key Features
- Accepts images and documents from Telegram
- Uploads files to Google Drive automatically
- Detects file type and runs OCR if eligible
- Extracts text from images & PDFs via Google Vision
- Stores file metadata + text in Airtable
- Search documents using the /search command in Telegram (see the routing sketch below)
- Sends result previews and file links
- Error handling & user notifications included

Use Cases
- Personal document vault with search
- Team knowledge filing system
- Receipt & invoice OCR archive
- Legal document storage & retrieval
- Research papers & notes indexing
- Company file inbox for an AI knowledge base

Workflow Steps

| Step | Action | Description |
|------|--------|-------------|
| 1 | Telegram Trigger | Detects incoming docs/images or the /search command |
| 2 | Filter File or Search | Routes based on whether the message has a file or a search command |
| 3 | Extract Metadata | Reads file info such as name, MIME type, user |
| 4 | Download File | Downloads the file via the Telegram API |
| 5 | Upload to Drive | Saves the file in Google Drive |
| 6 | OCR Check | Determines if the file supports OCR |
| 7 | Google OCR | Runs OCR for images/PDFs |
| 8 | Extract Text | Pulls the text output from OCR |
| 9 | Merge OCR Text | Combines file data + text |
| 10 | Save to Airtable | Indexes with metadata + text |
| 11 | Success Reply | Sends link + success message |
| 12 | /search Flow | Parses the search query |
| 13 | Airtable Search | Full-text search for records |
| 14 | Send Results | Sends matches to Telegram |
| 15 | Error Handler | Notifies the user on failure |

Input Formats

File messages supported:
- Images
- PDFs
- Documents

Search command:
/search keyword
Example: /search invoice

Output

After upload:
✅ File saved & indexed successfully!
🔗 Drive Link: <link>

After search, returns a structured result:
- File name
- Preview text snippet
- Google Drive link

Data Stored in Airtable

| Field | Description |
|-------|-------------|
| File Name | Original name |
| File Link | Google Drive link |
| MIME Type | File type |
| Telegram User | Sender info |
| OCR Text | Extracted searchable text |
| Uploaded Date | Timestamp |

Technical Requirements
- Telegram Bot Token
- Google Drive API connection
- Google Vision API key
- Airtable API key & table

Benefits
- Automatically organizes Telegram files
- Makes PDFs & images searchable
- Saves manual sorting and indexing time
- AI-ready data storage (future LLM integration)
- Fast search experience right in Telegram

Enhancement Ideas
- Add Whisper for voice message transcription
- Add ChatGPT summarization for large docs
- Build a dashboard for uploaded files
- Auto-tag documents (invoice, ID, receipt, etc.)
- Multi-language OCR support

Status
- ✅ Ready for production
- ✅ Handles images, PDFs, and files
- ✅ End-to-end automation
- 🛠 Optional: add more AI enrichment later
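A small sketch of the step-2 routing logic (Filter File or Search) as an n8n Code node run once per item. The field paths follow the Telegram Bot API update shape, while the route names are illustrative rather than the workflow's own.

```javascript
// Sketch of the "Filter File or Search" routing step as an n8n Code
// node. Field paths follow the Telegram Bot API update shape; the
// route names are illustrative, not the workflow's own.
const msg = $json.message || {};
const text = (msg.text || '').trim();

if (text.startsWith('/search')) {
  const query = text.replace(/^\/search\s*/, '');
  return { json: { route: 'search', query } };
}

// Documents carry file_id directly; photos come as an array of sizes,
// with the largest size last.
const file = msg.document || (msg.photo && msg.photo[msg.photo.length - 1]);
if (file) {
  return { json: { route: 'upload', fileId: file.file_id, from: msg.from?.username } };
}

return { json: { route: 'ignore' } };
```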
by Malte Sohns
Monitor and manage Docker containers from Telegram with AI log analysis

This workflow gives you a smart Telegram command center for your homelab. It lets you monitor Docker containers, get alerts the moment something fails, view logs, and restart services remotely. When you request logs, they're automatically analyzed by an LLM so you get a clear, structured breakdown instead of raw terminal output.

Who it's for

Anyone running a self-hosted environment who wants quick visibility and control without SSHing into a server. Perfect for homelab enthusiasts, self-hosters, and DevOps folks who want a lightweight on-call assistant.

What it does
- Receives container heartbeat alerts via webhook
- Sends Telegram notifications for status changes or failures
- Lets you request logs or restart services from chat (see the command-mapping sketch below)
- Analyzes logs with GPT and summarizes them clearly
- Supports manual "status" and "update all containers" commands

Requirements
- Telegram Bot API credentials
- SSH access to your Docker host

How to set it up
1. Create a Telegram bot and add its token as credentials
2. Enter your server SSH credentials in the SSH node
3. Deploy the workflow and set your webhook endpoint
4. Tailor the container names or heartbeat logic to your environment

Customize it
- Swap the SSH commands for Kubernetes if you're on k8s
- Change the AI model to another provider
- Extend with health checks or auto-healing logic
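As an illustration of how chat commands could be translated into commands for the SSH node, here is a hypothetical mapping step. The command set and the container-name guard are assumptions, not a copy of the workflow's own logic; adapt them to your containers.

```javascript
// Hypothetical mapping from a chat command to the docker CLI call the
// SSH node runs on the host. Command names are assumptions; adapt them
// to your own containers and add any commands you need.
const { command, container } = $json;   // e.g. { command: 'logs', container: 'nginx' }

const allowed = {
  status:  ()  => 'docker ps --format "{{.Names}}: {{.Status}}"',
  logs:    (c) => `docker logs --tail 100 ${c}`,
  restart: (c) => `docker restart ${c}`,
};

if (!allowed[command]) {
  throw new Error(`Unknown command: ${command}`);
}
// Basic guard so a container name can't inject extra shell commands
if (container && !/^[\w.-]+$/.test(container)) {
  throw new Error('Invalid container name');
}

return { json: { sshCommand: allowed[command](container) } };
```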
by SpaGreen Creative
Shopify Auto Send WhatsApp Thank-You Messages & Loyalty Coupon Using Rapiwa API

Who is this for?

This workflow is for Shopify store owners, marketers, and support teams who want to automatically message their high-value customers on WhatsApp when new discount codes are created.

What this workflow does
- Fetches customer data from Shopify
- Filters customers where total_spent > 5000
- Cleans phone numbers (removes non-digit characters) and normalizes them to an international format (see the sketch below)
- Verifies numbers via the Rapiwa API (verify-whatsapp endpoint)
- Sends coupon or thank-you messages to verified numbers via the Rapiwa send-message endpoint
- Logs each send attempt to Google Sheets with status and validity
- Uses batching (SplitInBatches) and Wait nodes to avoid rate limits

Key features
- Automated trigger: Shopify webhook (discounts/create) or manual trigger
- Targeted sending to high-value customers
- Pre-send verification to reduce failed sends
- Google Sheets logging and status updates
- Rate-limit protection using a Wait node

How to use: step-by-step setup

1) Prepare a Google Sheet
- Columns: name, number, status, validity, check (optional)
- Example row: Abdul Mannan | 8801322827799 | not sent | unverified | check

2) Configure n8n credentials
- Shopify: store access token (X-Shopify-Access-Token)
- Rapiwa: Bearer token (HTTP Bearer credential)
- Google Sheets: OAuth2 credentials and sheet access

3) Configure the nodes
- Webhook/Trigger: Shopify discounts/create or Manual Trigger
- HTTP Request (Shopify): /admin/api/<version>/customers.json
- Code node: filter customers with total_spent > 5000 and map fields
- SplitInBatches: batching/looping
- Code (clean number): waNoStr.replace(/\D/g, "")
- HTTP Request (Rapiwa verify): POST https://app.rapiwa.com/api/verify-whatsapp with body { number }
- IF node: check data.exists to decide the branch
- HTTP Request (Rapiwa send-message): POST https://app.rapiwa.com/api/send-message with body { number, message_type, message }
- Google Sheets Append/Update: write status and validity
- Wait: add a 2–5 second delay between sends

4) Test with a small batch
- Run manually with 2–5 records first and verify the results

Google Sheet column structure

A Google Sheet formatted like this ➤ Sample

| Name | Number | Status | Validity |
|------|--------|--------|----------|
| Abdul Mannan | 8801322827798 | not sent | unverified |
| Abdul Mannan | 8801322827799 | sent | verified |

Requirements
- Shopify Admin API access (store access token)
- Rapiwa account and Bearer token
- Google account and Google Sheet (OAuth2 setup)
- n8n instance (nodes used: HTTP Request, Code, SplitInBatches, IF, Google Sheets, Wait)

Customization ideas
- Adjust the filter (e.g., order count, customer tags)
- Use message templates to insert the name and coupon code per customer
- Add an SMS or email fallback for unverified numbers
- Send a run summary to the admin (Slack / email)
- Store logs in a database for deeper analysis

Important notes
- data.exists may be a boolean or a string — normalize it in a Code node before using it in an IF node
- Ensure the Google Sheets column names match exactly
- Store the Rapiwa and Shopify tokens securely in n8n credentials
- Start with small batches for testing and scale gradually

Useful Links
- **Dashboard:** https://app.rapiwa.com
- **Official Website:** https://rapiwa.com
- **Documentation:** https://docs.rapiwa.com

Support & Help
- **WhatsApp**: Chat on WhatsApp
- **Discord**: SpaGreen Community
- **Facebook Group**: SpaGreen Support
- **Website**: https://spagreen.net
- **Developer Portfolio**: Codecanyon SpaGreen
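A sketch of the number-cleaning step and the data.exists normalization described above, combined into one Code node. The Bangladesh country-code handling is an assumption drawn from the sample rows; adapt it to your market.

```javascript
// Sketch of the number-cleaning step and the data.exists normalization
// in one Code node. The Bangladesh country-code handling is an
// assumption drawn from the sample rows; adapt it to your market.
const raw = String($json.number || '');
const digits = raw.replace(/\D/g, '');                    // strip non-digit characters

// Normalize to international format, e.g. 01322... -> 8801322...
const number = digits.startsWith('880')
  ? digits
  : `880${digits.replace(/^0/, '')}`;

// data.exists may arrive as true/false or "true"/"false"
const exists = $json.data?.exists;
const verified = exists === true || exists === 'true';

return { json: { ...$json, number, verified } };
```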
by Zyte
Automated AI Web Scraper

This workflow uses the Zyte API to automatically detect and extract structured data from e-commerce sites, articles, job boards, and search engine results (SERP), with no custom CSS selectors required. It features a robust "Two-Phase Architecture" (Crawler + Scraper) that handles pagination loops, error retries, and data aggregation automatically, ensuring you get a clean CSV export even for large sites with thousands of pages. If you prefer to use your own parsing logic and just need raw data, it provides a "Manual Mode" for that as well.

Supported Modes
- **E-commerce / Product:** Extract prices, images, SKUs, and availability.
- **Articles / News / Forums:** Extract headlines, body text, authors, and dates.
- **Job Boards / Postings:** Extract salaries, locations, and descriptions.
- **SERP (Search Engine Results):** Extract search rankings, organic results, and snippets.
- **General Scraping:** Get raw browser HTML, HTTP response codes, network API traffic, or screenshots to parse yourself.

How it works
1. **Input:** You enter a URL and choose a goal (e.g., "Scrape all pages") via a user-friendly form.
2. **Smart Routing:** A logic engine automatically configures the correct extraction model for the target website.
3. **Two-Phase Extraction:** (Active only for "Scrape all pages") Phase 1 maps out all available URLs (crawling), and Phase 2 extracts the rich data (scraping), filtering out errors before saving to CSV.

Set up steps
1. Get your API key: You need a free Zyte API key to run the AI extraction. Get it here.
2. Run: Open the Form view, paste your key, select your target website, and hit Submit.
3. Export: The workflow will process the data and output a downloadable CSV file.

Resources
- Zyte API Documentation
- Get Help (with API errors & extraction logic)
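For orientation, here is a rough sketch of the kind of request the workflow sends to Zyte's /v1/extract endpoint in product mode. The body is simplified and the exact options the workflow sets (rendering, pagination hints) are assumptions; see the Zyte API documentation for the full schema.

```javascript
// Rough sketch of a product-mode call to Zyte's /v1/extract endpoint,
// approximating what the workflow's HTTP Request node sends. The body
// is simplified; see the Zyte API docs for the full option schema.
const apiKey = 'YOUR_ZYTE_API_KEY';   // placeholder

const response = await fetch('https://api.zyte.com/v1/extract', {
  method: 'POST',
  headers: {
    // Zyte uses HTTP Basic auth with the API key as username, empty password
    Authorization: 'Basic ' + Buffer.from(`${apiKey}:`).toString('base64'),
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com/product/123',   // placeholder target
    product: true,                            // ask the AI model for product fields
  }),
});

const { product } = await response.json();
return [{ json: product }];   // name, price, sku, availability, ...
```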