by WeblineIndia
## Advanced Equipment Health Monitor with MS Teams Integration (n8n | API | Google Sheets | MS Teams)

This n8n workflow automatically monitors equipment health by fetching real-time metrics such as temperature, voltage, and operational status. If any of these parameters crosses a critical threshold, an alert is instantly sent to a Microsoft Teams channel and the event is logged in Google Sheets. The workflow runs every 15 minutes by default.

### Quick Implementation Steps

1. Import the workflow JSON into your n8n instance.
2. Open the "Set Config" node and update:
   - API endpoint
   - Teams webhook URL
   - Threshold values
   - Google Sheet ID
3. Activate the workflow to start receiving alerts every 15 minutes.

### Who It's For

- Renewable energy site operators (solar, wind)
- Plant maintenance and operations teams
- Remote infrastructure monitoring services
- IoT-integrated energy platforms
- Enterprise environments using Microsoft Teams

### Requirements

| Tool | Purpose |
|------|---------|
| n8n instance | To run and schedule the automation |
| HTTP API | Access to your equipment or IoT platform health API |
| Microsoft Teams | Incoming webhook URL configured |
| Google Sheets | Logging and analytics |
| SMTP (optional) | For email-based alternatives or expansions |

### What It Does

- **Runs every 15 minutes** to check the latest equipment metrics.
- **Compares values** (temperature, voltage, status) against configured thresholds.
- **Triggers a Microsoft Teams message** when a threshold is breached.
- **Appends the alert data** to a Google Sheet for logging and review.

### Workflow Components

- **Set node:** Configures thresholds, endpoints, webhook URL, and Sheet ID.
- **Cron node:** Triggers the check every 15 minutes.
- **HTTP Request node:** Pulls data from your equipment health monitoring API.
- **IF node:** Evaluates whether values are within or outside defined limits.
- **MS Teams alert node:** Sends structured alerts using a Teams incoming webhook.
- **Google Sheets node:** Logs alert details for recordkeeping and analytics.
### How To Set Up (Step by Step)

1. **Import workflow:** In n8n, click Import and upload the provided `.json` file.
2. **Update configurations:** Open the Set Config node and replace the placeholder values:
   - `apiEndpoint`: URL to fetch equipment data.
   - `teamsWebhookUrl`: Your MS Teams channel webhook.
   - `temperatureThreshold`: Example = 80
   - `voltageThreshold`: Example = 400
   - `googleSheetId`: Google Sheet ID (must be shared with the n8n service account).
3. **Check webhook integration:** Ensure your MS Teams webhook is properly authorized and points to a live channel.
4. **Run and monitor:** Enable the workflow, view logs/alerts, and adjust thresholds as needed.

### How To Customize

| Customization | How |
|---------------|-----|
| Add more parameters (humidity, pressure) | Extend the HTTP + IF node conditions |
| Change alert frequency | Edit the Cron node |
| Use Slack or email instead of Teams | Replace the MS Teams node with a Slack or Email node |
| Add PDF report generation | Use an HTML-to-PDF node and email the report |
| Export to a database | Add a PostgreSQL or MySQL node instead of Google Sheets |

### Add-ons (Advanced)

| Add-on | Description |
|--------|-------------|
| Auto-ticketing | Auto-create issues in Jira, Trello, or ClickUp for serious faults |
| Dashboard sync | Send real-time logs to BigQuery or InfluxDB |
| Predictive alerts | Use machine learning APIs to flag anomalies |
| Daily digest | Compile all incidents into a daily summary email or Teams post |
| Mobile alert | Integrate Twilio for SMS or WhatsApp notifications |

### Example Use Cases

- Monitor solar inverter health for overheating or voltage drops.
- Alert field engineers via Teams when a wind turbine sensor fails.
- Log and visualize hardware issues for weekly analytics.
- Automate SLA compliance tracking through timely notifications.
- Ensure distributed infrastructure (e.g., substations) stays within operational range.
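The threshold comparison performed by the IF node can be sketched as a single Code-node function. This is a minimal sketch, not the workflow's actual code: the metric field names (`temperature`, `voltage`, `status`) and the `'OK'` status value are assumptions based on the metrics listed above, so adapt them to your API's real payload.

```javascript
// Sketch of the threshold evaluation described above.
// Field names and the 'OK' status value are assumptions.
const config = { temperatureThreshold: 80, voltageThreshold: 400 };

function checkHealth(metrics, cfg) {
  const breaches = [];
  if (metrics.temperature > cfg.temperatureThreshold) {
    breaches.push(`Temperature ${metrics.temperature} exceeds ${cfg.temperatureThreshold}`);
  }
  if (metrics.voltage > cfg.voltageThreshold) {
    breaches.push(`Voltage ${metrics.voltage} exceeds ${cfg.voltageThreshold}`);
  }
  if (metrics.status !== 'OK') {
    breaches.push(`Status is ${metrics.status}`);
  }
  return { alert: breaches.length > 0, breaches };
}

const result = checkHealth({ temperature: 92, voltage: 390, status: 'OK' }, config);
// result.alert === true; breaches lists only the temperature violation
```

In n8n, the returned `breaches` array can feed the Teams message body directly, so the alert names every violated limit instead of a bare "threshold breached".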
### Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No Teams alert | Incorrect webhook URL or formatting | Recheck the Teams webhook and payload |
| Workflow not triggering | Cron node misconfigured | Ensure it's set to run every 15 minutes and the workflow is active |
| Google Sheet not updating | Sheet ID is wrong or not shared | Share the Sheet with your n8n Google service account |
| No data from API | Endpoint URL is down or wrong | Test the endpoint manually with Postman or a browser |

### Need Assistance?

Need help tailoring this to your exact equipment type or expanding the workflow? Contact WeblineIndia, expert automation partners for renewable energy, infrastructure, and enterprise workflows.
by Nguyen Thieu Toan
## TikTok Video Downloader (No Watermark) - Telegram Bot

> Download TikTok videos instantly without watermarks via Telegram.
> Fast, reliable, and user-friendly automated workflow.

### What This Workflow Does

This automation turns your Telegram bot into a TikTok video downloader. Simply send any TikTok link, and the bot will:

- Validate the URL automatically
- Extract the video without a watermark
- Display video statistics (views, likes, author)
- Send the clean video file directly to you

No ads. No watermarks. Pure automation.

### Key Features

| Feature | Description |
|---------|-------------|
| Smart validation | Automatically checks whether the link is a valid TikTok URL |
| Real-time feedback | Keeps users informed with status messages at every step |
| Error handling | Catches and explains errors in user-friendly language |
| Video analytics | Shows author name, view count, and likes |
| High quality | Downloads original video quality without the TikTok watermark |
| Fast processing | Optimized HTTP requests with proper headers and timeouts |

### How It Works

Workflow flow: user sends a TikTok link → URL validation (invalid links get an error message) → send "Processing..." status → fetch the TikTok page HTML → extract the video URL from the page data → download the video file (no watermark) → send the video to the user with stats.

Technical process:

1. **Trigger reception:** Telegram webhook receives the user message.
2. **URL validation:** An IF node checks for `tiktok.com` or `vm.tiktok.com` domains.
3. **User feedback:** Bot sends an "uploading video..."
chat action plus a status message.
4. **Variable configuration:** Stores the chat ID and video URL for later use.
5. **HTML fetching:** HTTP request to TikTok with browser-like headers.
6. **Data extraction:** JavaScript code parses the `UNIVERSAL_DATA_FOR_REHYDRATION` JSON.
7. **Video download:** HTTP request with proper cookies and referrer headers.
8. **Delivery:** Telegram sends the video file with a formatted caption including stats.

### Error Handling Strategy

Each critical node (HTTP requests, code execution) has error output enabled:

- **On success:** Continues to the next processing step.
- **On error:** Routes to the "Format Error" → "Send Error Message" path.
- **User experience:** Clear, actionable error messages instead of silent failures.

### Set-Up Steps

Prerequisites:

- n8n instance (v1.116.0 or higher)
- Telegram bot token (create one via @BotFather)
- Basic understanding of n8n workflows

**Step 1: Import the workflow**

1. Copy the workflow JSON.
2. In n8n, click "+ Add workflow" → "Import from JSON".
3. Paste the JSON and click "Import".

**Step 2: Configure Telegram credentials**

1. Click any Telegram node.
2. Select "Create New Credential" in the Credentials dropdown.
3. Enter your bot token from @BotFather and click "Save".
4. All Telegram nodes will automatically use this credential.

**Step 3: Enable error handling (critical)**

You must manually configure error outputs on three nodes: "Get TikTok Page HTML", "Extract Video URL", and "Download Video File". For each one:

1. Click the node, then open the Settings tab.
2. Set "On Error" to "Continue With Error Output".
3. Click Save.

> Why? n8n cannot import error-handling settings via JSON. This manual step ensures errors are caught instead of crashing the workflow.
**Step 4: Activate the workflow**

Click the "Active" toggle in the top-right corner. The workflow is now listening for Telegram messages.

**Step 5: Test your bot**

1. Open Telegram and find your bot.
2. Send a TikTok link such as `https://www.tiktok.com/@user/video/123456789`.
3. Watch it work.

### Testing Scenarios

| Test Case | Input | Expected Output |
|-----------|-------|-----------------|
| Valid video | Working TikTok link | Video file + stats caption |
| Invalid URL | `hello world` | "Please send a valid TikTok link" |
| Deleted video | Link to a deleted video | "Video data not found" error |
| Private video | Private account video | "Video may be private" error |
| Short link | `https://vm.tiktok.com/abc` | Resolves and downloads |

### Customization Ideas

- **Change language:** Edit the text in the Telegram nodes to translate messages, e.g. "Downloading video..." → "Đang tải video...".
- **Add video compression:** Insert a compression node between "Download Video File" and "Send Video to User" for smaller files.
- **Store statistics:** Add a Google Sheets node after "Extract Video URL" to log the video URL, author, views/likes, and download timestamp.
- **Multi-platform support:** Duplicate the workflow and modify the URL validation and extraction logic for Instagram, YouTube Shorts, etc.
- **Rate limiting:** Add a Wait node (2 seconds) before "Get TikTok Page HTML" to avoid IP bans.
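For reference, the validation and extraction steps described under "How It Works" can be sketched in a Code node roughly as follows. This is a hedged sketch, not the workflow's exact code: TikTok changes its page markup regularly (as the troubleshooting section notes), so the script-tag `id` and the JSON field path below are assumptions you must verify against the live page HTML.

```javascript
// Sketch of URL validation and video-URL extraction.
// The script-tag id and JSON path are assumptions -- verify
// them against a real TikTok page before relying on this.
function isTikTokUrl(text) {
  return /https?:\/\/(www\.)?(vm\.)?tiktok\.com\/\S+/i.test(text);
}

function extractVideoUrl(html) {
  // TikTok embeds page state as JSON inside a <script> tag
  // (the UNIVERSAL_DATA_FOR_REHYDRATION blob mentioned above).
  const match = html.match(
    /<script id="__UNIVERSAL_DATA_FOR_REHYDRATION__"[^>]*>(.*?)<\/script>/s
  );
  if (!match) throw new Error('Video data not found');
  const data = JSON.parse(match[1]);
  // Hypothetical path to the watermark-free play address.
  const detail = data?.__DEFAULT_SCOPE__?.['webapp.video-detail'];
  const url = detail?.itemInfo?.itemStruct?.video?.playAddr;
  if (!url) throw new Error('Video may be private or deleted');
  return url;
}
```

Throwing on a missing match is what lets the "Continue With Error Output" setting route failures to the "Format Error" path instead of crashing the run.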
### Troubleshooting

**Problem: Bot doesn't respond**

- Check that the workflow is Active.
- Verify the Telegram credentials are correct.
- Check the Executions tab for errors.

**Problem: "Video data not found" error**

- TikTok may have changed its HTML structure.
- Update the regex in the "Extract Video URL" node.
- Check whether the video is actually deleted or private.

**Problem: Download fails**

- Ensure "On Error" is set to "Continue With Error Output".
- Check whether your IP is blocked by TikTok (use a VPN).
- Verify the headers in the "Download Video File" node.

**Problem: Error messages not appearing**

- Double-check the error output connections (red dots).
- Make sure the "Format Error" node references the correct variables.
- Test by intentionally breaking a node (e.g., an invalid URL).

### Performance Metrics

| Metric | Value |
|--------|-------|
| Average processing time | 5-10 seconds |
| Success rate | ~95% (valid public videos) |
| Max video size | Limited by Telegram (50 MB) |
| Concurrent users | Unlimited (webhook-based) |

### Privacy & Security

- **No data storage:** Videos are streamed directly to users, not stored.
- **No logging:** User IDs and links are processed in memory only.
- **Secure headers:** Mimics browser requests to avoid detection.
- **Error sanitization:** Sensitive data is filtered from error messages.

### Technical Stack

- **n8n version:** 1.116.0+
- **Node types used:** telegramTrigger (v1.2), telegram (v1.2), if (v2.2), set (v3.4), httpRequest (v4.2), code (v2), stickyNote (v1)
- **External APIs:** TikTok CDN, Telegram Bot API

### Learning Resources

Want to understand the workflow better? Look into these concepts: n8n error handling, the Telegram Bot API, HTTP request headers, and the JavaScript Code node.

### Contributing

Found a bug? Have an improvement idea?
1. Test your changes thoroughly.
2. Document any new nodes or logic.
3. Share your enhanced workflow with the community.
4. Credit the original author (see below).

### About the Author

**Nguyen Thieu Toan**, n8n automation specialist and workflow creator.

- Website: nguyenthieutoan.com
- Contact: available on the website
- Specialty: building production-ready n8n workflows for real-world automation

> "I create workflows that just work. No fluff, no complexity, just reliable automation that saves time and solves problems."

Other workflows by Nguyen Thieu Toan:

- Spotify to YouTube Playlist Converter
- Instagram Media Downloader Bot
- Multi-Channel Social Media Scheduler
- Automated Content Repurposing Pipeline

Visit nguyenthieutoan.com for more automation workflows and tutorials.

### License & Attribution

This workflow is provided free of charge for personal and commercial use.

Required attribution:

- When sharing or modifying: include the author name and website link.
- When showcasing: tag @nguyenthieutoan or link to nguyenthieutoan.com.

Not required but appreciated:

- Star the workflow on the n8n community.
- Share your success story.
- Suggest improvements.

### Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.0 | 2025-10-22 | Added comprehensive error handling; improved user feedback; added video statistics; English language support; enhanced documentation |
| 1.0 | 2025-10-21 | Initial release; basic download functionality |

### Support This Work

If this workflow saved you time: star it on the n8n community, share it with fellow automation enthusiasts, leave feedback on nguyenthieutoan.com, or buy me a coffee (link on the website).

Happy automating!

*Last updated: October 22, 2025. Workflow: TikTok Video Downloader (No Watermark) - Telegram Bot. Author: Nguyen Thieu Toan.*
by Ahmed Sherif
## AI-Powered Lead Scraping Automation Using Apify Scraper and Gemini Filtering to Google Sheets

This is a fully automated, end-to-end pipeline designed to solve the challenge of inconsistent, low-quality lead data from large-scale scraping operations. The system programmatically fetches raw lead information from sources such as Apollo or via Apify, processes it through an intelligent validation layer, and delivers a clean, deduplicated, ready-to-use dataset directly into Google Sheets.

By integrating Google Gemini for data cleansing, it moves beyond simple presence checks to enforce data hygiene and standardization, ensuring that sales teams only engage with properly formatted and complete leads. This automation eliminates hours of manual data cleaning, accelerates the path from lead acquisition to outreach, and significantly improves the integrity of the sales pipeline.

### Features

- **Batch processing:** Systematically processes up to 1,000 leads per batch and automatically loops through the entire dataset, ensuring stable, memory-efficient operation even with tens of thousands of scraped contacts.
- **AI validation:** Google Gemini acts as a data-quality gatekeeper. It validates the presence and plausible format of critical fields (e.g., first name, company name) and cleanses data by correcting common formatting issues.
- **Smart deduplication:** Before appending a new lead, the system cross-references its email address against the entire Google Sheet to prevent duplicate entries, ensuring a single source of truth.
- **Auto lead IDs:** Generates a unique, sequential ID for every new lead in the format `AP-DDMMYY-xxxx`, providing a consistent reference key for tracking and CRM integration.
- **Data quality reports:** Delivers real-time operational visibility by sending a concise summary to a Telegram channel after each batch, detailing success, warning, and error counts.
- **Rate limiting:** Incorporates a 30-second delay between batches to respect Google Sheets API limits, preventing throttling and ensuring reliable, uninterrupted execution.

### How It Works

1. The workflow is initiated by an external trigger, such as a webhook, carrying the raw scraped-data payload.
2. It authenticates and fetches the complete list of leads from the Apify or Apollo API endpoint.
3. The full list is automatically partitioned into manageable batches of 1,000 leads for efficient processing.
4. Each lead is individually passed to the Gemini AI agent, which validates that required fields such as name, email, and company are present and correctly formatted.
5. Validated leads are assigned a unique lead ID, and all data fields are standardized for consistency.
6. The system performs a lookup in the target Google Sheet to confirm the lead's email does not already exist.
7. Clean, unique leads are appended as new rows to the designated spreadsheet.
8. A completion notice is sent via the Telegram bot, summarizing the batch results with clear statistics.

### Requirements

- Apify/Apollo API access credentials.
- A Google Cloud project with OAuth2 credentials for Google Sheets API access.
- A configured Telegram bot with its API token and a target chat ID.
- A Google Gemini API key for data validation and cleansing.

This system is ideal for sales and marketing operations teams managing high-volume lead-generation campaigns, providing automated data-quality assurance and accelerating pipeline development.
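The lead-ID and deduplication steps above can be sketched as two small Code-node functions. This is a minimal sketch under stated assumptions: the `email` field name and the shape of the existing rows (as returned by a Google Sheets lookup) are assumptions, and the sequence counter would in practice come from the sheet's current row count.

```javascript
// Sketch of the AP-DDMMYY-xxxx lead-ID generator and the email
// dedup check described above. Row shape is an assumption.
function makeLeadId(sequence, date = new Date()) {
  const dd = String(date.getDate()).padStart(2, '0');
  const mm = String(date.getMonth() + 1).padStart(2, '0');
  const yy = String(date.getFullYear()).slice(-2);
  return `AP-${dd}${mm}${yy}-${String(sequence).padStart(4, '0')}`;
}

function isDuplicate(lead, existingRows) {
  const email = (lead.email || '').trim().toLowerCase();
  return existingRows.some(r => (r.email || '').trim().toLowerCase() === email);
}

const id = makeLeadId(7, new Date(2025, 0, 15)); // 15 Jan 2025
// id === 'AP-150125-0007'
```

Normalizing emails to lowercase before comparison is what keeps `Jane@Example.com` and `jane@example.com` from producing two rows.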
by Jitesh Dugar
Automate your entire Instagram carousel publishing pipeline from a single webhook call. This workflow receives a product-collection payload, loops through each slide image, uploads every asset via Upload to URL to generate stable public CDN URLs, creates an individual Instagram child media container per slide, assembles them into a parent CAROUSEL container, and publishes the post. Built for product drops, lookbooks, and multi-item announcements.

### What This Workflow Does

**Intake and validation**

- **Webhook - Receive Payload:** Accepts a POST request with your full collection data, including the slide array, caption copy, hashtags, and product details. Configured to respond inline once the full workflow completes.
- **Code - Validate Payload:** Enforces Instagram's carousel constraints before touching any API. Checks that `slides` is an array of 2 to 10 items, that every `imageUrl` is a publicly accessible HTTPS URL, and that required environment variables are present. Automatically trims to 10 slides if the array exceeds the Instagram maximum. Fails fast with a descriptive error message.
- **Code - Build Caption:** Assembles a structured, swipe-optimized caption: hook line, main copy block, numbered product list with optional prices, CTA line, and hashtag block. Safely truncated to Instagram's 2,200-character limit.

**Per-slide loop via Split In Batches**

- **Split In Batches:** Loops over the slides array one item at a time using `batchSize` 1. The loop branch feeds into image processing; the done branch feeds into carousel assembly once all slides are processed.
- **HTTP - Fetch Slide Image:** Downloads each slide's `imageUrl` as a binary file using n8n's file response format. Works with any publicly accessible image CDN.
- **Upload to URL:** The mandatory CDN bridge. Instagram's child-container endpoint requires a direct public HTTPS image URL per slide; it rejects binary payloads and base64 strings. This node uploads each image binary and returns a stable public URL. Runs once per slide inside the loop.
- **Code - Create Child Container:** Calls the Instagram Graph API to create an individual child media container per slide with `is_carousel_item` set to true. Returns the child container ID for aggregation.

**Carousel assembly and publishing**

- **Code - Aggregate Child IDs:** Collects all child container IDs produced across loop iterations using `$input.all()`, joining them into a comma-separated string for the carousel parent API call. Re-attaches the caption and metadata from the earlier Code node via a cross-node reference.
- **HTTP - Create Carousel Container:** POSTs to the Instagram Graph API `/media` endpoint with `media_type` CAROUSEL and the full children ID string. The caption is set only on the parent container, not on individual children.
- **Wait 8s:** A processing buffer before the publish call, slightly longer than for single-image posts to account for Instagram validating multiple child assets.
- **HTTP - Publish Carousel:** Calls `/media_publish` with the carousel container ID and returns the live Instagram post ID.
- **HTTP - Fetch Post Metadata:** Retrieves the permalink, timestamp, and `media_type` for the published post so the response payload contains the direct Instagram URL.

**Notification and response**

- **Slack - Notify Team:** Sends the collection name, slide count, permalink, and publish timestamp to your configured Slack channel.
- **Respond to Webhook:** Returns a structured JSON success payload with `mediaId`, `permalink`, `slideCount`, `collectionName`, and a `publishedAt` timestamp.

### Key Features

- **True per-slide loop architecture:** Uses Split In Batches wired back on itself so each image is fetched, uploaded, and registered as a child container independently. No single monolithic Code node handles all slides at once.
- **Upload to URL runs inside the loop:** Each slide binary is uploaded to a CDN before its child container is created, which is the sequence the Instagram API requires.
- **Fail-fast validation:** The validation node checks every constraint before any API call is made.
Bad payloads are rejected immediately with actionable error messages.

- **Caption auto-truncation:** The caption builder hard-truncates at 2,200 characters so Instagram never rejects the post body, regardless of how long the product list is.
- **Permalink in response:** The workflow fetches the live post metadata after publishing, so the webhook response and Slack notification both include the direct Instagram post URL.

### What You Will Need

Credentials:

- **Upload to URL:** Configured in n8n, used once per slide inside the loop.
- **Instagram Graph API:** A Business or Creator account access token.
- **Slack OAuth2:** For team notifications.

### Perfect For

- **E-commerce brands:** Product-collection drops and new-arrival announcements with per-item pricing in the caption.
- **Fashion and apparel:** Lookbook carousels where each slide is a separate outfit or piece.
- **Agencies:** Managing carousel publishing for multiple clients via a single webhook endpoint with different payloads.
- **SaaS and digital products:** Feature-showcase carousels triggered from a CMS or internal tool.
- **Content teams:** Any workflow where carousel content is generated programmatically and needs to be published without manual intervention.
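The fail-fast validation and caption auto-truncation described above can be sketched as follows. The slide and caption limits are Instagram's documented constraints (2-10 carousel items, 2,200 caption characters); the payload field names mirror the description above but are otherwise assumptions about your own payload shape.

```javascript
// Sketch of the "Code - Validate Payload" and caption-truncation
// logic described above. Field names are assumptions.
const MIN_SLIDES = 2;
const MAX_SLIDES = 10;
const CAPTION_LIMIT = 2200;

function validateSlides(slides) {
  if (!Array.isArray(slides) || slides.length < MIN_SLIDES) {
    throw new Error('slides must be an array of 2 to 10 items');
  }
  const trimmed = slides.slice(0, MAX_SLIDES); // auto-trim past the max
  for (const s of trimmed) {
    if (!/^https:\/\//.test(s.imageUrl || '')) {
      throw new Error(`imageUrl must be a public HTTPS URL: ${s.imageUrl}`);
    }
  }
  return trimmed;
}

function truncateCaption(caption) {
  return caption.length > CAPTION_LIMIT
    ? caption.slice(0, CAPTION_LIMIT - 3) + '...'
    : caption;
}
```

Validating everything before the first Graph API call is what keeps a bad payload from leaving orphaned child containers behind on Instagram's side.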
by Oneclick AI Squad
This n8n workflow receives files sent in a Telegram chat, uploads them to Google Drive, extracts text using OCR (for images and PDFs), and stores the extracted content in Airtable for quick search and retrieval. Users can later search through their documents with a Telegram `/search` command.

### Key Features

- Accepts images and documents from Telegram
- Uploads files to Google Drive automatically
- Detects the file type and runs OCR when eligible
- Extracts text from images and PDFs via Google Vision
- Stores file metadata and text in Airtable
- Searches documents via the `/search` command in Telegram
- Sends result previews and file links
- Includes error handling and user notifications

### Use Cases

- Personal document vault with search
- Team knowledge-filing system
- Receipt and invoice OCR archive
- Legal document storage and retrieval
- Research papers and notes indexing
- Company file inbox for an AI knowledge base

### Workflow Steps

| Step | Action | Description |
|------|--------|-------------|
| 1 | Telegram Trigger | Detects incoming docs/images or a `/search` command |
| 2 | Filter File or Search | Routes based on whether the message has a file or a search command |
| 3 | Extract Metadata | Reads file info such as name, MIME type, and user |
| 4 | Download File | Downloads the file via the Telegram API |
| 5 | Upload to Drive | Saves the file in Google Drive |
| 6 | OCR Check | Determines whether the file supports OCR |
| 7 | Google OCR | Runs OCR on images/PDFs |
| 8 | Extract Text | Pulls the text output from OCR |
| 9 | Merge OCR Text | Combines file data and text |
| 10 | Save to Airtable | Indexes metadata and text |
| 11 | Success Reply | Sends a link and success message |
| 12 | /search Flow | Parses the search query |
| 13 | Airtable Search | Full-text search for records |
| 14 | Send Results | Sends matches to Telegram |
| 15 | Error Handler | Notifies the user on failure |

### Input Formats

- File messages: images, PDFs, documents
- Search command: `/search keyword` (example: `/search invoice`)

### Output

After
upload, the bot replies "File saved & indexed successfully!" along with the Drive link. After a search, it returns a structured result: file name, preview text snippet, and Google Drive link.

### Data Stored in Airtable

| Field | Description |
|-------|-------------|
| File Name | Original name |
| File Link | Google Drive link |
| MIME Type | File type |
| Telegram User | Sender info |
| OCR Text | Extracted searchable text |
| Uploaded Date | Timestamp |

### Technical Requirements

- Telegram bot token
- Google Drive API connection
- Google Vision API key
- Airtable API key and table

### Benefits

- Automatically organizes Telegram files
- Makes PDFs and images searchable
- Saves manual sorting and indexing time
- AI-ready data storage (future LLM integration)
- Fast search experience right in Telegram

### Enhancement Ideas

- Add Whisper for voice-message transcription
- Add ChatGPT summarization for large docs
- Build a dashboard for uploaded files
- Auto-tag documents (invoice, ID, receipt, etc.)
- Multi-language OCR support

### Status

Ready for production: handles images, PDFs, and other files with end-to-end automation. Optional: add more AI enrichment later.
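The routing step (step 2 in the table above) that decides between the file path and the `/search` path can be sketched as a small Code-node function. The message shape follows the Telegram Bot API (`document`, `photo`, `text` fields); the route names are illustrative assumptions.

```javascript
// Sketch of "Filter File or Search": route an incoming Telegram
// message to the file pipeline or the /search pipeline.
function routeMessage(message) {
  // Telegram attaches `document` for files and a `photo` size
  // array for images, per the Bot API message object.
  if (message.document || (message.photo && message.photo.length)) {
    return { route: 'file' };
  }
  const text = (message.text || '').trim();
  const match = text.match(/^\/search\s+(.+)/i);
  if (match) {
    return { route: 'search', query: match[1].trim() };
  }
  return { route: 'ignore' };
}
```

The extracted `query` then feeds the Airtable full-text search, so `/search invoice` looks up "invoice" across the stored OCR text.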
by Open Paws
### Who It's For

ESG analysts, investors, procurement teams, activists, and sustainability professionals who need comprehensive, objective assessments of companies' environmental impact and animal-welfare policies.

Perfect for:

- Due diligence and investment screening
- Supplier evaluation and ethical sourcing
- Compliance reporting and ESG benchmarking
- Consumer guidance for ethical purchasing decisions

### How It Works

This workflow automates the entire research and analysis process for a comprehensive sustainability and animal-welfare assessment. Simply input a company name, and the system handles everything:

1. **Multi-source research:** Calls a specialized subworkflow that queries the Open Paws database for animal-welfare data, web scraping for sustainability reports, search engines for recent developments, and social media monitoring for real-time insights.
2. **Parallel AI analysis:** Two specialized chains process the data simultaneously: **structured scoring** with percentages and letter grades (A+ to D), and **detailed HTML reports** with narrative analysis and insights.
3. **Complete assessment:** The final output combines both formats for actionable intelligence on environmental policies and carbon footprint, animal-welfare practices and ethical sourcing, and vegan accommodation and plant-based initiatives.

### Requirements

- The research subworkflow **Multi-Tool Research Agent for Animal Advocacy with OpenRouter, Serper & Open Paws DB**, downloaded and saved in your n8n instance
- An API key for OpenRouter or another AI service provider

### How To Set Up

1. **Install the research subworkflow:** Download the Multi-Tool Research Agent for Animal Advocacy with OpenRouter, Serper & Open Paws DB and import it into your n8n instance.
2. **Configure API keys:** Set up your AI service credentials in the LLM nodes.
3. **Link the subworkflow:** Connect the Research Agent node to reference your installed research subworkflow.
4. **Test the connection:** Verify that the research tools and databases are accessible.
5. **Run a test:** Input a well-known company name to
validate the complete pipeline.

### How To Customize the Workflow

- **Scoring weights:** Adjust the percentage weightings for environmental impact, animal welfare, and vegan accommodation.
- **Research sources:** Modify the subworkflow to include additional databases or exclude certain sources.
- **Output format:** Customize the HTML report template or JSON schema structure.
- **Grading scale:** Change the letter-grade thresholds (A+, A, B+, etc.) in the scoring logic.
- **Assessment focus:** Adapt the prompts to emphasize the sustainability or animal-welfare aspects most relevant to your industry.
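The scoring weights and grading scale mentioned above can be sketched like this. Both the weight split and the grade cutoffs are illustrative assumptions, exactly the values the customization notes say you should tune.

```javascript
// Illustrative sketch of weighted scoring and letter grading.
// Weights and thresholds are assumptions -- tune them as the
// "Grading scale" and "Scoring weights" notes above suggest.
function overallScore(
  scores,
  weights = { environment: 0.4, animalWelfare: 0.4, veganAccommodation: 0.2 }
) {
  return (
    scores.environment * weights.environment +
    scores.animalWelfare * weights.animalWelfare +
    scores.veganAccommodation * weights.veganAccommodation
  );
}

function toLetterGrade(percent) {
  if (percent >= 95) return 'A+';
  if (percent >= 85) return 'A';
  if (percent >= 75) return 'B+';
  if (percent >= 65) return 'B';
  if (percent >= 50) return 'C';
  return 'D';
}
```

Keeping the weights as a parameter means an industry-specific assessment (say, heavier animal-welfare weighting for food companies) is a one-line change rather than a prompt rewrite.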
by Malte Sohns
## Monitor and Manage Docker Containers from Telegram with AI Log Analysis

This workflow gives you a smart Telegram command center for your homelab. It lets you monitor Docker containers, get alerts the moment something fails, view logs, and restart services remotely. When you request logs, they're automatically analyzed by an LLM, so you get a clear, structured breakdown instead of raw terminal output.

### Who It's For

Anyone running a self-hosted environment who wants quick visibility and control without SSHing into a server. Perfect for homelab enthusiasts, self-hosters, and DevOps folks who want a lightweight on-call assistant.

### What It Does

- Receives container heartbeat alerts via webhook
- Sends Telegram notifications for status changes or failures
- Lets you request logs or restart services from chat
- Analyzes logs with GPT and summarizes them clearly
- Supports manual "status" and "update all containers" commands

### Requirements

- Telegram Bot API credentials
- SSH access to your Docker host

### How To Set It Up

1. Create a Telegram bot and add its token as credentials.
2. Enter your server's SSH credentials in the SSH node.
3. Deploy the workflow and set your webhook endpoint.
4. Tailor the container names and heartbeat logic to your environment.

### Customize It

- Swap the SSH commands for Kubernetes equivalents if you're on k8s.
- Change the AI model to another provider.
- Extend it with health checks or auto-healing logic.
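The "status" command above maps naturally to running `docker ps` over SSH and reformatting the output for Telegram. A minimal sketch of the parsing half, assuming the SSH node runs `docker ps -a --format "{{.Names}}\t{{.Status}}"` (a standard Docker CLI format string) and that the message layout is up to you:

```javascript
// Sketch: turn tab-separated `docker ps` output into a Telegram
// status message. The format string and labels are assumptions.
function parseDockerPs(stdout) {
  return stdout
    .trim()
    .split('\n')
    .filter(Boolean)
    .map(line => {
      const [name, status] = line.split('\t');
      // Docker reports running containers with a status like "Up 3 hours".
      return { name, status, healthy: /^Up/.test(status || '') };
    });
}

function formatStatusMessage(containers) {
  return containers
    .map(c => `${c.healthy ? '[OK]' : '[DOWN]'} ${c.name}: ${c.status}`)
    .join('\n');
}
```

In the workflow, the SSH node's stdout feeds `parseDockerPs`, and the formatted string becomes the Telegram reply body.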
by SpaGreen Creative
## Shopify: Auto-Send WhatsApp Thank-You Messages & Loyalty Coupons Using the Rapiwa API

### Who is this for?

This workflow is for Shopify store owners, marketers, and support teams who want to automatically message their high-value customers on WhatsApp when new discount codes are created.

### What this workflow does

- Fetches customer data from Shopify
- Filters customers where `total_spent > 5000`
- Cleans phone numbers (removes non-digit characters) and normalizes them to an international format
- Verifies numbers via the Rapiwa API (`verify-whatsapp` endpoint)
- Sends coupon or thank-you messages to verified numbers via the Rapiwa `send-message` endpoint
- Logs each send attempt to Google Sheets with status and validity
- Uses batching (SplitInBatches) and Wait nodes to avoid rate limits

### Key features

- Automated trigger: Shopify webhook (`discounts/create`) or manual trigger
- Targeted sending to high-value customers
- Pre-send verification to reduce failed sends
- Google Sheets logging and status updates
- Rate-limit protection using a Wait node

### How to use
**Step-by-step setup**

1. **Prepare a Google Sheet.** Columns: name, number, status, validity, check (optional). Example row: `Abdul Mannan | 8801322827799 | not sent | unverified | check`
2. **Configure n8n credentials.**
   - Shopify: store access token (`X-Shopify-Access-Token`)
   - Rapiwa: Bearer token (HTTP Bearer credential)
   - Google Sheets: OAuth2 credentials and sheet access
3. **Configure the nodes.**
   - Webhook/Trigger: Shopify `discounts/create` or Manual Trigger
   - HTTP Request (Shopify): `/admin/api/<version>/customers.json`
   - Code node: filter customers with `total_spent > 5000` and map fields
   - SplitInBatches: batching/looping
   - Code (clean number): `waNoStr.replace(/\D/g, "")`
   - HTTP Request (Rapiwa verify): POST `https://app.rapiwa.com/api/verify-whatsapp` with body `{ number }`
   - IF node: check `data.exists` to decide the branch
   - HTTP Request (Rapiwa send-message): POST `https://app.rapiwa.com/api/send-message` with body `{ number, message_type, message }`
   - Google Sheets Append/Update: write status and validity
   - Wait: add a 2-5 second delay between sends
4. **Test with a small batch.** Run manually with 2-5 records first and verify the results.

### Google Sheet column structure

A Google Sheet formatted like this sample:

| Name | Number | Status | Validity |
|------|--------|--------|----------|
| Abdul Mannan | 8801322827798 | not sent | unverified |
| Abdul Mannan | 8801322827799 | sent | verified |

### Requirements

- Shopify Admin API access (store access token)
- Rapiwa account and Bearer token
- Google account and Google Sheet (OAuth2 setup)
- n8n instance (nodes used: HTTP Request, Code, SplitInBatches, IF, Google Sheets, Wait)

### Customization ideas

- Adjust the filter (e.g., order count, customer tags)
- Use message templates to insert the name and coupon code per customer
- Add an SMS or email fallback for unverified numbers
- Send a run summary to an admin (Slack/email)
- Store logs in a database for deeper analysis

### Important notes

- `data.exists` may be a boolean or a string; normalize it in a Code node before using it in an IF node.
- Ensure the Google Sheets column names match exactly.
- Store Rapiwa and Shopify tokens securely in n8n credentials.
- Start with small batches for testing and scale gradually.

### Useful links

- Dashboard: https://app.rapiwa.com
- Official website: https://rapiwa.com
- Documentation: https://docs.rapiwa.com

### Support & help

- WhatsApp: Chat on WhatsApp
- Discord: SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: https://spagreen.net
- Developer portfolio: Codecanyon SpaGreen
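The number-cleaning step and the `data.exists` normalization called out in the notes above can be sketched together in one Code node. The country-code prefix (`880`, matching the Bangladeshi sample rows) and the leading-zero handling are assumptions; use your own market's prefix.

```javascript
// Sketch of the clean-number step and the data.exists
// normalization described above. '880' prefix is an assumption.
function cleanNumber(waNoStr, countryCode = '880') {
  const digits = String(waNoStr).replace(/\D/g, ''); // strip non-digits
  if (digits.startsWith(countryCode)) return digits;
  // Local numbers often carry a leading 0 that the international
  // format drops (e.g. 01322... -> 8801322...).
  return countryCode + digits.replace(/^0+/, '');
}

// Rapiwa may return data.exists as a boolean or the string "true",
// so normalize before feeding it to the IF node.
function normalizeExists(value) {
  return value === true || value === 'true';
}
```

With this in place, the IF node can compare a plain boolean instead of branching differently on `true` versus `"true"`.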
by Zyte
# Automated AI Web Scraper

This workflow uses the Zyte API to automatically detect and extract structured data from e-commerce sites, articles, job boards, and search engine results (SERP) with no custom CSS selectors required. It features a robust two-phase architecture (Crawler + Scraper) that handles pagination loops, error retries, and data aggregation automatically, ensuring you get a clean CSV export even for large sites with thousands of pages. If you prefer to use your own parsing logic and just need raw data, it also provides a "Manual Mode" for that.

## Supported Modes
- **E-commerce / Product:** Extract prices, images, SKUs, and availability.
- **Articles / News / Forums:** Extract headlines, body text, authors, and dates.
- **Job Boards / Postings:** Extract salaries, locations, and descriptions.
- **SERP (Search Engine Results):** Extract search rankings, organic results, and snippets.
- **General Scraping:** Get raw browser HTML, HTTP response codes, network API traffic, or screenshots to parse yourself.

## How it works
1. **Input:** You enter a URL and choose a goal (e.g., "Scrape all pages") via a user-friendly form.
2. **Smart Routing:** A logic engine automatically configures the correct extraction model for the target website.
3. **Two-Phase Extraction** (active only for "Scrape all pages"): Phase 1 maps out all available URLs (crawling), and Phase 2 extracts the rich data (scraping), filtering out errors before saving to CSV.

## Set up steps
1. **Get your API key:** You need a free Zyte API key to run the AI extraction. Get it here.
2. **Run:** Open the Form view, paste your key, select your target website, and hit Submit.
3. **Export:** The workflow will process the data and output a downloadable CSV file.

## Resources
- Zyte API Documentation
- Get Help (with API errors & extraction logic)
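The "Smart Routing" step can be pictured as a small payload builder for Zyte's `/v1/extract` endpoint. This is a sketch under assumptions: the request fields (`product`, `article`, `jobPosting`, `serp`, `browserHtml`) follow Zyte's public API docs, but the mode names and the mapping itself are hypothetical, chosen to mirror the modes listed above rather than the workflow's actual internals.

```javascript
// Hypothetical mode -> Zyte extraction field mapping; endpoint per Zyte docs.
const ZYTE_ENDPOINT = "https://api.zyte.com/v1/extract";

const FIELD_BY_MODE = {
  ecommerce: "product",
  article: "article",
  job: "jobPosting",
  serp: "serp",
};

// Build the JSON body the HTTP Request node would POST to Zyte.
function buildZytePayload(url, mode) {
  const field = FIELD_BY_MODE[mode];
  // "General Scraping" / Manual Mode: ask for raw browser HTML instead.
  if (!field) return { url, browserHtml: true };
  return { url, [field]: true };
}
```

In the real workflow the selected form goal would play the role of `mode`, and the resulting body would be sent with your Zyte API key as HTTP Basic auth.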
by Adil Khan
# Automated AI content researcher: YouTube & RSS to Notion via Gemini

This workflow is my shortcut for researching popular topics to create podcast content on, without spending all day on YouTube, scrolling X, and reading websites. It grabs the latest videos and articles from the last 24 hours, runs them through Gemini Pro (with SerpAPI to double-check inputs), and then produces actual content hooks and thumbnail ideas. Everything gets cleaned up into a neat Notion database so I can just log in, see what's hot, and start working on the podcast.

## What you need
To use this workflow, you need credentials for the following services:
- **Google Gemini API** (free tier available via Google AI Studio) for the LLM.
- **SerpAPI:** to allow the AI Agent to browse live Google search results.
- **Google Sheets:** to maintain your list of target RSS feeds.
- **Notion:** for storing the researched content.
- **Telegram:** for mobile notifications.

## Resources
- Video tutorial: https://youtu.be/IoHayi68Ckk
- Created by: Scalo Labs
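The "last 24 hours" cut described above is the kind of filter you would drop into an n8n Code node between the RSS Read node and Gemini. A minimal sketch, assuming each item carries the standard RSS `pubDate` field (the exact field name depends on your feed node's output):

```javascript
// Keep only items published within the last 24 hours.
// `now` is injectable so the cutoff is testable; defaults to the current time.
function withinLastDay(items, now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return items.filter(
    (item) => now - new Date(item.pubDate).getTime() <= DAY_MS
  );
}
```

Anything older than a day is dropped before it ever reaches the LLM, which keeps the Gemini prompt small and the Notion database focused on what is actually fresh.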
by Yar Malik (Asfandyar)
## Who's it for
This workflow is designed for researchers, content creators, and AI agents who need to quickly scrape structured web data and capture full-page screenshots for further use. It's especially useful for automating competitive research, news monitoring, and content curation.

## How it works
The workflow uses the Firecrawl API integrated with n8n to perform web searches and return results in structured formats (Markdown and screenshots). It includes:
- A search agent that transforms natural language queries into Firecrawl-compatible search strings.
- HTTP requests that retrieve results from specific sites (e.g., YouTube, news outlets) or across the web.
- Automatic capture of full-page screenshots alongside structured text.
- Integration with the OpenAI Chat Model for enhanced query handling.

## How to set up
1. Import this workflow into your n8n instance.
2. Add and configure your Firecrawl API credentials.
3. Add your OpenAI credentials for natural language query parsing.
4. Trigger the workflow via the included chat input, or modify it to run on a schedule.

## Requirements
- A Firecrawl account with an active API key.
- An n8n self-hosted or cloud instance.
- An OpenAI account if you want to enhance search queries.

## How to customize the workflow
- Update the search queries to focus on your preferred sites or keywords.
- Adjust the number of results with the `limit` parameter.
- Extend the workflow to store screenshots in Google Drive, Notion, or your database.
- Replace the chat trigger with any other event trigger (webhook, schedule, etc.).
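The search agent's job of turning a natural-language query into a Firecrawl-compatible request might look like the sketch below. This is an assumption-labeled example, not the workflow's actual code: the body shape (`query`, `limit`, `scrapeOptions.formats`) follows Firecrawl's `/v1/search` documentation, while the `site:` restriction helper is hypothetical.

```javascript
// Build the JSON body for a POST to Firecrawl's /v1/search endpoint.
// `site` (optional) narrows results to one domain via a site: operator;
// `limit` matches the parameter mentioned in the customization section.
function buildFirecrawlSearch(query, { site, limit = 5 } = {}) {
  return {
    query: site ? `${query} site:${site}` : query,
    limit,
    scrapeOptions: {
      // Markdown text plus a full-page screenshot per result.
      formats: ["markdown", "screenshot@fullPage"],
    },
  };
}
```

An n8n HTTP Request node would send this body with your Firecrawl API key as a Bearer token, and the screenshots could then be forwarded to Google Drive or Notion as suggested above.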
by Felix
## What This Workflow Does
Upload a document (PDF, PNG, JPEG) via a web form and let the easybits Extractor classify it into one of your defined categories. Based on the classification result and a confidence score, the document is automatically sorted into the correct Google Drive folder. Low-confidence or unrecognized documents are flagged for manual review via Slack.

## How It Works
1. A user uploads a file through the hosted web form.
2. The binary file is converted to base64 and sent to easybits.
3. easybits returns a `document_type` and `confidence_score`.
4. The classification result is merged with the original file binary.
5. If confidence > 0.5, the file is routed to the matching Google Drive folder.
6. If confidence ≤ 0.5 or no category matches, the file is uploaded to the Needs Review folder and a Slack alert is sent.

Supported categories: medical_invoice · restaurant_invoice · hotel_invoice · trades_invoice · telecom_invoice

## Setup Guide

### 1. Set Up Your easybits Extractor Pipeline
- Go to extractor.easybits.tech and create a new pipeline.
- Add two fields to the mapping: `document_class` and `confidence_score`.
- In each field's description, paste the corresponding classification or confidence prompt that tells the model how to analyze the document:
  - The classification prompt should return exactly one category label, or `null` if uncertain.
  - The confidence prompt should return a decimal number between 0.0 and 1.0.
- Save and test the pipeline, then copy your Pipeline ID and API Key.

### 2. Set Up Google Drive
- Create a folder in Google Drive for each category: Medical, Restaurant, Hotel, Trades, Telecom, and Needs Review.
- In n8n, go to Settings → Credentials and create a Google Drive OAuth2 credential.
- This requires a Client ID and Client Secret from the Google Cloud Console (APIs & Services → Credentials → OAuth 2.0 Client ID).
- Make sure the Google Drive API is enabled in your Google Cloud project.
- Open each of the six Google Drive upload nodes in this workflow and select the correct target folder.

### 3. Set Up Slack
- In n8n, go to Settings → Credentials and create a Slack API credential.
- You'll need a Slack Bot Token: create a Slack App at api.slack.com/apps, add the `chat:write` scope, and install it to your workspace.
- Create a channel for review notifications (e.g. #n8n-invoice-review).
- Invite the bot to that channel.
- Open the Review Message node and select the correct channel.

### 4. Connect the easybits Node
- Open the easybits Extractor (Classification) node.
- Replace the pipeline URL with your own: `https://extractor.easybits.tech/api/pipelines/YOUR_PIPELINE_ID`
- Create a Bearer Auth credential using your easybits API Key and assign it to the node.

### 5. Activate & Test
- Click Active in the top-right corner of n8n.
- Open the form URL and upload a test document.
- Check the execution log to verify the classification result.
- Confirm the file lands in the correct Google Drive folder.
- Test with an unrecognized document to verify that the Slack notification fires.
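The routing rule in "How It Works" can be sketched as a single function, as you might write it in an n8n Code node feeding the IF branch. The confidence threshold and category names come from the description above; the folder names match the Drive folders in step 2.

```javascript
// Map each easybits category to its Google Drive folder (per Setup Guide step 2).
const FOLDER_BY_TYPE = {
  medical_invoice: "Medical",
  restaurant_invoice: "Restaurant",
  hotel_invoice: "Hotel",
  trades_invoice: "Trades",
  telecom_invoice: "Telecom",
};

// Decide the target folder from the classification result:
// confidence > 0.5 with a known category -> matching folder;
// otherwise -> Needs Review (and the Slack alert branch).
function routeDocument({ document_type, confidence_score }) {
  const folder = FOLDER_BY_TYPE[document_type];
  if (folder && confidence_score > 0.5) return { folder, review: false };
  return { folder: "Needs Review", review: true };
}
```

The `review` flag is what the IF node would switch on: `false` goes straight to the category's Drive upload node, `true` goes to the Needs Review upload plus the Slack Review Message node.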