by Noman Mohammad
## How it Works

This workflow builds a free lead generation system that scrapes emails from Google Maps listings and exports them directly into Google Sheets. It's built in n8n using HTTP requests and JavaScript, with no paid APIs required.

Here's what it does at a high level:

- 🔎 Scrapes business listings from Google Maps based on search queries (e.g., “Miami lawyers”)
- 🌐 Extracts real business website URLs using regex filtering
- 📧 Finds and validates email addresses from each website (see the sketch below)
- 🧹 Cleans data by removing duplicates and invalid entries
- 📊 Exports clean email lists into Google Sheets automatically

## Set Up Steps

Estimated setup time: 1–2 hours

1. Create a Google Sheet with two tabs:
   - searches → add your search queries (e.g., “Calgary dentist”)
   - emails → results will be stored here automatically
2. Connect Google Sheets credentials in n8n
3. Update your Google Sheet document ID in the workflow nodes
4. Test with small batches first, then scale up

## 🚀 Get More Resources & Advanced Workflows

For additional resources, advanced automation tutorials, and business strategies that help you generate more leads and grow your agency, check out my website: 👉 Noman Mohammad

You'll find downloads, guides, and proven systems used by successful marketers and entrepreneurs.
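Under the hood, the scraping and validation happen in n8n Code nodes. Here's a minimal sketch of the email-extraction step, assuming each incoming item carries a page's raw HTML in `json.html` and its address in `json.url` (both hypothetical field names); the template's actual regex and filters may differ:

```javascript
// n8n Code node - minimal sketch of the email-extraction step.
// json.html and json.url are assumed field names; adjust to your items.
const emailRegex = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;

const results = [];
for (const item of $input.all()) {
  const html = item.json.html || '';
  const matches = html.match(emailRegex) || [];
  // Drop image filenames that match the pattern (e.g. logo@2x.png)
  const emails = matches.filter(e => !/\.(png|jpe?g|gif|webp|svg)$/i.test(e));
  for (const email of [...new Set(emails)]) {
    results.push({ json: { email, source: item.json.url } });
  }
}
return results;
```

Swapping the regex makes the same pattern work for pulling business website URLs out of the Google Maps results.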
by Rapiwa
## Who is this for?

This workflow is designed for online store owners, customer-success teams, and marketing operators who want to automatically verify customers' WhatsApp numbers and deliver order updates or invoice links via WhatsApp. It is built around the WooCommerce Trigger (order.updated) event but is easily adaptable to Shopify or other platforms that provide billing and line_items in the trigger payload.

## What this Workflow Does / Key Features

- Listens for WooCommerce order events (example: order.updated) via a Webhook or a WooCommerce trigger.
- Filters only orders with status "completed" and maps the payload into a normalized object: { data: { customer, products, invoice_link } } using the Code node Order Completed check.
- Iterates over line items using SplitInBatches to control throughput.
- Cleans phone numbers (Clean WhatsApp Number code node) by removing all non-digit characters (see the sketch after the setup steps).
- Verifies whether the cleaned phone number is registered on WhatsApp using Rapiwa's verify endpoint (POST https://app.rapiwa.com/api/verify-whatsapp).
- If verified, sends a templated WhatsApp message via Rapiwa (POST https://app.rapiwa.com/api/send-message).
- Appends an audit row to a "Verified & Sent" Google Sheet for successful sends, or to an "Unverified & Not Sent" sheet for unverified numbers.
- Uses Wait and batching to throttle requests and avoid API rate limits.

## Requirements

- HTTP Bearer credential for Rapiwa (example name in flow: Rapiwa Bearer Auth).
- WooCommerce API credential for the trigger (example: WooCommerce (get customer)).
- Running n8n instance with nodes: WooCommerce Trigger, Code, SplitInBatches, HTTP Request, IF, Google Sheets, Wait.
- Rapiwa account and a valid Bearer token.
- Google account with Sheets access and OAuth2 credentials configured in n8n.
- WooCommerce store (or any other trigger source) that provides billing and line_items in the payload.

## How to Use: Step-by-step Setup

1) Credentials
   - Rapiwa: Create an HTTP Bearer credential in n8n and paste your token (flow example name: Rapiwa Bearer Auth).
   - Google Sheets: Add an OAuth2 credential (flow example name: Google Sheets).
   - WooCommerce: Add the WooCommerce API credential or configure a Webhook on your store.
2) Configure Google Sheets
   - The exported flow uses spreadsheet ID: 1S3RtGt5xxxxxxxXmQi_s (Sheet gid=0) as an example. Replace it with your spreadsheet ID and sheet gid.
   - Ensure your sheet column headers exactly match the mapping keys listed below (case and trailing spaces must match or be corrected in the mapping).
3) Verify HTTP Request nodes
   - Verify endpoint: POST https://app.rapiwa.com/api/verify-whatsapp sends { number } (uses the HTTP Bearer credential).
   - Send endpoint: POST https://app.rapiwa.com/api/send-message sends number, message_type=text, and a templated message that uses fields from the Clean WhatsApp Number output.
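The phone clean-up the flow relies on is a one-liner in the Code node. A minimal sketch, assuming the normalized object described above; the exact path to the phone field (data.customer.phone) is an assumption to check against your Order Completed output:

```javascript
// n8n Code node - sketch of the "Clean WhatsApp Number" step.
// data.customer.phone is an assumed path; point it at wherever the
// Order Completed check puts the billing phone in your flow.
return $input.all().map(item => {
  const raw = String(item.json.data?.customer?.phone || '');
  const number = raw.replace(/\D/g, ''); // remove every non-digit character
  return { json: { ...item.json, number } };
});
```

The HTTP Request node that follows POSTs { "number": number } to the verify endpoint with the Rapiwa Bearer credential attached; the IF node then routes on the verification result.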
## Google Sheet Column Structure

The Google Sheets nodes in the flow append rows with these column keys. Make sure the spreadsheet headers match. A Google Sheet formatted like this ➤ sample

| Name | Number | Email | Address | Product Title | Product ID | Total Price | Invoice Link | Delivery Status | Validity | Status |
|------|--------|-------|---------|---------------|------------|-------------|--------------|-----------------|----------|--------|
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | verified | sent |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs h#1168 rd#10 av#10 mirpur dohs dhaka | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | unverified | not sent |

## Important Notes

- Do not hard-code API keys or tokens; always use n8n credentials.
- Google Sheets column header names must match the mapping keys used in the nodes. Trailing spaces are a common accidental problem: trim them in the spreadsheet or adjust the mapping.
- The IF node in the exported flow compares to the string "true". If the verify endpoint returns boolean true/false, convert to string or boolean consistently before the IF.
- Message templates in the flow reference $('Clean WhatsApp Number').item.json.data.products[0]. Update the templates if you need multiple-product support.

## Useful Links

- **Dashboard:** https://app.rapiwa.com
- **Official Website:** https://rapiwa.com
- **Documentation:** https://docs.rapiwa.com

## Support & Help

- **WhatsApp:** Chat on WhatsApp
- **Discord:** SpaGreen Community
- **Facebook Group:** SpaGreen Support
- **Website:** https://spagreen.net
- **Developer Portfolio:** Codecanyon SpaGreen
by Oneclick AI Squad
This n8n workflow automates airline customer support by classifying travel-related questions, fetching relevant information, generating AI answers, and delivering structured responses to users. It ensures accurate travel information delivery, tracks user satisfaction, and logs interactions for future insights, reducing manual support load and improving customer experience.

## Key Features

- Allows users to ask airline/travel questions directly through chat via webhook integration.
- Automatically classifies questions into categories like baggage, refunds, visas, bookings, and travel info.
- Fetches verified travel knowledge and generates responses using AI.
- Performs a satisfaction check and offers human support if needed.
- Logs all conversations and system responses for analytics and support auditing.

## Workflow Process

1. The Webhook Entry Point node receives passenger questions from chat/website (e.g., WhatsApp, web chat widget, or API).
2. The Data Extraction & Cleaning node formats the user query by removing noise and structuring text (see the sketch after this section).
3. The Question Categorization node uses AI to classify the inquiry (e.g., baggage policy, cancellation rules, destination info).
4. The Category Parsing node routes the query to the appropriate context source or knowledge logic.
5. The Knowledge Retrieval node fetches verified travel or airline-specific information.
6. The AI Response Generator node produces a natural, accurate customer-facing reply using the retrieved context.
7. The Response Formatting node adds clarity, structured bullet points, links, and travel guidance tips.
8. The Satisfaction Check node asks if the user is happy with the answer and branches:
   - If satisfied → continue to logging
   - If not satisfied → send request to human support channel
9. The Human Escalation Path node hands unresolved queries to human support teams.
10. The Interaction Logger node stores conversation data (question, category, AI response, feedback status) in a database.
11. The Final Delivery node sends the formatted response back to the user chat channel.

## Setup Instructions

1. Import the workflow into n8n and configure the Webhook Entry Point with your chat platform or airline support portal.
2. Add OpenAI API credentials in the AI Response Generator and Categorization nodes.
3. Set up your Knowledge Retrieval source (e.g., internal travel database, API, or curated knowledge file).
4. Connect a database (e.g., PostgreSQL, MySQL, Supabase, MongoDB) to store conversation logs and user behavior.
5. Configure optional human support integration (Slack, email, CRM, or support desk tool).
6. Test the workflow by sending sample airline queries (e.g., “Baggage limit to Dubai?” or “How to reschedule my flight?”).

## Prerequisites

- n8n instance with webhook, AI, and database nodes enabled.
- OpenAI API key for AI classification and response generation.
- Airline or travel knowledge source (API or internal knowledge base).
- Database connection for logging queries and satisfaction responses.
- Customer chat channel setup (WhatsApp, website widget, CRM integration, or Telegram bot).

## Modification Options

- Enhance the Knowledge Retrieval step to pull real-time data from flight APIs, visa APIs, or airline portals.
- Add language translation to support global passengers.
- Extend the Satisfaction Logic to auto-escalate urgent cases (e.g., flight delays, lost baggage complaints).
- Build self-service functions like:
  - Flight status lookup
  - Refund eligibility checker
  - Visa requirement assistant
- Customize the Response Formatting to include buttons/links (e.g., check baggage rules, contact support).
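For illustration, here is a minimal sketch of the Data Extraction & Cleaning step. The webhook payload shape ({ body: { message, userId, channel } }) is an assumption; map it to whatever your chat platform actually sends:

```javascript
// n8n Code node - sketch of the "Data Extraction & Cleaning" step.
// The payload shape below is an assumption, not the template's actual schema.
const body = $input.first().json.body || {};

const question = String(body.message || '')
  .replace(/\s+/g, ' ') // collapse repeated whitespace and newlines
  .trim();

return [{
  json: {
    question,
    userId: body.userId || 'anonymous',
    channel: body.channel || 'web',
    receivedAt: new Date().toISOString(),
  },
}];
```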
## Explore More AI Travel Workflows

Get in touch with us for custom airline automation!
by Joel Gamble
This workflow pulls news articles from NewsAPI, Mediastack, and CurrentsAPI on a scheduled basis. Each provider's results are normalized into a consistent schema, then written into your database (NocoDB by default).

Use case: automated aggregation of categorized news for content pipelines, research agents, or editorial queues.

## What You Must Update Before Running

### 1. API Keys

Replace all placeholder keys:

- call newsapi.org - Top Headlines → update API_KEY in URL
- call newsapi.org - categories → update API_KEY
- call mediastack → update "ACCESS_KEY" in JSON
- call currentsapi → update "API_KEY" param

### 2. Database Connection

The workflow uses NocoDB to store results. You must:

- Update the NocoDB API Token credential to your own
- Ensure your table includes the fields used in the create operations (source_category, title, summary, author, sources, content, images, publisher_date, etc.)

If you prefer Google Sheets, Airtable, or another DB:

- Replace each NocoDB node with your equivalent "create row" operation
- The Set nodes already provide all normalized fields you need (a sketch of the normalization follows this section)

### 3. Scheduling

All schedulers are disabled by default. Enable the following so the workflow runs automatically:

- **NewsAPI – Top Headlines**
- **NewsAPI – Categories**
- **Mediastack**
- **CurrentsAPI**

You may change the run times, but all four must be scheduled for the workflow to function as designed.

## What You Can Configure

### 1. Categories

Defined in:

- newsapi.org categories
- mediastack categories

Edit these arrays to pull only the categories you care about or to match your API plan limits.

### 2. Article Limits

Adjust article_limit in:

- newsapi.org categories
- mediastack categories
- currentsapi config
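As a reference, a sketch of normalizing one provider's response (NewsAPI shown) into the shared schema the database nodes expect. The left-hand keys match the table fields listed above; the right-hand paths are indicative and will differ for Mediastack and CurrentsAPI, which name their fields differently:

```javascript
// n8n Code node - sketch of normalizing NewsAPI articles into the shared schema.
// Field paths on the right are assumptions; verify against the live response.
const articles = $input.first().json.articles || [];

return articles.map(a => ({
  json: {
    source_category: 'top-headlines', // set per scheduler lane
    title: a.title,
    summary: a.description,
    author: a.author || 'unknown',
    sources: a.source?.name,
    content: a.content,
    images: a.urlToImage,
    publisher_date: a.publishedAt,
  },
}));
```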
by R4wd0G
## Who's it for

Teams that manage tasks in ClickUp and want those tasks reflected, and kept in sync, in Google Calendar automatically.

## How it works

- A ClickUp Trigger captures task events (create, update, delete).
- For new tasks, the workflow creates a Google Calendar event with the correct start/end.
- It stores a mapping between clickupTaskId and calendarEventId in a Google Sheet so later updates and deletions can target the right event (see the lookup sketch below).
- Multiple lanes (personal/school/tech/internship) let you route tasks to different calendars.

## How to set up

1. Assign ClickUp OAuth, Google Calendar, and Google Sheets credentials to the nodes.
2. Open the Configuration node and fill:
   - calendarId_* for each lane
   - sheetId and sheetTabName for the mapping sheet
   - (optional) clickupTeamId
3. Enable the ClickUp Trigger and run a manual test to validate mapping creation and event syncing.

## Requirements

- ClickUp workspace with OAuth permissions
- Google Calendar & Sheets access
- A Google Sheet for the event↔task mapping

## How to customize the workflow

- Edit the calendar routing in Edit Fields nodes or point them to different calendarId_* values.
- Adjust event colors/fields in Google Calendar nodes.
- Extend the mapping sheet with extra columns (e.g., status, labels) as needed.
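The mapping lookup is the heart of the update/delete path. A minimal sketch, assuming a Google Sheets read node supplies the mapping rows and the trigger payload carries task_id (both names are assumptions to verify against your nodes):

```javascript
// n8n Code node - sketch of resolving an existing calendar event for an
// updated or deleted ClickUp task. Node and field names are assumptions.
const taskId = $('ClickUp Trigger').first().json.task_id;

const match = $input.all().find(row => row.json.clickupTaskId === String(taskId));

return [{
  json: {
    taskId,
    // null tells the downstream branch to create a new event instead
    calendarEventId: match ? match.json.calendarEventId : null,
  },
}];
```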
by Jimleuk
# Working with Large Documents In Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service to help AI developers work with documents too large for context windows and runtime memory.

## Prerequisites

You'll need a Subworkflow.ai API key to use the Subworkflow.ai service. Add the API key as a header auth credential. More details in the official docs: https://docs.subworkflow.ai/category/api-reference

## How it Works

1. Import your document into your n8n workflow.
2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100 MB. Once uploaded, this triggers an Extract job on the service's side, and the response is a "job" record to track progress.
3. Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use the IF node looping back onto itself to achieve this in n8n (a sketch of the check follows this section).
4. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems API to retrieve whatever you need to complete your AI task.

In this example, all pages are retrieved and run through a multimodal LLM to parse into markdown: a well-known approach when data tables or graphics need to be parsed.

## How to use

Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents, up to 100 MB and up to 5,000 pages.

## Customising the workflow

Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

## Need Help?

- **Official API documentation**: https://docs.subworkflow.ai/category/api-reference
- **Join the discord**: https://discord.gg/RCHeCPJnYw
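A sketch of the check that feeds the polling IF node. The job-record field names (id, status) and the "finished" value are assumptions; confirm them against the Jobs endpoint response in the official docs:

```javascript
// n8n Code node - sketch of the poll-until-done check behind the IF node.
// Assumes the preceding HTTP node returned the job record from the Jobs endpoint.
const job = $input.first().json;

return [{
  json: {
    jobId: job.id,
    status: job.status,
    done: job.status === 'finished', // IF routes: false => Wait => poll again
  },
}];
```

Pair this with a short Wait node on the "not done" branch so you don't hammer the Jobs endpoint.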
by Evoort Solutions
# Analyze Webpages with Landing Page Analyzer AI & Generate Google Docs Reports (CRO)

## Description

This workflow integrates the Landing Page Analyzer AI to automatically audit landing pages, format the insights into a conversion-focused report, and save it directly into Google Docs. It leverages the Landing Page Analyzer AI to grade your page, highlight strengths, and suggest improvements, all without manual steps.

## Nodes Explanation

1. **On form submission**: Captures the URL of the landing page entered by the user to trigger the workflow. Serves as the entry point to pass the URL to the Landing Page Analyzer AI.
2. **WebPage Analyzer (API Call via RapidAPI)**: Sends the URL to the Landing Page Analyzer AI for audit data. Retrieves key analytics: grade, score, suggestions, strengths, and conversion metrics.
3. **Reformat (Code Node)**: Converts the raw JSON from the Landing Page Analyzer AI into structured Markdown. Builds sections for grade, overall score, suggestions, strengths, and score breakdown (a sketch follows this section).
4. **Upload In Google Docs**: Inserts the formatted Markdown report into a predefined Google Document. Ensures the audit output from the Landing Page Analyzer AI is saved and shareable.

## Benefits of This Workflow

- **Hands-Free Audits:** Automatically performs a landing page evaluation using the powerful Landing Page Analyzer AI.
- **Consistent, Professional Reports:** Standardized Markdown formatting ensures clarity and readability.
- **Effortless Documentation:** Results are directly stored in Google Docs; no manual copying required.
- **Scalable & Repeatable:** Ideal for continuous optimization across multiple pages or campaigns.

## Use Cases

- **SEO & CRO Agencies:** Quickly generate conversion audit reports using the Landing Page Analyzer AI to optimize client landing pages at scale.
- **Marketing Teams:** Automate weekly or campaign-based auditing of landing pages, with results logged in Google Docs for easy sharing and review.
- **Freelancers & Consultants:** Deliver polished, data-driven conversion reports to clients, powered by the Landing Page Analyzer AI via RapidAPI, without repetitive manual work.
- **Growth Hackers & Product Managers:** Monitor iterations of landing pages over time; each version can be audited automatically and archived in Docs for comparison.

## 🔐 How to Get Your API Key for the Landing Page Analyzer AI API

1. Go to 👉 Landing Page Analyzer AI
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API Key will be visible in the "x-rapidapi-key" header.

🔑 Copy and paste this key into the httpRequest node in your workflow.

## Conclusion

This n8n workflow streamlines landing page optimization by leveraging the Landing Page Analyzer AI, transforming raw audit output into insightful, presentation-ready reports in Google Docs. Perfect for teams and individuals focused on data-driven improvements, scalability, and efficiency.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
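For reference, a sketch of the Reformat step. The response field names (grade, score, strengths, suggestions) are inferred from the description above, not taken from the live API; adjust the paths to the real analyzer payload:

```javascript
// n8n Code node - sketch of the "Reformat" step turning audit JSON into Markdown.
// Field names are inferred assumptions; verify against the actual API response.
const r = $input.first().json;

const markdown = [
  '# Landing Page Audit',
  '',
  `**Grade:** ${r.grade} | **Overall score:** ${r.score}`,
  '',
  '## Strengths',
  ...(r.strengths || []).map(s => `- ${s}`),
  '',
  '## Suggestions',
  ...(r.suggestions || []).map(s => `- ${s}`),
].join('\n');

return [{ json: { markdown } }];
```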
by Evoort Solutions
# Automate YouTube Channel Metadata Extraction to Google Docs

## Description

This workflow leverages the powerful YouTube Metadata API to automatically extract detailed metadata from any YouTube channel URL. It collects information like subscribers, views, keywords, and banners, reformats it for readability, and saves it directly to Google Docs for easy sharing and record-keeping. Ideal for marketers, content creators, and analysts looking to streamline YouTube channel data collection.

By integrating the YouTube Metadata API, this workflow ensures accurate and up-to-date channel insights fetched instantly from the source.

## Node-by-Node Explanation

1. **On form submission**: Triggers the workflow when a user submits a YouTube channel URL via a web form, starting the metadata extraction process.
2. **YouTube Channel Metadata (HTTP Request)**: Calls the YouTube Metadata API with the provided channel URL to retrieve comprehensive channel details like title, subscriber count, and banner images.
3. **Reformat (Code)**: Transforms the raw API response into a clean, formatted string with emojis and markdown styling for easy reading and better presentation (a sketch follows this section).
4. **Add Data in Google Docs**: Appends the formatted channel metadata into a specified Google Docs document, providing a centralized and accessible record of the data.

## Benefits of This Workflow

- **Automated Data Collection:** Eliminates manual effort by automatically extracting YouTube channel data via the YouTube Metadata API.
- **Accurate & Reliable:** Ensures data accuracy by using a trusted API source, keeping metadata current.
- **Improved Organization:** Saves data in Google Docs, allowing for easy sharing, editing, and collaboration.
- **User-Friendly:** A simple form-based trigger lets anyone gather channel info without technical knowledge.
- **Scalable & Flexible:** Can process multiple URLs easily; perfect for marketing or research teams handling numerous channels.

## Use Cases

- **Marketing Teams:** Track competitor YouTube channel stats and trends for strategic planning.
- **Content Creators:** Monitor channel growth metrics and optimize content strategy accordingly.
- **Researchers:** Collect and analyze YouTube channel data for academic or market research projects.
- **Social Media Managers:** Automate reporting by documenting channel performance metrics in Google Docs.
- **Businesses:** Maintain up-to-date records of brand or partner YouTube channels efficiently.

By leveraging the YouTube Metadata API, this workflow provides an efficient, scalable solution to extract and document YouTube channel metadata with minimal manual input.

## 🔑 How to Get Your API Key for the YouTube Metadata API

1. Visit the API Page: Go to the YouTube Metadata API on RapidAPI.
2. Sign Up/Login: Create an account or log in if you already have one.
3. Subscribe to the API: Click "Subscribe to Test" and choose a plan (free or paid).
4. Copy Your API Key: After subscribing, your API Key will be available in the "X-RapidAPI-Key" section under "Endpoints".
5. Use the Key: Include the key in your API requests like this: -H "X-RapidAPI-Key: YOUR_API_KEY"

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
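A sketch of the Reformat step. All field names on the right (title, subscriberCount, viewCount, keywords, banner) are assumptions; map them to whatever the YouTube Metadata API actually returns:

```javascript
// n8n Code node - sketch of the "Reformat" step. Field names are assumptions.
const c = $input.first().json;

const text = [
  `📺 **${c.title}**`,
  `👥 Subscribers: ${c.subscriberCount}`,
  `👁️ Views: ${c.viewCount}`,
  `🏷️ Keywords: ${(c.keywords || []).join(', ')}`,
  `🖼️ Banner: ${c.banner}`,
].join('\n');

return [{ json: { text } }];
```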
by Evoort Solutions
# SEO On Page API – Complete Guide, Use Cases & Benefits

The SEO On Page API is a powerful tool for keyword research, competitor analysis, backlink insights, and overall SEO optimization. With multiple endpoints, you can instantly gather actionable SEO data without juggling multiple tools. You can explore and subscribe via SEO On Page API.

## 📌 Description

The SEO On Page API allows you to quickly analyze websites, keywords, backlinks, and competitors, all in one place. Ideal for SEO professionals, marketers, and developers who want fast, accurate, and easy-to-integrate data.

## Node-by-node Overview

- **On form submission**: Shows a web form (field: website) and triggers the workflow on submit.
- **Global Storage**: Copies website (and optional country) into the execution JSON for reuse.
- **Website Traffic Cheker**: POSTs website to webtraffic.php (RapidAPI) to fetch a traffic summary.
- **Re-Format**: Extracts data.semrushAPI.trafficSummary[0] from the traffic API response.
- **Website Traffic**: Appends traffic metrics (visits, users, bounce, etc.) to the "WebSite Traffic" sheet.
- **Website Metrics DA PA**: POSTs website to dapa.php (RapidAPI) to get DA, PA, spam score, DR, and organic traffic.
- **Re-Format 2**: Pulls the data object from the DA/PA API response for clean mapping.
- **DA PA**: Appends DA/PA and related fields into the "DA PA" sheet.
- **Top Baclinks**: POSTs website to backlink.php (RapidAPI) to retrieve backlink data.
- **Re-Format 3**: Extracts data.semrushAPI.backlinksOverview (aggregate backlink metrics).
- **Backlinks Overview**: Appends overview metrics into the "Backlinks Overview" sheet.
- **Re-Format 4**: Extracts the detailed data.semrushAPI.backlinks (individual backlinks list).
- **Backlinks**: Appends each backlink row into the "Backlinks" sheet.
- **Competitors Analysis**: POSTs website to competitor.php (RapidAPI) to fetch competitor datasets.
- **Re-Format 5**: Flattens all array datasets under data.semrushAPI into rows with a dataset label (see the sketch below).
- **Competitor Analysis**: Appends the flattened competitor and keyword rows into the "Competitor Analysis" sheet.

## 🚀 Use Cases

- **Keyword Research**: Find high-volume, low-competition keywords for content planning.
- **Competitor Analysis**: Identify competitor strategies and ranking keywords.
- **Backlink Insights**: Discover referring domains and link-building opportunities.
- **Domain Authority Checks**: Evaluate site authority before guest posting or partnerships.
- **Content Optimization**: Improve on-page SEO using actionable data.

## 💡 Benefits

- **One API, Multiple Insights**: No need for multiple SEO tools.
- **Accurate Data**: Get trusted metrics for informed decision-making.
- **Fast Integration**: Simple POST requests for quick setup.
- **Time-Saving**: Automates complex SEO analysis in seconds.
- **Affordable**: Access enterprise-grade SEO insights without breaking the bank.

📍 Start using the SEO On Page API today to supercharge your keyword research, backlink tracking, and competitor analysis.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n

Save time, stay consistent, and grow your site's search presence effortlessly!
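Since Re-Format 5 does the most interesting work, here is a minimal sketch of that step based on the description above (flattening every array under data.semrushAPI into labeled rows); the exact response shape should be verified against the live API:

```javascript
// n8n Code node - sketch of the "Re-Format 5" step: one row per array entry,
// tagged with its dataset name, ready for the "Competitor Analysis" append.
const api = $input.first().json.data?.semrushAPI || {};
const rows = [];

for (const [dataset, value] of Object.entries(api)) {
  if (!Array.isArray(value)) continue; // only flatten the array datasets
  for (const entry of value) {
    rows.push({ json: { dataset, ...entry } });
  }
}

return rows;
```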
by Growth AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Firecrawl batch scraping to Google Docs

## Who's it for

AI chatbot developers, content managers, and data analysts who need to extract and organize content from multiple web pages for knowledge base creation, competitive analysis, or content migration projects.

## What it does

This workflow automatically scrapes content from a list of URLs and converts each page into a structured Google Doc in markdown format. It's designed for batch processing multiple pages efficiently, making it ideal for building AI knowledge bases, analyzing competitor content, or migrating website content to documentation systems.

## How it works

The workflow follows a systematic scraping process:

1. URL Input: Reads a list of URLs from a Google Sheets template
2. Data Validation: Filters out empty rows and already-processed URLs (see the sketch at the end of this template description)
3. Batch Processing: Loops through each URL sequentially
4. Content Extraction: Uses Firecrawl to scrape and convert content to markdown
5. Document Creation: Creates individual Google Docs for each scraped page
6. Progress Tracking: Updates the spreadsheet to mark completed URLs
7. Final Notification: Provides a completion summary with access to the scraped content

## Requirements

- Firecrawl API key (for web scraping)
- Google Sheets access
- Google Drive access (for document creation)
- Google Sheets template (provided)

## How to set up

### Step 1: Prepare your template

- Copy the Google Sheets template and create your own version for personal use
- Ensure the sheet has a tab named "Page to doc"
- List all URLs you want to scrape in the "URL" column

### Step 2: Configure API credentials

Set up the following credentials in n8n:

- Firecrawl API: For web content scraping and markdown conversion
- Google Sheets OAuth2: For reading URLs and updating progress
- Google Drive OAuth2: For creating content documents

### Step 3: Set up your Google Drive folder

- The workflow saves scraped content to a specific Drive folder
- Default folder: "Contenu scrapé" (Content Scraped)
- Folder ID: 1ry3xvQ9UqM2Rf9C4-AoJdg1lfB9inh_5 (customize this to your own folder)
- Create your own folder and update the folder ID in the "Create file markdown scraping" node

### Step 4: Choose your trigger method

- Option A: Chat interface. Use the default chat trigger and send your Google Sheets URL through the chat interface.
- Option B: Manual trigger. Replace the chat trigger with a manual trigger and set the Google Sheets URL as a variable in the "Get URL" node.

## How to customize the workflow

### URL source customization

- Sheet name: Change "Page to doc" to your preferred tab name
- Column structure: Modify field mappings if using different column names
- URL validation: Adjust filtering criteria for URL format requirements
- Batch size: The workflow processes all URLs sequentially (no batch size limit)

### Scraping configuration

- Firecrawl options: Add specific scraping parameters (wait times, JavaScript rendering)
- Content format: Currently outputs markdown (can be modified for other formats)
- Error handling: The workflow continues processing even if individual URLs fail
- Retry logic: Add retry mechanisms for failed scraping attempts

### Output customization

- Document naming: Currently uses the URL as the document name (customizable)
- Folder organization: Create subfolders for different content types
- File format: Switch from Google Docs to other formats (PDF, TXT, etc.)
- Content structure: Add headers, metadata, or formatting to scraped content

### Progress tracking enhancements

- Status columns: Add more detailed status tracking (failed, retrying, etc.)
- Metadata capture: Store scraping timestamps, content length, etc.
- Error logging: Track which URLs failed and why
- Completion statistics: Generate summary reports of scraping results

## Use cases

### AI knowledge base creation

- E-commerce product pages: Scrape product descriptions and specifications for chatbot training
- Documentation sites: Convert help articles into structured knowledge base content
- FAQ pages: Extract customer service information for automated support systems
- Company information: Gather about pages, services, and team information

### Content analysis and migration

- Competitor research: Analyze competitor website content and structure
- Content audits: Extract existing content for analysis and optimization
- Website migrations: Back up content before site redesigns or platform changes
- SEO analysis: Gather content for keyword and structure analysis

### Research and documentation

- Market research: Collect information from multiple industry sources
- Academic research: Gather content from relevant web sources
- Legal compliance: Document website terms, policies, and disclaimers
- Brand monitoring: Track content changes across multiple sites

## Workflow features

### Smart processing logic

- Duplicate prevention: Skips URLs already marked as "Scrapé" (scraped)
- Empty row filtering: Automatically ignores rows without URLs
- Sequential processing: Handles one URL at a time to avoid rate limiting
- Progress updates: Real-time status updates in the source spreadsheet

### Error handling and resilience

- Graceful failures: Continues processing remaining URLs if individual scrapes fail
- Status tracking: Clear indication of completed vs. pending URLs
- Completion notification: Summary message with a link to the scraped content folder
- Manual restart capability: Can resume processing from where it left off

## Results interpretation

### Organized content output

Each scraped page creates:

- Individual Google Doc: Named with the source URL
- Markdown formatting: Clean, structured content extraction
- Metadata preservation: Original URL and scraping timestamp
- Organized storage: All documents in the designated Google Drive folder

### Progress tracking

The source spreadsheet shows:

- URL list: Original URLs to be processed
- Status column: "OK" for completed, empty for pending
- Real-time updates: Progress visible during workflow execution
- Completion summary: Final notification with access instructions

## Workflow limitations

- Sequential processing: Processes URLs one at a time (prevents rate limiting but slower for large lists)
- Google Drive dependency: Requires Google Drive for document storage
- Firecrawl rate limits: Subject to Firecrawl API limitations and quotas
- Single format output: Currently outputs only Google Docs (easily customizable)
- Manual setup: Requires Google Sheets template preparation before use
- No content deduplication: Creates separate documents even for similar content
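A sketch of the data-validation step referenced in "How it works". The column names (URL, Status) follow the template; the description mentions both "Scrapé" and "OK" as done-markers, so both are treated as completed here (check which your copy of the sheet actually uses):

```javascript
// n8n Code node - sketch of the validation step: keep only rows that have a
// URL and are not yet marked as processed. Column names are assumptions.
const DONE = ['Scrapé', 'OK'];

return $input.all().filter(item => {
  const url = String(item.json.URL || '').trim();
  const status = String(item.json.Status || '').trim();
  return url.startsWith('http') && !DONE.includes(status);
});
```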
by Meelioo
## How it Works

This workflow creates automated daily backups of your n8n workflows to a GitLab repository:

1. Scheduled Trigger: Runs automatically at noon each day to initiate the backup process
2. Fetch Workflows: Retrieves all active workflows from your n8n instance, filtering out archived ones
3. Compare & Process: Checks existing files in GitLab and compares them with current workflows
4. Smart Upload: For each workflow, either updates the existing file in GitLab (if it exists) or creates a new one
5. Notification System: Sends success/failure notifications to a designated Slack channel with execution details

> The workflow intelligently handles each file individually, cleaning up unnecessary metadata before converting workflows to formatted JSON files ready for version control (a sketch of this step follows below).

## Set up Steps

Estimated setup time: 15-20 minutes

You'll need to configure three credential connections and customize the Configuration node:

- **GitLab API**: Create a project access token with write permissions to your backup repository
- **n8n Internal API**: Generate an API key from your n8n user settings
- **Slack Bot**: Set up a Slack app with bot token permissions for posting messages to your notification channel

> Once credentials are configured, update the Configuration node with your GitLab project owner, repository name, and target branch. The workflow includes detailed setup instructions in the sticky notes for each credential type. After setup, activate the workflow to begin daily automated backups.
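For illustration, a sketch of the metadata clean-up before the GitLab commit. Which fields count as "unnecessary metadata" is a judgment call; the ones stripped here (updatedAt, versionId, pinData) are assumptions about a typical n8n export:

```javascript
// n8n Code node - sketch of cleaning workflow exports before version control.
// The stripped fields are assumptions; adjust to your export's noise.
return $input.all().map(item => {
  const wf = { ...item.json };
  delete wf.updatedAt;  // changes on every save, would dirty every diff
  delete wf.versionId;  // internal revision marker
  delete wf.pinData;    // pinned test data, not part of the workflow logic

  return {
    json: {
      fileName: `${wf.name.replace(/[^\w\- ]/g, '')}.json`, // safe file name
      content: JSON.stringify(wf, null, 2), // formatted JSON for readable diffs
    },
  };
});
```

Stripping volatile fields is what keeps the GitLab history useful: commits then only appear when a workflow genuinely changes.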
by Ihor Nikolenko
# 😎 For Fast-Growing Your Telegram Channel (Lead Magnet Gate)

📋 Not plug-and-play: configuration is required before use.

This workflow implements a subscription gate for a Telegram lead magnet campaign. Users must subscribe to a Telegram channel before they can access the lead magnet (e.g., a free resource, discount code, or exclusive content).

## 🇺🇦 Українською

Цей воркфлоу реалізує шлюз підписки для кампанії лід-магніту в Telegram. Користувачі повинні підписатися на канал Telegram, перш ніж зможуть отримати доступ до лід-магніту (наприклад, безкоштовного ресурсу, коду знижки або ексклюзивного контенту).

## 🎥 YouTube Video Integration

- Watch Tutorial in English - UPD Link After Approve Workflow
- Дивитись Інструкцію Українською (Watch the tutorial in Ukrainian) - UPD Link After Approve Workflow

Sticky notes are included for every implementation step.

## 🛠️ Configuration Notes

- Channel ID: Replace inputyourid with your actual Telegram channel ID (without @), or use the -100... form for a closed channel
- Bot Token: Replace the bot token placeholders with your actual Telegram bot token
- Lead Magnet: Update the lead magnet delivery message with your actual file/resource links, webinar link, or discount code
- Upsell Content: Customize the upsell/cross-sell content as needed

## 🌍 Bilingual Support

All user-facing messages are provided in both Ukrainian and English to support international audiences:

- Ukrainian text appears first
- English text follows after a line break
- Buttons include both languages where appropriate

## 📈 Use Cases

- Lead generation for Telegram channels
- Content gating for exclusive resources
- Community building through subscription requirements
- Marketing funnel automation

## 🤖 Template Features

### ✅ Ready-to-Use Template

Simply import and configure with your Telegram bot credentials.

### 📚 Comprehensive Documentation

- Visual sticky notes explaining each node's purpose
- Detailed workflow documentation
- Logic explanation notes

### 🧠 Smart Workflow Design

- Efficient data flow with minimal API calls
- Proper error handling and user feedback
- Responsive button interactions
- Conditional routing based on subscription status (see the sketch at the end)

## 🚀 Quick Start Guide

1. Import Workflow: Download the JSON file and import it into your n8n instance (Cloud or Self-hosted)
2. Configure Telegram Credentials: Set up your Telegram bot token in the credentials section and ensure your bot has the necessary permissions
3. Customize Channel Settings: Replace inputyourid with your actual Telegram channel ID and update all placeholder links with your actual resources
4. Personalize Messages: Modify the lead magnet delivery messages, customize the upsell content, and watch the YouTube tutorials
5. Test the Workflow: Activate the workflow in your n8n instance, test with a non-subscribed account, verify that subscription verification works correctly, and test the upsell sequence with the /ok command (you can change this command)

## 📄 License

This template is provided as-is for use with the n8n automation platform. Feel free to modify and adapt it to your specific needs.

## 🙋‍♂️ Support

For issues with this template, please check that:

- All placeholder values have been replaced
- The Telegram bot has proper permissions
- The n8n instance is properly configured
- Internet connectivity is available

https://t.me/nikolenkoclub
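Under the hood, the gate boils down to one Bot API call: getChatMember reports whether a user currently belongs to the channel. A plain JavaScript sketch of that check (in the template this is done with an HTTP Request node; BOT_TOKEN and CHANNEL_ID stand in for your credential and configuration values):

```javascript
// Plain Node.js sketch of the subscription check (Node 18+, global fetch).
// Use n8n credentials in practice - never hard-code real tokens.
const BOT_TOKEN = 'YOUR_BOT_TOKEN';   // placeholder
const CHANNEL_ID = '@your_channel';   // or the -100... numeric ID for a closed channel

async function isSubscribed(userId) {
  const url = `https://api.telegram.org/bot${BOT_TOKEN}/getChatMember` +
    `?chat_id=${encodeURIComponent(CHANNEL_ID)}&user_id=${userId}`;
  const { result } = await (await fetch(url)).json();
  // status is one of: creator, administrator, member, restricted, left, kicked
  return ['creator', 'administrator', 'member'].includes(result?.status);
}

// Example: gate the lead magnet on the check
isSubscribed(123456789).then(ok =>
  console.log(ok ? 'send lead magnet' : 'ask user to subscribe first'));
```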