by Evoort Solutions
Automated SEO Website Audit with n8n, Google Docs & RapidAPI's SEO Analyzer

Description: Use n8n to automate SEO audits with the Website SEO Analyzer and Audit AI from RapidAPI. Capture a URL, run a full audit, and export a structured SEO report to Google Docs — all without manual steps.

⚙️ Node-by-Node Explanation

🟢 formTrigger — On Form Submission: Starts the workflow when a user submits a URL through a form and collects the website to be analyzed.
🌐 httpRequest — Website Audit: Sends the submitted URL to the Website SEO Analyzer and Audit AI via a POST request, fetching detailed SEO data such as meta tags, keyword usage, and technical performance.
🧠 code — Reformat: Transforms the raw JSON from the Website SEO Analyzer and Audit AI into a structured Markdown summary, organized into sections such as Metadata, Keyword Density, Page Performance, and Security.
📄 googleDocs — Add Data In Google Docs: Inserts the formatted SEO audit report into a pre-connected Google Docs file so audit data can be easily shared, tracked, or archived.

🌟 Benefits

✅ Powered by **Website SEO Analyzer and Audit AI**: Leverage a reliable, cloud-based SEO tool via RapidAPI.
🔁 End-to-End SEO Workflow: Fully automates input, audit, formatting, and export to documentation.
📊 Human-Readable Reports: Translates raw API output into structured, insightful summaries.
📂 Centralized Documentation: Stores SEO audits in Google Docs for easy reference and historical tracking.

🚀 Use Cases

📈 SEO Agencies: Generate fast, consistent SEO audits using the Website SEO Analyzer and Audit AI — ideal for client reporting.
🏢 In-House Web Teams: Regularly audit corporate websites and track performance in a document-based SEO log.
🧲 Lead Generation for SEO Services: Offer real-time audits through a public form to attract and qualify leads.
📅 Monthly SEO Health Checks: Automate recurring site audits and log results using n8n and RapidAPI.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
Save time, stay consistent, and grow your search presence effortlessly!
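The Reformat step above is a standard n8n Code node. As a minimal sketch only — the response field names (metaTags, keywordDensity, performance, security) are assumptions, not the documented output of the Website SEO Analyzer and Audit AI — the JSON-to-Markdown conversion could look like this:

```javascript
// n8n Code node (JavaScript) – minimal sketch of the Reformat step.
// Field names are hypothetical; adjust them to the real API response.
const audit = $input.first().json;

const lines = [];
lines.push(`# SEO Audit – ${audit.url ?? 'unknown URL'}`);

lines.push('## Metadata');
lines.push(`- Title: ${audit.metaTags?.title ?? 'n/a'}`);
lines.push(`- Description: ${audit.metaTags?.description ?? 'n/a'}`);

lines.push('## Keyword Density');
for (const [keyword, density] of Object.entries(audit.keywordDensity ?? {})) {
  lines.push(`- ${keyword}: ${density}%`);
}

lines.push('## Page Performance');
lines.push(`- Load time: ${audit.performance?.loadTime ?? 'n/a'}`);

lines.push('## Security');
lines.push(`- HTTPS: ${audit.security?.https ? 'yes' : 'no'}`);

// Return a single item whose "report" field is what the Google Docs node inserts.
return [{ json: { report: lines.join('\n') } }];
```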
by Philippe
Summary

This workflow enables the submission of business-critical URLs via the Google Indexing API and IndexNow.

Why is this important for SEO? If your objective is visibility within AI-powered search and answer engines (such as Copilot, Perplexity, or OpenAI tools), the IndexNow integration is particularly relevant. IndexNow accelerates URL discovery for Bing and Yandex, which are key retrieval sources for several LLM-based platforms. In parallel, Google remains the dominant search engine, representing ~80% of global search traffic. Gemini is deeply integrated into Google's ecosystem and, when grounding is enabled, can leverage Google Search as an external retrieval source. Ensuring fast and reliable indexation of critical URLs therefore remains a strategic foundation for both traditional SEO and AI-assisted search experiences.

Description

This workflow uses an Oncrawl API endpoint to automatically discover your sitemaps.xml and submit their latest updates to both the Google Indexing API and IndexNow. It includes two variations:
- Index orphan pages detected in sitemap.xml and submit them to Google and IndexNow.
- Index newly released pages by identifying indexable canonical URLs added between a pre-release crawl and a post-release crawl.

How it works

This workflow is for Oncrawl users with API access enabled in their plan. If you are not an Oncrawl user, please refer to: https://n8n.io/workflows/8778-workflow-for-submitting-changed-sitemap-urls-using-google-indexing-api-and-bing-indexnow/

To get an API key, go to your User Account profile > Tokens > + Add API access token:
- Description: any name
- Scope: select all checkboxes
- Click Create token. Keep your API secret safe.

Discover & parse Sitemaps

Create your first crawl by clicking Create configuration > choose a template > Automate > Webhook.
- Webhook Node: In n8n, copy and paste the Webhook callback URL into the Oncrawl Webhook section. At the end of the crawl, Oncrawl sends a POST HTTP request to n8n containing Workspace_ID, Project_ID, and Crawl_ID. More details in the Webhook documentation: https://developer.oncrawl.com/#notification
- Discover_sitemaps endpoint (documentation: https://developer.oncrawl.com/): this endpoint checks the sitemaps declared in your robots.txt file. You can filter the output to avoid duplicate sitemaps.
- Config: an initialization node that populates variables such as:
  - Crawl_ID: fetched from the Webhook node
  - SITE_URL: your site in the format https://your-site.com
  - SITEMAP_URL: for subdomain sitemaps, you can duplicate this field.
  - INDEXNOW_KEY: you can create it in Bing Webmaster Tools here: https://www.bing.com/indexnow/getstarted
  - INDEXNOW_KEY_URL: usually your domain followed by the INDEXNOW_KEY, e.g. www.example.com/<INDEXNOW_KEY>

Variables you can update depending on your specs:
- DAYS_BACK: 7 by default.
  For Google, the status of each page is checked before submitting; for IndexNow, all pages updated within the last 7 days are submitted for indexing.
- BATCH_SIZE: 500, the default recommended by IndexNow.
- USE_GOOGLE, USE_INDEXNOW: true by default, which means the process runs for both Google and IndexNow.

Google Node

Check Status Node (OAuth setup): documentation: https://developers.google.com/webmaster-tools/v1/urlInspection.index/inspect
- Create credentials: https://console.cloud.google.com/apis/credentials
- Enable the Google Search Console API
- Download the Client ID / Client Secret JSON
- Connect n8n using: Client ID, Client Secret, Scopes, Google Search Console account
- All explanations are contained in these tutorials: https://www.youtube.com/watch?v=HT56wExnN5k | https://www.youtube.com/watch?v=FBGtpWMTppw
- Scopes reference: https://developers.google.com/identity/protocols/oauth2/scopes

Google Index API:
- Create a service account here: https://console.cloud.google.com/iam-admin/serviceaccounts
- Assign role: Owner
- Generate a JSON key (contains email + private key)
For the two Google API nodes:
- Authentication: Predefined credential type
- Credential Type: Google Service Account API
- Credential configuration: Region: your project region; Service Account Email / Private Key: from the JSON key; enable "Set up for use in HTTP Request node"
- Scope: https://www.googleapis.com/auth/indexing
⚠️ Important: once you have created a service account email, you need to add a user with this email and the "Owner" permission in your Google Search Console: https://search.google.com/search-console/users

Other Nodes
- Gate: Google: Is USE_GOOGLE = true from Config?
- Check status: gets the coverageState and lastCrawlTime of a given URL from Google Search Console
- Loop Over Items: prevents rate limiting
- Switch:
  - Case coverageState = "Submitted and indexed" -> push to the Is New node
  - Case coverageState = "Crawled - currently not indexed" -> push to the Submit node
- Is New: keeps sitemap URLs whose last modification date is AFTER the Google last crawl date. If true, the URL is submitted to the Indexing API; if false, there is no need to push that URL for indexation.
- URL Updates doc: https://developers.google.com/search/apis/indexing-api/v3/using-api#gettinginfo. Endpoint: https://indexing.googleapis.com/v3/urlNotifications:publish. We call the Update URL request.
- Wait: generates a random delay between 0.30 and 1.50 seconds, rounded to 2 decimals.
⚠️ A Google alternative for batch-indexing URLs is to use a premium service to bypass the URL Inspection tool: https://fr.speedyindex.com/

IndexNow auto-submitting documentation: https://www.bing.com/indexnow/getstarted
- Gate: IndexNow: Is USE_INDEXNOW = true from Config?
- Split in Batches: splits into batches of 500 URLs max to avoid rate-limiting issues
- Build IndexNow payload: builds the IndexNow submission payload (see the sketch at the end of this entry)
- IndexNow Submit: submits the URLs to IndexNow

VariationA: Index orphan pages
- API documentation: https://developer.oncrawl.com/#Data-API
- OQL definition: get orphan pages for both sitemaps & logs
- Merge node: merges items with an inner join on the loc and url fields. This is useful to recover the lastmod of orphan pages referenced in sitemaps; this data can then be passed to the Google node.
- Input 1 should be the "Assign mandatory sitemap fields" node. In the next nodes, update the "Set Node" name in the script variables.

VariationB: Index newly added pages between Crawl 1 and Crawl 2
- API documentation: https://developer.oncrawl.com/#Data-API
- OQL definition: returns indexable canonical pages added in Crawl 2
- Merge node: merges items that match on the loc and url fields. This is useful to recover the lastmod data for the Google node.
- Input 1 should be the "Assign mandatory sitemap fields" node. In the next nodes, update the "Set Node" name in the script variables.
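As a minimal sketch of what the Build IndexNow payload step can produce (the payload fields follow the public IndexNow submission format; the Config variable names are taken from this description, and the node reference is an assumption about how the workflow is wired):

```javascript
// n8n Code node (JavaScript) – minimal sketch of the IndexNow batch payload.
// SITE_URL, INDEXNOW_KEY and INDEXNOW_KEY_URL come from the Config node above;
// incoming items are assumed to carry one URL each in json.url.
const config = $('Config').first().json;   // assumption: the Set node is named "Config"
const urls = $input.all().map(item => item.json.url);

const payload = {
  host: new URL(config.SITE_URL).host,     // e.g. "your-site.com"
  key: config.INDEXNOW_KEY,
  keyLocation: config.INDEXNOW_KEY_URL,    // location of the key file on your domain
  urlList: urls,                           // max 500 URLs per batch (BATCH_SIZE)
};

// The following HTTP Request node POSTs this payload to the IndexNow endpoint.
return [{ json: payload }];
```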
by Evoort Solutions
Analyze Webpages with Landing Page Analyzer AI & Generate Google Docs Reports (CRO)

Description

This workflow integrates the Landing Page Analyzer AI to automatically audit landing pages, format the insights into a conversion-focused report, and save it directly into Google Docs. It leverages the Landing Page Analyzer AI to grade your page, highlight strengths, and suggest improvements—all without manual steps.

Nodes Explanation

On form submission: Captures the URL of the landing page entered by the user to trigger the workflow. Serves as the entry point to pass the URL to the Landing Page Analyzer AI.
WebPage Analyzer (API call via RapidAPI): Sends the URL to the Landing Page Analyzer AI for audit data. Retrieves key analytics: grade, score, suggestions, strengths, and conversion metrics.
Reformat (Code node): Converts the raw JSON from the Landing Page Analyzer AI into structured Markdown. Builds sections for grade, overall score, suggestions, strengths, and score breakdown.
Upload In Google Docs: Inserts the formatted Markdown report into a predefined Google Document. Ensures the audit output from the Landing Page Analyzer AI is saved and shareable.

Benefits of This Workflow

- **Hands-Free Audits**: Automatically performs a landing page evaluation using the powerful Landing Page Analyzer AI.
- **Consistent, Professional Reports**: Standardized Markdown formatting ensures clarity and readability.
- **Effortless Documentation**: Results are directly stored in Google Docs—no manual copying required.
- **Scalable & Repeatable**: Ideal for continuous optimization across multiple pages or campaigns.

Use Cases

- **SEO & CRO Agencies**: Quickly generate conversion audit reports using the Landing Page Analyzer AI to optimize client landing pages at scale.
- **Marketing Teams**: Automate weekly or campaign-based auditing of landing pages, with results logged in Google Docs for easy sharing and review.
- **Freelancers & Consultants**: Deliver polished, data-driven conversion reports to clients—powered by Landing Page Analyzer AI via RapidAPI—without repetitive manual work.
- **Growth Hackers & Product Managers**: Monitor iterations of landing pages over time; each version can be audited automatically and archived in Docs for comparison.

🔐 How to Get Your API Key for the Landing Page Analyzer AI API

Go to 👉 Landing Page Analyzer AI.
Click "Subscribe to Test" (you may need to sign up or log in).
Choose a pricing plan (there's a free tier for testing).
After subscribing, click on the "Endpoints" tab.
Your API key will be visible in the "x-rapidapi-key" header.
🔑 Copy and paste this key into the httpRequest node in your workflow.

Conclusion

This n8n workflow streamlines landing page optimization by leveraging the Landing Page Analyzer AI, transforming raw audit output into insightful, presentation-ready reports in Google Docs. Perfect for teams and individuals focused on data-driven improvements, scalability, and efficiency.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
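Since the WebPage Analyzer step is a plain RapidAPI call, its rough equivalent outside n8n is sketched below. The host name, endpoint path, request body, and response fields are placeholders — copy the exact values from the API's "Endpoints" tab — but the two RapidAPI headers are the standard ones:

```javascript
// Rough, hedged equivalent of the HTTP Request node, for orientation only.
// Replace host and path with the values shown on the RapidAPI "Endpoints" tab.
const response = await fetch('https://landing-page-analyzer-ai.example-host.rapidapi.com/analyze', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-rapidapi-key': process.env.RAPIDAPI_KEY,   // the key copied from RapidAPI
    'x-rapidapi-host': 'landing-page-analyzer-ai.example-host.rapidapi.com',
  },
  body: JSON.stringify({ url: 'https://example.com/landing-page' }),
});

const audit = await response.json();              // grade, score, suggestions, strengths…
console.log(audit.grade, audit.score);            // field names are assumptions
```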
by Oneclick AI Squad
This workflow automates the process of receiving vendor quotations, extracting and summarizing their contents using AI, and logging the results for comparison. The system listens for new file uploads via webhook, processes each file using a summarization engine, and generates a well-formatted summary table that is stored in Google Sheets and sent via email to stakeholders.

Good to Know

- **Saves hours of manual work** by auto-comparing multiple vendor quotations.
- **Uses AI summarization** to intelligently identify highlights and differences in each quote.
- **Supports structured summaries** for quick stakeholder decision-making.
- **Maintains a Google Sheets log** for historical comparison and auditing.
- **Email notifications** ensure stakeholders receive real-time updates.

How It Works

1. Upload Quotes: Webhook trigger that listens for uploaded vendor quotation files (PDF, Excel, or Docs).
2. Extract File Data: Parses the uploaded file and extracts relevant quote data (price, items, vendor name, etc.).
3. AI Summarization: Sends extracted data to an AI API (Grok) to generate a human-readable comparison summary.
4. Wait For Reply: Pauses the workflow until the AI response is fully received.
5. Format Summary: Formats the AI-generated content into a structured summary (e.g., table format or comparison bullets).
6. Log to Google Sheets: Appends the formatted summary to a Google Sheet for tracking and reference.
7. Send Email: Emails the summary to predefined recipients (procurement, finance, etc.).

Data Sources

- **Uploaded Vendor Quotation Files** – typically PDF, DOCX, or Excel files containing vendor proposals.
- **AI API (Grok)** – processes the quote data and returns a summarized comparison.

How to Use

- Import the workflow into your n8n instance (self-hosted or cloud).
- Configure the Webhook URL to receive file uploads.
- Set up file extraction logic in the "Extract File Data" node to match your file format.
- Configure your Grok API credentials in the "AI Summarization" node.
- Connect your Google Sheets account to the "Log to Google Sheets" node.
- Customize the recipient email address in the "Send Email" node.
- Test with sample quotation files to validate the entire flow.

Requirements

- **Self-hosted n8n instance** (if using community nodes).
- **API key for Grok** or another AI summarization service.
- **Google account access** to log summary data to Sheets.
- **Mail credentials** for sending automated emails (SMTP setup).
- **File parsing logic** (for PDFs, DOCX, Excel) depending on your vendor formats.

Customizing This Workflow

- **Modify the Extract File Data node** to support additional quote formats or fields.
- **Enhance AI Summarization** with custom prompts or models for industry-specific terms.
- **Format output into a PDF summary** or comparison chart if needed.
- **Add Slack/Teams integration** for real-time team alerts.
- **Apply filters** to compare only specific vendors or line items.
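The Format Summary step can be as simple as turning the AI's structured answer into a Markdown comparison table. A minimal sketch for an n8n Code node, assuming the AI has been prompted to return a JSON array of quotes with vendor, total, deliveryTime, and notes fields (those names are hypothetical):

```javascript
// n8n Code node (JavaScript) – minimal sketch, assuming the AI response was
// parsed into an array like [{ vendor, total, deliveryTime, notes }, ...].
const quotes = $input.first().json.quotes ?? [];

const header = '| Vendor | Total | Delivery time | Notes |\n|---|---|---|---|';
const rows = quotes.map(q =>
  `| ${q.vendor} | ${q.total} | ${q.deliveryTime} | ${q.notes ?? ''} |`
);

// "summaryTable" can be appended to Google Sheets or embedded in the email body.
return [{ json: { summaryTable: [header, ...rows].join('\n') } }];
```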
by Sk developer
Automated Keyword Analysis and Google Sheets Logging

Automate keyword research with n8n and log essential SEO data like search volume, trends, competition, and keyword difficulty directly into Google Sheets. Simplify your SEO efforts with real-time insights.

Node-by-Node Explanation

1. On form submission (Trigger)
**Purpose:** Triggers the workflow when a user submits the form with "country" and "keyword" as inputs.
**Explanation:** This node initiates the process by accepting user input from the form and passing it to the next node for analysis.

2. Keyword Analysis (HTTP Request)
**Purpose:** Sends a request to an external SEO API to analyze the provided keyword, fetching data like search volume, trends, and competition.
**Explanation:** This node calls the Keyword Research Tool API with the country and keyword inputs from the form, retrieving essential keyword data for further processing.

3. Re-format output (Code)
**Purpose:** Processes and reformats the API response into a structured format suitable for logging into Google Sheets.
**Explanation:** Extracts and organizes the keyword data (e.g., competition, CPC, search volume) into a format that can be easily mapped to Google Sheets columns.

4. Google Sheets (Append)
**Purpose:** Appends the reformatted keyword data into the specified Google Sheets document.
**Explanation:** Logs the fetched keyword insights into a Google Sheets document, allowing for continuous tracking and analysis.

Benefits of This Workflow

- **Automated Keyword Research:** Eliminates manual keyword research by automating the entire process using the Keyword Research Tool API.
- **Real-time Data Tracking:** Fetches up-to-date SEO metrics from the Keyword Research Tool API and logs them directly into Google Sheets for easy access and analysis.
- **Efficient Workflow:** Saves time by integrating multiple tools (form, SEO API, Google Sheets) into one seamless process.
- **SEO Insights:** Provides detailed insights like search volume, trends, competition, and keyword difficulty, aiding strategic decision-making with the help of the Keyword Research Tool API.

Use Case

This workflow is ideal for digital marketers, SEO professionals, and content creators who need to analyze keyword performance and track essential SEO metrics efficiently. It automates keyword research by calling the Keyword Research Tool API, fetching relevant data, and logging it into Google Sheets. This makes it easier to monitor and optimize SEO strategies in real time.
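For reference, the Re-format output step is typically a small Code node that maps the API response onto the sheet's columns. A minimal sketch, with the response field names (vol, cpc, competition, keyword_difficulty) treated as assumptions rather than the documented Keyword Research Tool API output:

```javascript
// n8n Code node (JavaScript) – minimal sketch; field names are assumptions,
// adjust them to the real Keyword Research Tool API response.
const data = $input.first().json;

return [{
  json: {
    Keyword: data.keyword,
    Country: data.country,
    'Search Volume': data.vol,
    CPC: data.cpc,
    Competition: data.competition,
    'Keyword Difficulty': data.keyword_difficulty,
    'Checked At': new Date().toISOString(),   // handy for tracking over time
  },
}];
```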
by Sk developer
Backlink Checker with Google Sheets Logging (SEO)

Description: This workflow helps you analyze top backlinks using the Semrush API and logs the results directly into Google Sheets for easy SEO tracking and reporting. It integrates the Top Backlink Checker API from RapidAPI, providing in-depth backlink analysis, and combines that with Google Sheets for efficient data storage and tracking.

Node-by-Node Explanation:

1. On form submission: Captures the website URL submitted by the user through a form. This node triggers the workflow when the form is filled with a website URL. The Top Backlink Checker API (via RapidAPI) is used to check backlinks after this step.
2. Check webTraffic: Sends a request to the Top Backlink Checker API to gather traffic data for the submitted website. This includes important metrics like visits, bounce rate, and more, which will later be stored in Google Sheets for analysis.
3. Reformat output: Extracts and re-formats the traffic data received from the Top Backlink Checker API. This node cleans and structures the raw data for easier processing, ensuring it is usable in later stages of the workflow.
4. Reformat: Processes the backlink data received from the Top Backlink Checker API (RapidAPI). The data is reformatted and structured to be added to Google Sheets for storage, making it easier to analyze.
5. Backlink overview: Appends the re-formatted backlink overview data into a Google Sheets document. This stores important backlink information like source URLs, anchor texts, and more, making it available for later analysis and reporting.
6. Backlinks: Appends detailed backlink data, including target URLs, anchors, and internal/external links, into Google Sheets. This helps track individual backlinks, their attributes, and page scores, allowing for deeper SEO analysis and reporting.

Benefits and Use Cases:

Benefits:
- **Backlink Tracking**: The integration of the Top Backlink Checker API helps you track all the backlinks associated with a website. You get insights on the source URL, anchor text, first and last seen dates, and more.
- **Traffic Insights**: The Top Backlink Checker API integration lets you monitor important website traffic data such as visits, bounce rates, and organic reach, helping with SEO strategies.
- **Automated Google Sheets Logging**: All traffic and backlink data is logged automatically into Google Sheets for easy access and future analysis. This avoids manual data entry and ensures consistency.
- **Efficient Workflow**: The automation provided by n8n streamlines your SEO analysis workflow, ensuring that data is formatted, structured, and updated without any manual intervention.

Use Cases:
- **SEO Reports**: Generate regular SEO reports by tracking backlinks and traffic data automatically from Semrush and the Top Backlink Checker, saving time and ensuring accurate reporting.
- **Competitor Analysis**: Analyze your competitors' backlinks and traffic to stay ahead in SEO rankings by leveraging data from the Top Backlink Checker API.
- **Backlink Management**: Use the data from the Top Backlink Checker API to assess the health of backlinks, ensuring that high-value backlinks are tracked and toxic backlinks are identified for removal or disavow.
- **SEO Campaign Tracking**: Monitor how backlinks and website traffic evolve over time to evaluate the effectiveness of your SEO campaigns, keeping all your data in Google Sheets for easy tracking.
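Step 4 (Reformat) usually has to fan the API's backlink array out into one item per row before the Backlinks node can append them. A minimal n8n Code node sketch, with the nested response path and field names as assumptions:

```javascript
// n8n Code node (JavaScript) – minimal sketch; the response path
// (data.backlinks) and field names are assumptions, not the documented API output.
const backlinks = $input.first().json.data?.backlinks ?? [];

// One output item per backlink → one Google Sheets row per backlink.
return backlinks.map(b => ({
  json: {
    'Source URL': b.source_url,
    'Target URL': b.target_url,
    Anchor: b.anchor,
    'First Seen': b.first_seen,
    'Last Seen': b.last_seen,
    'Page Score': b.page_score,
  },
}));
```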
by Evoort Solutions
Automate YouTube Channel Metadata Extraction to Google Docs

Description: This workflow leverages the powerful YouTube Metadata API to automatically extract detailed metadata from any YouTube channel URL. Using the YouTube Metadata API, it collects information like subscribers, views, keywords, and banners, reformats it for readability, and saves it directly to Google Docs for easy sharing and record-keeping. Ideal for marketers, content creators, and analysts looking to streamline YouTube channel data collection. By integrating the YouTube Metadata API, this workflow ensures accurate and up-to-date channel insights fetched instantly from the source.

Node-by-Node Explanation

1. On form submission: Triggers the workflow when a user submits a YouTube channel URL via a web form, starting the metadata extraction process.
2. YouTube Channel Metadata (HTTP Request): Calls the YouTube Metadata API with the provided channel URL to retrieve comprehensive channel details like title, subscriber count, and banner images.
3. Reformat (Code): Transforms the raw API response into a clean, formatted string with emojis and markdown styling for easy reading and better presentation.
4. Add Data in Google Docs: Appends the formatted channel metadata into a specified Google Docs document, providing a centralized and accessible record of the data.

Benefits of This Workflow

- **Automated Data Collection:** Eliminates manual effort by automatically extracting YouTube channel data via the YouTube Metadata API.
- **Accurate & Reliable:** Ensures data accuracy by using a trusted API source, keeping metadata current.
- **Improved Organization:** Saves data in Google Docs, allowing for easy sharing, editing, and collaboration.
- **User-Friendly:** A simple form-based trigger lets anyone gather channel info without technical knowledge.
- **Scalable & Flexible:** Can process multiple URLs easily, perfect for marketing or research teams handling numerous channels.

Use Cases

- **Marketing Teams:** Track competitor YouTube channel stats and trends for strategic planning.
- **Content Creators:** Monitor channel growth metrics and optimize content strategy accordingly.
- **Researchers:** Collect and analyze YouTube channel data for academic or market research projects.
- **Social Media Managers:** Automate reporting by documenting channel performance metrics in Google Docs.
- **Businesses:** Maintain up-to-date records of brand or partner YouTube channels efficiently.

By leveraging the YouTube Metadata API, this workflow provides an efficient, scalable solution to extract and document YouTube channel metadata with minimal manual input.

🔑 How to Get Your API Key for the YouTube Metadata API

Visit the API Page: Go to the YouTube Metadata API on RapidAPI.
Sign Up/Login: Create an account or log in if you already have one.
Subscribe to the API: Click "Subscribe to Test" and choose a plan (free or paid).
Copy Your API Key: After subscribing, your API key will be available in the "X-RapidAPI-Key" section under "Endpoints".
Use the Key: Include the key in your API requests like this: -H "X-RapidAPI-Key: YOUR_API_KEY"

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
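The Reformat step is again a small Code node. A minimal sketch of the emoji-and-markdown formatting it describes, with the response field names (title, subscriberCount, viewCount, keywords, bannerUrl) as assumptions:

```javascript
// n8n Code node (JavaScript) – minimal sketch; field names are assumptions.
const ch = $input.first().json;

const text = [
  `📺 Channel: ${ch.title}`,
  `👥 Subscribers: ${ch.subscriberCount}`,
  `👀 Total views: ${ch.viewCount}`,
  `🏷️ Keywords: ${(ch.keywords ?? []).join(', ')}`,
  `🖼️ Banner: ${ch.bannerUrl ?? 'n/a'}`,
].join('\n');

// "content" is what the Google Docs node appends to the document.
return [{ json: { content: text } }];
```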
by Evoort Solutions
SEO On Page API – Complete Guide, Use Cases & Benefits

The SEO On Page API is a powerful tool for keyword research, competitor analysis, backlink insights, and overall SEO optimization. With multiple endpoints, you can instantly gather actionable SEO data without juggling multiple tools. You can explore and subscribe via the SEO On Page API.

📌 Description

The SEO On Page API allows you to quickly analyze websites, keywords, backlinks, and competitors — all in one place. Ideal for SEO professionals, marketers, and developers who want fast, accurate, and easy-to-integrate data.

Node-by-node Overview

- On form submission — Shows a web form (field: website) and triggers the workflow on submit.
- Global Storage — Copies website (and optional country) into the execution JSON for reuse.
- Website Traffic Cheker — POSTs website to webtraffic.php (RapidAPI) to fetch the traffic summary.
- Re-Format — Extracts data.semrushAPI.trafficSummary[0] from the traffic API response.
- Website Traffic — Appends traffic metrics (visits, users, bounce, etc.) to the "WebSite Traffic" sheet.
- Website Metrics DA PA — POSTs website to dapa.php (RapidAPI) to get DA, PA, spam score, DR, and organic traffic.
- Re-Format 2 — Pulls the data object from the DA/PA API response for clean mapping.
- DA PA — Appends DA/PA and related fields into the "DA PA" sheet.
- Top Baclinks — POSTs website to backlink.php (RapidAPI) to retrieve backlink data.
- Re-Format 3 — Extracts data.semrushAPI.backlinksOverview (aggregate backlink metrics).
- Backlinks Overview — Appends overview metrics into the "Backlinks Overview" sheet.
- Re-Format 4 — Extracts the detailed data.semrushAPI.backlinks (individual backlinks list).
- Backlinks — Appends each backlink row into the "Backlinks" sheet.
- Competitors Analysis — POSTs website to competitor.php (RapidAPI) to fetch competitor data sets.
- Re-Format 5 — Flattens all array datasets under data.semrushAPI into rows with a dataset label.
- Competitor Analysis — Appends the flattened competitor and keyword rows into the "Competitor Analysis" sheet.

🚀 Use Cases

- **Keyword Research** – Find high-volume, low-competition keywords for content planning.
- **Competitor Analysis** – Identify competitor strategies and ranking keywords.
- **Backlink Insights** – Discover referring domains and link-building opportunities.
- **Domain Authority Checks** – Evaluate site authority before guest posting or partnerships.
- **Content Optimization** – Improve on-page SEO using actionable data.

💡 Benefits

- **One API, Multiple Insights** – No need for multiple SEO tools.
- **Accurate Data** – Get trusted metrics for informed decision-making.
- **Fast Integration** – Simple POST requests for quick setup.
- **Time-Saving** – Automates complex SEO analysis in seconds.
- **Affordable** – Access enterprise-grade SEO insights without breaking the bank.

📍 Start using the *SEO On Page API* today to supercharge your keyword research, backlink tracking, and competitor analysis.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
Save time, stay consistent, and grow your search presence effortlessly!
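Re-Format 5 is the only non-trivial mapping in the node list above: it has to turn several arrays living under data.semrushAPI into flat, labeled rows. A minimal sketch of that Code node, with the exact dataset and field names treated as unknowns:

```javascript
// n8n Code node (JavaScript) – minimal sketch of "Re-Format 5".
// Every array found under data.semrushAPI becomes a set of rows tagged
// with its dataset name; actual dataset/field names depend on the API response.
const semrush = $input.first().json.data?.semrushAPI ?? {};
const rows = [];

for (const [dataset, value] of Object.entries(semrush)) {
  if (!Array.isArray(value)) continue;           // skip scalar summary fields
  for (const entry of value) {
    rows.push({ json: { dataset, ...entry } });  // one sheet row per entry
  }
}

return rows;
```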
by Growth AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Firecrawl batch scraping to Google Docs

Who's it for

AI chatbot developers, content managers, and data analysts who need to extract and organize content from multiple web pages for knowledge base creation, competitive analysis, or content migration projects.

What it does

This workflow automatically scrapes content from a list of URLs and converts each page into a structured Google Doc in markdown format. It's designed for batch processing multiple pages efficiently, making it ideal for building AI knowledge bases, analyzing competitor content, or migrating website content to documentation systems.

How it works

The workflow follows a systematic scraping process:
- URL Input: Reads a list of URLs from a Google Sheets template
- Data Validation: Filters out empty rows and already-processed URLs
- Batch Processing: Loops through each URL sequentially
- Content Extraction: Uses Firecrawl to scrape and convert content to markdown
- Document Creation: Creates individual Google Docs for each scraped page
- Progress Tracking: Updates the spreadsheet to mark completed URLs
- Final Notification: Provides a completion summary with access to the scraped content

Requirements

- Firecrawl API key (for web scraping)
- Google Sheets access
- Google Drive access (for document creation)
- Google Sheets template (provided)

How to set up

Step 1: Prepare your template
- Copy the Google Sheets template
- Create your own version for personal use
- Ensure the sheet has a tab named "Page to doc"
- List all URLs you want to scrape in the "URL" column

Step 2: Configure API credentials
Set up the following credentials in n8n:
- Firecrawl API: For web content scraping and markdown conversion
- Google Sheets OAuth2: For reading URLs and updating progress
- Google Drive OAuth2: For creating content documents

Step 3: Set up your Google Drive folder
- The workflow saves scraped content to a specific Drive folder
- Default folder: "Contenu scrapé" (Content Scraped)
- Folder ID: 1ry3xvQ9UqM2Rf9C4-AoJdg1lfB9inh_5 (customize this to your own folder)
- Create your own folder and update the folder ID in the "Create file markdown scraping" node

Step 4: Choose your trigger method
- Option A: Chat interface. Use the default chat trigger and send your Google Sheets URL through the chat interface.
- Option B: Manual trigger. Replace the chat trigger with a manual trigger and set the Google Sheets URL as a variable in the "Get URL" node.

How to customize the workflow

URL source customization
- Sheet name: Change "Page to doc" to your preferred tab name
- Column structure: Modify field mappings if using different column names
- URL validation: Adjust filtering criteria for URL format requirements
- Batch size: The workflow processes all URLs sequentially (no batch size limit)

Scraping configuration
- Firecrawl options: Add specific scraping parameters (wait times, JavaScript rendering)
- Content format: Currently outputs markdown (can be modified for other formats)
- Error handling: The workflow continues processing even if individual URLs fail
- Retry logic: Add retry mechanisms for failed scraping attempts

Output customization
- Document naming: Currently uses the URL as the document name (customizable)
- Folder organization: Create subfolders for different content types
- File format: Switch from Google Docs to other formats (PDF, TXT, etc.)
- Content structure: Add headers, metadata, or formatting to scraped content

Progress tracking enhancements
- Status columns: Add more detailed status tracking (failed, retrying, etc.)
- Metadata capture: Store scraping timestamps, content length, etc.
- Error logging: Track which URLs failed and why
- Completion statistics: Generate summary reports of scraping results

Use cases

AI knowledge base creation
- E-commerce product pages: Scrape product descriptions and specifications for chatbot training
- Documentation sites: Convert help articles into structured knowledge base content
- FAQ pages: Extract customer service information for automated support systems
- Company information: Gather about pages, services, and team information

Content analysis and migration
- Competitor research: Analyze competitor website content and structure
- Content audits: Extract existing content for analysis and optimization
- Website migrations: Back up content before site redesigns or platform changes
- SEO analysis: Gather content for keyword and structure analysis

Research and documentation
- Market research: Collect information from multiple industry sources
- Academic research: Gather content from relevant web sources
- Legal compliance: Document website terms, policies, and disclaimers
- Brand monitoring: Track content changes across multiple sites

Workflow features

Smart processing logic
- Duplicate prevention: Skips URLs already marked as "Scrapé" (scraped)
- Empty row filtering: Automatically ignores rows without URLs
- Sequential processing: Handles one URL at a time to avoid rate limiting
- Progress updates: Real-time status updates in the source spreadsheet

Error handling and resilience
- Graceful failures: Continues processing remaining URLs if individual scrapes fail
- Status tracking: Clear indication of completed vs. pending URLs
- Completion notification: Summary message with a link to the scraped content folder
- Manual restart capability: Can resume processing from where it left off

Results interpretation

Organized content output
Each scraped page creates:
- Individual Google Doc: Named with the source URL
- Markdown formatting: Clean, structured content extraction
- Metadata preservation: Original URL and scraping timestamp
- Organized storage: All documents in the designated Google Drive folder

Progress tracking
The source spreadsheet shows:
- URL list: Original URLs to be processed
- Status column: "OK" for completed, empty for pending
- Real-time updates: Progress visible during workflow execution
- Completion summary: Final notification with access instructions

Workflow limitations
- Sequential processing: Processes URLs one at a time (prevents rate limiting but slower for large lists)
- Google Drive dependency: Requires Google Drive for document storage
- Firecrawl rate limits: Subject to Firecrawl API limitations and quotas
- Single format output: Currently outputs only Google Docs (easily customizable)
- Manual setup: Requires Google Sheets template preparation before use
- No content deduplication: Creates separate documents even for similar content
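For orientation, the Content Extraction step boils down to one Firecrawl scrape call per URL. A hedged sketch of what that request looks like outside n8n; the endpoint and response shape follow Firecrawl's v1 scrape API as commonly documented, so double-check against the current Firecrawl docs before relying on it:

```javascript
// Rough sketch of one Firecrawl scrape call, assuming the v1 /scrape endpoint.
// FIRECRAWL_API_KEY is your Firecrawl credential; the URL comes from the sheet.
const res = await fetch('https://api.firecrawl.dev/v1/scrape', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
  },
  body: JSON.stringify({
    url: 'https://example.com/some-page',
    formats: ['markdown'],          // ask Firecrawl for markdown output
  }),
});

const body = await res.json();
// body.data.markdown is what gets written into the per-page Google Doc.
console.log(body.data?.markdown?.slice(0, 200));
```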
by Sk developer
Automated Video Generation, Google Drive Upload, and Email Notification with Veo 3 Fast API

This workflow automates the process of generating videos using the Veo 3 Fast API, uploading the video to Google Drive, and notifying the user via email. All tasks are executed seamlessly, ensuring a smooth user experience with automatic error handling.

Node-by-Node Explanation

- On Form Submission: Triggers the workflow when a user submits a form with a prompt.
- Veo 3 Fast API Processor: Sends the user's prompt to the Veo 3 Fast API to generate a video.
- Wait for API Response: Pauses the workflow for 35 seconds to allow the API to respond.
- API Request: Check Task Status: Sends a request to check the status of the video generation task.
- Condition: Task Output Status: Evaluates whether the task succeeded, is still processing, or failed.
- Wait for Task to Complete: Pauses the workflow for 30 seconds and rechecks the task status if processing is ongoing.
- Send Email: API Error - Task Failed: Sends an email if the task fails to generate the video.
- Send Email: API Error - Task ID Missing: Sends an email if the task ID is missing in the response.
- Download Video: Downloads the processed video from the provided output URL.
- Upload File to Google Drive: Uploads the processed video to the user's Google Drive.
- Set Google Drive Permissions: Sets the necessary sharing permissions for the uploaded video.
- Send an Email: Video Link: Sends an email with the link to the uploaded video.

How to Obtain a RapidAPI Key

- Go to Veo 3 Fast on RapidAPI.
- Create an account or log in.
- Subscribe to the API plan that suits your needs.
- After subscribing, find your API key in the "Keys & Access" section.

How to Configure the Google Drive API

- Go to the Google Cloud Console.
- Create a new project or select an existing one.
- Enable the Google Drive API for the project.
- Go to Credentials and create OAuth 2.0 credentials.
- Add the credentials to your n8n Google Drive node for seamless access to your Google Drive.

Use Case

- **Use Case**: A content creation team can automate the video production process, upload videos to Google Drive, and share them with stakeholders instantly after the task is complete.

Benefits

- **Efficiency**: Reduces manual tasks, saving time and effort by automating video creation and file management.
- **Error Handling**: Sends notifications for task failures or missing data, ensuring quick resolutions.
- **Seamless Integration**: Automatically uploads files to Google Drive and shares the link with users, streamlining the workflow.

Who Is This For

- **Content Creators**: Automates video creation and file management.
- **Marketing Teams**: Quick and easy video generation for campaigns.
- **Developers**: Can integrate with APIs and automate tasks.
- **Business Teams**: Save time by automating repetitive tasks like file uploads and email notifications.
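The wait-and-recheck pattern above is a simple polling loop. Outside n8n it could be sketched like this; the endpoint path, header names, status strings, and output field are placeholders, since they depend on the Veo 3 Fast listing on RapidAPI:

```javascript
// Hypothetical polling sketch for a long-running generation task.
// Endpoint, headers and status values are assumptions, not the documented API.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForVideo(taskId, apiKey) {
  while (true) {
    const res = await fetch(`https://veo3-fast.example-host.rapidapi.com/task/${taskId}`, {
      headers: { 'x-rapidapi-key': apiKey },
    });
    const task = await res.json();

    if (task.status === 'success') return task.outputUrl;    // ready to download
    if (task.status === 'failed') throw new Error('Video generation failed');

    await sleep(30_000);   // mirror the 30-second Wait node, then recheck
  }
}
```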
by Sk developer
Automated IMDB Video Downloader: Download, Upload to Google Drive & Notify via Email

Easily download IMDB videos via a user-friendly form. Automatically fetch video links using the IMDB Downloader API, save videos to Google Drive, and notify users via email with shareable links or failure alerts. Perfect for content creators and marketers.

Node-by-Node Explanation

- **On form submission**: Triggers the workflow when a user submits an IMDB video URL via a form.
- **Fetch IMDB Video Info from API**: Sends the URL to the IMDB Downloader API to get video metadata and download links.
- **Check API Response Status**: Verifies that the API responded successfully (status code 200).
- **Download Video File**: Downloads the video from the provided media URL.
- **Upload Video to Google Drive**: Uploads the downloaded video file to a specified Google Drive folder.
- **Google Drive Set Permission**: Sets sharing permissions on the uploaded video for easy access.
- **Success Notification Email with Drive Link**: Emails the user the Google Drive link to access the video.
- **Processing Delay**: Adds a wait time before sending failure notifications.
- **Failure Notification Email**: Emails the user if the video download or processing fails.

How to Obtain Your RapidAPI Key

- Go to RapidAPI's IMDB Downloader API page.
- Sign up or log in to your RapidAPI account.
- Subscribe to the IMDB Downloader API.
- Find your unique x-rapidapi-key in the dashboard under the API keys section.
- Replace "your key" in your workflow headers with this key to authenticate requests.

Use Cases & Benefits

Use Cases
- Content creators downloading trailers or clips quickly.
- Marketing teams preparing video content for campaigns.
- Educators sharing film excerpts.
- Social media managers sourcing videos efficiently.

Benefits
- Fully automates the video download and upload workflow.
- Seamless Google Drive integration with sharing.
- Instant user notifications on success or failure.
- User-friendly with simple URL form submission.

Who Is This For?

- **Content creators** looking for fast video downloads.
- **Marketers** needing instant access to IMDB clips.
- **Educators** requiring film excerpts for lessons.
- **Social media managers** preparing engaging content.
- Any user wanting hassle-free IMDB video downloads with cloud storage.
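The Check API Response Status branch is the one piece of logic worth illustrating: continue only on HTTP 200 with a usable media URL, otherwise fall through to the failure email. A minimal sketch for an n8n Code node feeding an IF node, with the response field names purely as assumptions:

```javascript
// n8n Code node (JavaScript) – minimal sketch; "statusCode" and "media[0].url"
// are assumptions about the IMDB Downloader API response shape.
const res = $input.first().json;

const ok = res.statusCode === 200 && Array.isArray(res.media) && res.media.length > 0;

return [{
  json: {
    ok,                                     // a downstream IF node branches on this
    mediaUrl: ok ? res.media[0].url : null, // passed to the Download Video File node
    title: res.title ?? 'untitled',
  },
}];
```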
by Philippe
Summary

This workflow enables the submission of business-critical URLs via the Google Indexing API and IndexNow.

Why is this important for SEO? If your objective is visibility within AI-powered search and answer engines (such as Copilot, Perplexity, or OpenAI tools), the IndexNow integration is particularly relevant. IndexNow accelerates URL discovery for Bing and Yandex, which are key retrieval sources for several LLM-based platforms. In parallel, Google remains the dominant search engine, representing ~80% of global search traffic. Gemini is deeply integrated into Google's ecosystem and, when grounding is enabled, can leverage Google Search as an external retrieval source. Ensuring fast and reliable indexation of critical URLs therefore remains a strategic foundation for both traditional SEO and AI-assisted search experiences.

Description

This workflow uses an Oncrawl API endpoint to automatically discover your sitemaps.xml and submit their latest updates to both the Google Indexing API and IndexNow. It includes two variations:
- Index orphan pages detected in sitemap.xml and submit them to Google and IndexNow.
- Index newly released pages by identifying indexable canonical URLs added between a pre-release crawl and a post-release crawl.

How it works

This workflow is for Oncrawl users with API access enabled in their plan. If you are not an Oncrawl user, please refer to: https://n8n.io/workflows/8778-workflow-for-submitting-changed-sitemap-urls-using-google-indexing-api-and-bing-indexnow/

To get an API key, go to your User Account profile > Tokens > + Add API access token:
- Description: any name
- Scope: select all checkboxes
- Click Create token. Keep your API secret safe.

Discover & parse Sitemaps

Create your first crawl by clicking Create configuration > choose a template > Automate > Webhook.
- Webhook Node: In n8n, copy and paste the Webhook callback URL into the Oncrawl Webhook section. At the end of the crawl, Oncrawl sends a POST HTTP request to n8n containing Workspace_ID, Project_ID, and Crawl_ID. More details in the Webhook documentation: https://developer.oncrawl.com/#notification
- Discover_sitemaps endpoint (documentation: https://developer.oncrawl.com/): this endpoint checks the sitemaps declared in your robots.txt file. You can filter the output to avoid duplicate sitemaps.
- Config: an initialization node that populates variables such as:
  - Crawl_ID: fetched from the Webhook node
  - SITE_URL: your site in the format https://your-site.com
  - SITEMAP_URL: for subdomain sitemaps, you can duplicate this field.
  - INDEXNOW_KEY: you can create it in Bing Webmaster Tools here: https://www.bing.com/indexnow/getstarted
  - INDEXNOW_KEY_URL: usually your domain followed by the INDEXNOW_KEY, e.g. www.example.com/<INDEXNOW_KEY>

Variables you can update depending on your specs:
- DAYS_BACK: 7 by default.
- BATCH_SIZE: 500, the default recommended by IndexNow.
- USE_GOOGLE, USE_INDEXNOW: true by default, which means the process runs for both Google and IndexNow.

Google Node

Check Status Node (OAuth setup): documentation: https://developers.google.com/webmaster-tools/v1/urlInspection.index/inspect
- Create credentials: https://console.cloud.google.com/apis/credentials
- Enable the Google Search Console API
- Download the Client ID / Client Secret JSON
- Connect n8n using: Client ID, Client Secret, Scopes, Google Search Console account
- All explanations are contained in these tutorials: https://www.youtube.com/watch?v=HT56wExnN5k | https://www.youtube.com/watch?v=FBGtpWMTppw
- Scopes reference: https://developers.google.com/identity/protocols/oauth2/scopes

Google Index API:
- Create a service account here: https://console.cloud.google.com/iam-admin/serviceaccounts
- Assign role: Owner
- Generate a JSON key (contains email + private key)
For the two Google API nodes:
- Authentication: Predefined credential type
- Credential Type: Google Service Account API
- Credential configuration: Region: your project region; Service Account Email / Private Key: from the JSON key; enable "Set up for use in HTTP Request node"
- Scope: https://www.googleapis.com/auth/indexing
⚠️ Important: once you have created a service account email, you need to add a user with this email and the "Owner" permission in your Google Search Console: https://search.google.com/search-console/users

Other Nodes
- Gate: Google: Is USE_GOOGLE = true from Config?
- Check status: gets the coverageState and lastCrawlTime of a given URL from Google Search Console
- Loop Over Items: prevents rate limiting
- Switch:
  - Case coverageState = "Submitted and indexed" -> push to the "isNew" node
  - Case coverageState = "Crawled - currently not indexed" -> push to the "URL Updated" node
- Is New: keeps sitemap URLs whose last modification date is AFTER the Google last crawl date. If true, the URL is submitted to the Indexing API; if false, there is no need to push that URL for indexation.
- URL Updates doc: https://developers.google.com/search/apis/indexing-api/v3/using-api#gettinginfo. Endpoint: https://indexing.googleapis.com/v3/urlNotifications:publish. We call the Update URL request (a request sketch appears at the end of this entry).
- Wait: generates a random delay between 0.30 and 1.50 seconds, rounded to 2 decimals.
⚠️ A Google alternative for batch-indexing URLs is to use a premium service to bypass the URL Inspection tool: https://fr.speedyindex.com/

IndexNow auto-submitting documentation: https://www.bing.com/indexnow/getstarted
- Gate: IndexNow: Is USE_INDEXNOW = true from Config?
- Split in Batches: splits into batches of 500 URLs max to avoid rate-limiting issues
- Build IndexNow payload: builds the IndexNow submission payload
- IndexNow Submit: submits the URLs to IndexNow

VariationA: Index orphan pages
- API documentation: https://developer.oncrawl.com/#Data-API
- OQL definition: get orphan pages for both sitemaps & logs
- Merge node: merges items with an inner join on the loc and url fields. This is useful to recover the lastmod of orphan pages referenced in sitemaps; this data can then be passed to the Google node.
- Input 1 should be the "Assign mandatory sitemap fields" node. In the next nodes, update the "Set Node" name in the script variables.

VariationB: Index newly added pages between Crawl 1 and Crawl 2
- API documentation: https://developer.oncrawl.com/#Data-API
- OQL definition: returns indexable canonical pages added in Crawl 2
- Merge node: merges items that match on the loc and url fields.
  This is useful to recover the lastmod data for the Google node.
- Input 1 should be the "Assign mandatory sitemap fields" node. In the next nodes, update the "Set Node" name in the script variables.
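For reference, the Update URL request mentioned in the Google node section is a single POST per URL to the urlNotifications:publish endpoint with the https://www.googleapis.com/auth/indexing scope. A minimal sketch of that request; in the workflow the bearer token is handled by the Google Service Account credential, so the token variable here is a stand-in:

```javascript
// Sketch of the Google Indexing API "Update URL" call made for each submitted URL.
// The OAuth2 access token normally comes from the service account credential in n8n;
// here it is shown as a placeholder environment variable.
const accessToken = process.env.GOOGLE_INDEXING_ACCESS_TOKEN;

const res = await fetch('https://indexing.googleapis.com/v3/urlNotifications:publish', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${accessToken}`,   // scope: https://www.googleapis.com/auth/indexing
  },
  body: JSON.stringify({
    url: 'https://your-site.com/some-updated-page',
    type: 'URL_UPDATED',                      // tells Google the page was added or updated
  }),
});

console.log(res.status, await res.json());    // 200 with urlNotificationMetadata on success
```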