by R4wd0G
## Who's it for
Teams that manage tasks in ClickUp and want those tasks reflected—and kept in sync—in Google Calendar automatically.

## How it works
- A ClickUp Trigger captures task events (create, update, delete).
- For new tasks, the workflow creates a Google Calendar event with the correct start/end.
- It stores a mapping between `clickupTaskId` and `calendarEventId` in a Google Sheet so later updates and deletions can target the right event.
- Multiple lanes (personal/school/tech/internship) let you route tasks to different calendars.

## How to set up
1. Assign ClickUp OAuth, Google Calendar, and Google Sheets credentials to the nodes.
2. Open the Configuration node and fill in:
   - `calendarId_*` for each lane
   - `sheetId` and `sheetTabName` for the mapping sheet
   - (optional) `clickupTeamId`
3. Enable the ClickUp Trigger and run a manual test to validate mapping creation and event syncing.

## Requirements
- ClickUp workspace with OAuth permissions
- Google Calendar & Sheets access
- A Google Sheet for the event↔task mapping

## How to customize the workflow
- Edit the calendar routing in Edit Fields nodes or point them to different `calendarId_*` values.
- Adjust event colors/fields in Google Calendar nodes.
- Extend the mapping sheet with extra columns (e.g., status, labels) as needed.
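As a minimal sketch (not the workflow's actual Code node), the task↔event mapping enables a lookup like the following. The row field names (`clickupTaskId`, `calendarEventId`, `lane`) are assumptions based on the description above:

```javascript
// Rows as a Google Sheets "read" node might return them
const mappingRows = [
  { clickupTaskId: 'abc123', calendarEventId: 'evt_001', lane: 'personal' },
  { clickupTaskId: 'def456', calendarEventId: 'evt_002', lane: 'school' },
];

// For an update/delete event, find the calendar event created for this task.
// A null result means the task was never synced, so a create is needed instead.
function findEventForTask(rows, taskId) {
  const row = rows.find((r) => r.clickupTaskId === taskId);
  return row ? row.calendarEventId : null;
}
```

This is why the mapping sheet matters: without it, an "update" or "delete" event from ClickUp has no way to locate the Google Calendar event it should modify.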
by Ihor Nikolenko
# 😎 Grow Your Telegram Channel Fast (Lead Magnet Gate)

📋 Note: this workflow is not plug-and-play; it requires configuration (see below).

This workflow implements a subscription gate for a Telegram lead magnet campaign. Users must subscribe to a Telegram channel before they can access the lead magnet (e.g., a free resource, discount code, or exclusive content).

### 🇺🇦 Українською
Цей воркфлоу реалізує шлюз підписки для кампанії лід-магніту в Telegram. Користувачі повинні підписатися на канал Telegram, перш ніж зможуть отримати доступ до лід-магніту (наприклад, безкоштовного ресурсу, коду знижки або ексклюзивного контенту).

### 🎥 YouTube Video Integration
- Watch the tutorial in English - UPD: link after workflow approval
- Дивитись Інструкцію Українською - UPD: link after workflow approval
- Sticky Notes: every implementation step includes one.

### 🛠️ Configuration Notes
- **Channel ID**: Replace `inputyourid` with your actual Telegram channel ID (without @), or the `-100...` form for a closed channel
- **Bot Token**: Replace the bot token placeholders with your actual Telegram bot token
- **Lead Magnet**: Update the lead magnet delivery message with your actual file/resource links, webinar link, or discount code
- **Upsell Content**: Customize the upsell/cross-sell content as needed

### 🌍 Bilingual Support
All user-facing messages are provided in both Ukrainian and English to support international audiences:
- Ukrainian text appears first
- English text follows after a line break
- Buttons include both languages where appropriate

### 📈 Use Cases
- Lead generation for Telegram channels
- Content gating for exclusive resources
- Community building through subscription requirements
- Marketing funnel automation

### 🤖 Template Features
✅ **Ready-to-Use Template**: Simply import and configure with your Telegram bot credentials.
### 📚 Comprehensive Documentation
- Visual sticky notes explaining each node's purpose
- Detailed workflow documentation
- Logic explanation notes

### 🧠 Smart Workflow Design
- Efficient data flow with minimal API calls
- Proper error handling and user feedback
- Responsive button interactions
- Conditional routing based on subscription status

### 🚀 Quick Start Guide
1. **Import Workflow**
   - Download the JSON file
   - Import it into your n8n instance (Cloud or self-hosted)
2. **Configure Telegram Credentials**
   - Set up your Telegram bot token in the credentials section
   - Ensure your bot has the necessary permissions
3. **Customize Channel Settings**
   - Replace `inputyourid` with your actual Telegram channel ID
   - Update all placeholder links with your actual resources
4. **Personalize Messages**
   - Modify the lead magnet delivery messages
   - Customize the upsell content
   - Watch the YouTube tutorial links
5. **Test the Workflow**
   - Activate the workflow in your n8n instance
   - Test with a non-subscribed account
   - Verify that subscription verification works correctly
   - Test the upsell sequence with the /ok command (the command can be changed)

### 📄 License
This template is provided as-is for use with the n8n automation platform. Feel free to modify and adapt it to your specific needs.

### 🙋♂️ Support
For issues with this template, please check that:
- All placeholder values have been replaced
- The Telegram bot has proper permissions
- The n8n instance is properly configured
- Internet connectivity is available

https://t.me/nikolenkoclub
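The gate's core decision can be sketched as follows. The workflow performs this check via a Telegram request to `getChatMember`; only the decision logic is shown here, and the response shape follows the Telegram Bot API:

```javascript
// Statuses that count as "subscribed" per the Telegram Bot API;
// 'left' and 'kicked' mean the user is not in the channel.
const SUBSCRIBED_STATUSES = ['creator', 'administrator', 'member'];

// getChatMember returns { ok: true, result: { status: '...' } }
function isSubscribed(getChatMemberResponse) {
  return Boolean(
    getChatMemberResponse &&
      getChatMemberResponse.ok &&
      SUBSCRIBED_STATUSES.includes(getChatMemberResponse.result.status)
  );
}
```

If `isSubscribed` is true the workflow delivers the lead magnet; otherwise it replies with the "please subscribe first" prompt and subscribe button.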
by Jimleuk
# Working with Large Documents in Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service to help AI developers work with documents too large for context windows and runtime memory.

## Prerequisites
You'll need a Subworkflow.ai API key to use the Subworkflow.ai service. Add the API key as a header auth credential. More details in the official docs: https://docs.subworkflow.ai/category/api-reference

## How it works
1. Import your document into your n8n workflow.
2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100MB. Once uploaded, this triggers an Extract job on the service's side, and the response is a "job" record to track progress.
3. Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use the "IF" node looping back onto itself to achieve this in n8n.
4. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems APIs to retrieve whatever you need to complete your AI task. In this example, all pages are retrieved and run through a multimodal LLM to parse into markdown, a well-known approach when parsing data tables or graphics is required.

## How to use
Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents: files over 100MB and up to 5,000 pages.

## Customising the workflow
Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

## Need help?
- **Official API documentation**: https://docs.subworkflow.ai/category/api-reference
- **Join the Discord**: https://discord.gg/RCHeCPJnYw
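The poll-until-done step (the "IF" node looping back onto itself) can be sketched as a plain async loop. `fetchJob` is an assumed helper that calls the Jobs endpoint, and the `status` field values are assumptions; check the official API reference for the exact job schema:

```javascript
// Poll a job until it finishes, with a fixed interval and an attempt cap
// (the cap plays the role of a timeout so the workflow can't loop forever).
async function waitForJob(fetchJob, jobId, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchJob(jobId);
    if (job.status === 'finished') return job;
    if (job.status === 'failed') throw new Error(`Extract job ${jobId} failed`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish within ${maxAttempts} polls`);
}
```

In n8n the same shape is built visually: HTTP Request (get job) → IF (status finished?) → loop back on the false branch, continue on the true branch.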
by Davide
This workflow implements a Retrieval-Augmented Generation (RAG) system using Google Gemini's File Search API. It allows users to upload files to a dedicated search store and then ask questions about their content in a chat interface. The system automatically retrieves relevant information from the uploaded files to provide accurate, context-aware answers.

## Key Advantages

### 1. ✅ Seamless Integration of File Upload + AI Context
The workflow automates the entire lifecycle:
- Upload file
- Index file
- Retrieve content for AI chat

Everything happens inside one n8n automation, without manual actions.

### 2. ✅ Automatic Retrieval for Every User Query
The AI agent is instructed to always query the Search Store. This ensures:
- More accurate answers
- Context-aware responses
- Ability to reference the exact content the user has uploaded

Perfect for knowledge bases, documentation Q&A, internal tools, and support.

### 3. ✅ Reusable Search Store for Multiple Sessions
Once created, the Search Store can be reused:
- Multiple files can be imported
- Many queries can leverage the same indexed data
- A sustainable foundation for scalable RAG operations

### 4. ✅ Visual and Modular Workflow Design
Thanks to n8n's node-based flow:
- Each step is clearly separated
- Easy to debug
- Easy to expand (e.g., adding authentication, connecting to a database, notifications, etc.)

### 5. ✅ Supports Both Form Submission and Chat Messages
The workflow is built with two entry points:
- A form for uploading files
- A chat-triggered entry point for RAG conversations

This means the system can be embedded in multiple user interfaces.

### 6. ✅ Compliant and Efficient Interaction With Gemini APIs
The workflow respects the structure of Gemini's File Search API:
- `/fileSearchStores` (create store)
- upload endpoint
- importFile endpoint
- generateContent with file search tools

This ensures compatibility and future expandability.

### 7. ✅ Memory-Aware Conversations
With the Memory Buffer node, the chat session preserves context across messages, providing a more natural and sophisticated conversational experience.

## How it Works

### STEP 1 - Create a new Search Store
Triggered manually via the "Execute workflow" node, this step sends a request to the Gemini API to create a File Search Store, which acts as a private vector index for your documents. The store name is then saved using a Set node. This store will later be used for file import and retrieval.

### STEP 2 - Upload and import a file into the Search Store
When the form is submitted (through the Form Trigger), the workflow:
1. Accepts a file upload via the form.
2. Uploads the file to Gemini using the `/upload` endpoint.
3. Imports the uploaded file into the Search Store, making it searchable.

This step ensures content is stored, chunked, and indexed so the AI model can retrieve relevant sections later.

### STEP 3 - RAG-enabled Chat with Google Gemini
When a chat message is received:
1. The workflow loads the Search Store identifier.
2. A LangChain Agent is used along with the Google Gemini Chat Model.
3. The model is configured to always use the SearchStore tool, so every user query is enriched by a search inside the indexed files.
4. The system retrieves relevant chunks from your documents and uses them as context for generating more accurate responses.

This creates a fully functioning RAG chatbot powered by Gemini.

## Set up Steps
Before activating this workflow, you must complete the following configuration:

1. **Google Gemini API Credentials**: Ensure you have a valid Google AI Studio API key. This key must be entered in all HTTP Request nodes (Create Store, Upload File, Import to Store, and SearchStore).
2. **Configure the Search Store**: Manually trigger the "Create Store" node once via the "Execute Workflow" button. This will call the Gemini API to create a new File Search Store and return its resource name (e.g., `fileSearchStores/my-store-12345`). Copy this resource name and update the "Get Store" and "Get Store1" Set nodes, replacing the placeholder value `fileSearchStores/my-store-XXX` in both nodes with the actual name of your newly created store.
3. **Deploy Triggers**: For production use, activate the workflow. This will generate public URLs for the "On form submission" node (for file uploads) and the "When chat message received" node (for the chat interface). These URLs can be embedded in your applications (e.g., a website or dashboard).

Once these steps are complete, the workflow is ready. Users can start uploading files via the form and then ask questions about them in the chat.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
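To illustrate STEP 3, a sketch of the request body the SearchStore HTTP node would send to `generateContent` with the File Search tool attached. The exact field names should be verified against the current Gemini File Search docs; treat this shape as an assumption illustrating the structure, not a guaranteed contract:

```javascript
// Build a generateContent body that points the built-in file search tool
// at the store created in STEP 1 (the value saved in the "Get Store" node).
function buildGenerateContentBody(storeName, userQuestion) {
  return {
    contents: [{ role: 'user', parts: [{ text: userQuestion }] }],
    tools: [
      {
        // Assumed field names for the File Search tool configuration
        fileSearch: { fileSearchStoreNames: [storeName] },
      },
    ],
  };
}
```

The key idea is that retrieval is declared as a tool on every request, which is how the agent is forced to "always use the SearchStore tool" for each user query.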
by Uri
## Who is this for?
Teams that collect documents in Dropbox (invoices, receipts, forms) and want instant Slack notifications with AI-extracted structured data. Great for invoice processing, receipt tracking, or any document workflow where team visibility matters.

## What it does
This template contains two connected flows:
- **Scenario 1 - Upload**: Polls a Dropbox folder every minute for new files, filters for recently added documents, downloads them, and uploads them to DocuPipe for AI-powered extraction.
- **Scenario 2 - Notify**: When DocuPipe finishes extracting, the webhook fires, results are fetched, formatted into a readable Slack message with labeled fields, and posted to your channel.

## How to set up
1. Install the DocuPipe community node via Settings > Community Nodes (self-hosted only)
2. Connect your Dropbox account and set the folder path in the List Dropbox Folder node
3. Sign up at docupipe.ai, then get your DocuPipe API key at app.docupipe.ai/settings/general
4. Select an extraction schema in the Upload node
5. Connect your Slack account and select the target channel
6. Activate the workflow

## Requirements
- A DocuPipe account with an API key
- A Dropbox account
- A Slack workspace with bot permissions
- Self-hosted n8n (required for community nodes)

Note: Requires the DocuPipe community node. Install via Settings > Community Nodes.

Categories: Data & Storage, Communication
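The "formatted into a readable Slack message with labeled fields" step in Scenario 2 can be sketched like this. The extraction field names (and the mrkdwn styling) are assumptions; the real fields depend on the DocuPipe schema you selected in the Upload node:

```javascript
// Turn an extraction result into a Slack mrkdwn message with one
// "*Label:* value" line per extracted field.
function toSlackMessage(fileName, fields) {
  const lines = Object.entries(fields).map(
    ([label, value]) => `*${label}:* ${value}`
  );
  return [`New document processed: *${fileName}*`, ...lines].join('\n');
}
```

In the workflow this logic lives between the webhook/fetch step and the Slack "post message" node.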
by Evoort Solutions
# Analyze Webpages with Landing Page Analyzer AI & Generate Google Docs Reports (CRO)

## Description
This workflow integrates the Landing Page Analyzer AI to automatically audit landing pages, format the insights into a conversion-focused report, and save it directly into Google Docs. It leverages the Landing Page Analyzer AI to grade your page, highlight strengths, and suggest improvements, all without manual steps.

## Nodes Explanation
- **On form submission**: Captures the URL of the landing page entered by the user to trigger the workflow. Serves as the entry point to pass the URL to the Landing Page Analyzer AI.
- **WebPage Analyzer (API call via RapidAPI)**: Sends the URL to the Landing Page Analyzer AI for audit data. Retrieves key analytics: grade, score, suggestions, strengths, and conversion metrics.
- **Reformat (Code node)**: Converts the raw JSON from the Landing Page Analyzer AI into structured Markdown. Builds sections for grade, overall score, suggestions, strengths, and score breakdown.
- **Upload to Google Docs**: Inserts the formatted Markdown report into a predefined Google Document. Ensures the audit output from the Landing Page Analyzer AI is saved and shareable.

## Benefits of This Workflow
- **Hands-Free Audits**: Automatically performs a landing page evaluation using the powerful Landing Page Analyzer AI.
- **Consistent, Professional Reports**: Standardized Markdown formatting ensures clarity and readability.
- **Effortless Documentation**: Results are directly stored in Google Docs, no manual copying required.
- **Scalable & Repeatable**: Ideal for continuous optimization across multiple pages or campaigns.

## Use Cases
- **SEO & CRO Agencies**: Quickly generate conversion audit reports using the Landing Page Analyzer AI to optimize client landing pages at scale.
- **Marketing Teams**: Automate weekly or campaign-based auditing of landing pages, with results logged in Google Docs for easy sharing and review.
- **Freelancers & Consultants**: Deliver polished, data-driven conversion reports to clients, powered by the Landing Page Analyzer AI via RapidAPI, without repetitive manual work.
- **Growth Hackers & Product Managers**: Monitor iterations of landing pages over time; each version can be audited automatically and archived in Docs for comparison.

## 🔐 How to Get Your API Key for the Landing Page Analyzer AI API
1. Go to 👉 Landing Page Analyzer AI
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API key will be visible in the "x-rapidapi-key" header.

🔑 Copy and paste this key into the httpRequest node in your workflow.

## Conclusion
This n8n workflow streamlines landing page optimization by leveraging the Landing Page Analyzer AI, transforming raw audit output into insightful, presentation-ready reports in Google Docs. Perfect for teams and individuals focused on data-driven improvements, scalability, and efficiency.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
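A sketch of what the Reformat Code node might do: turn the analyzer's raw JSON into the Markdown sections described above. The input field names (`grade`, `score`, `strengths`, `suggestions`) are assumptions based on this description, not the API's documented schema:

```javascript
// Build a Markdown audit report from the analyzer's JSON response.
function toMarkdownReport(audit) {
  return [
    '# Landing Page Audit',
    `**Grade:** ${audit.grade}`,
    `**Overall score:** ${audit.score}/100`,
    '',
    '## Strengths',
    ...audit.strengths.map((s) => `- ${s}`),
    '',
    '## Suggestions',
    ...audit.suggestions.map((s) => `- ${s}`),
  ].join('\n');
}
```

The resulting string is what the "Upload to Google Docs" node inserts into the predefined document.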
by Evoort Solutions
# Automate YouTube Channel Metadata Extraction to Google Docs

## Description
This workflow leverages the powerful YouTube Metadata API to automatically extract detailed metadata from any YouTube channel URL. It collects information like subscribers, views, keywords, and banners, reformats it for readability, and saves it directly to Google Docs for easy sharing and record-keeping. Ideal for marketers, content creators, and analysts looking to streamline YouTube channel data collection.

By integrating the YouTube Metadata API, this workflow ensures accurate and up-to-date channel insights fetched instantly from the source.

## Node-by-Node Explanation
1. **On form submission**: Triggers the workflow when a user submits a YouTube channel URL via a web form, starting the metadata extraction process.
2. **YouTube Channel Metadata (HTTP Request)**: Calls the YouTube Metadata API with the provided channel URL to retrieve comprehensive channel details like title, subscriber count, and banner images.
3. **Reformat (Code)**: Transforms the raw API response into a clean, formatted string with emojis and markdown styling for easy reading and better presentation.
4. **Add Data in Google Docs**: Appends the formatted channel metadata into a specified Google Docs document, providing a centralized and accessible record of the data.

## Benefits of This Workflow
- **Automated Data Collection**: Eliminates manual effort by automatically extracting YouTube channel data via the YouTube Metadata API.
- **Accurate & Reliable**: Ensures data accuracy by using a trusted API source, keeping metadata current.
- **Improved Organization**: Saves data in Google Docs, allowing for easy sharing, editing, and collaboration.
- **User-Friendly**: A simple form-based trigger lets anyone gather channel info without technical knowledge.
- **Scalable & Flexible**: Can process multiple URLs easily, perfect for marketing or research teams handling numerous channels.

## Use Cases
- **Marketing Teams**: Track competitor YouTube channel stats and trends for strategic planning.
- **Content Creators**: Monitor channel growth metrics and optimize content strategy accordingly.
- **Researchers**: Collect and analyze YouTube channel data for academic or market research projects.
- **Social Media Managers**: Automate reporting by documenting channel performance metrics in Google Docs.
- **Businesses**: Maintain up-to-date records of brand or partner YouTube channels efficiently.

By leveraging the YouTube Metadata API, this workflow provides an efficient, scalable solution to extract and document YouTube channel metadata with minimal manual input.

## 🔑 How to Get Your API Key for the YouTube Metadata API
1. **Visit the API page**: Go to the YouTube Metadata API on RapidAPI.
2. **Sign up/log in**: Create an account or log in if you already have one.
3. **Subscribe to the API**: Click "Subscribe to Test" and choose a plan (free or paid).
4. **Copy your API key**: After subscribing, your API key will be available in the "X-RapidAPI-Key" section under "Endpoints".
5. **Use the key**: Include the key in your API requests like this: `-H "X-RapidAPI-Key: YOUR_API_KEY"`

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
by Evoort Solutions
# SEO On Page API – Complete Guide, Use Cases & Benefits

The SEO On Page API is a powerful tool for keyword research, competitor analysis, backlink insights, and overall SEO optimization. With multiple endpoints, you can instantly gather actionable SEO data without juggling multiple tools. You can explore and subscribe via SEO On Page API.

## 📌 Description
The SEO On Page API on RapidAPI allows you to quickly analyze websites, keywords, backlinks, and competitors — all in one place. Ideal for SEO professionals, marketers, and developers who want fast, accurate, and easy-to-integrate data.

## Node-by-node Overview
- **On form submission** — Shows a web form (field: `website`) and triggers the workflow on submit.
- **Global Storage** — Copies `website` (and optional `country`) into the execution JSON for reuse.
- **Website Traffic Cheker** — POSTs `website` to `webtraffic.php` (RapidAPI) to fetch a traffic summary.
- **Re-Format** — Extracts `data.semrushAPI.trafficSummary[0]` from the traffic API response.
- **Website Traffic** — Appends traffic metrics (visits, users, bounce, etc.) to the "WebSite Traffic" sheet.
- **Website Metrics DA PA** — POSTs `website` to `dapa.php` (RapidAPI) to get DA, PA, spam score, DR, and organic traffic.
- **Re-Format 2** — Pulls the `data` object from the DA/PA API response for clean mapping.
- **DA PA** — Appends DA/PA and related fields into the "DA PA" sheet.
- **Top Baclinks** — POSTs `website` to `backlink.php` (RapidAPI) to retrieve backlink data.
- **Re-Format 3** — Extracts `data.semrushAPI.backlinksOverview` (aggregate backlink metrics).
- **Backlinks Overview** — Appends the overview metrics into the "Backlinks Overview" sheet.
- **Re-Format 4** — Extracts the detailed `data.semrushAPI.backlinks` (individual backlinks list).
- **Backlinks** — Appends each backlink row into the "Backlinks" sheet.
- **Competitors Analysis** — POSTs `website` to `competitor.php` (RapidAPI) to fetch competitor datasets.
- **Re-Format 5** — Flattens all array datasets under `data.semrushAPI` into rows with a dataset label.
- **Competitor Analysis** — Appends the flattened competitor and keyword rows into the "Competitor Analysis" sheet.

## 🚀 Use Cases
- **Keyword Research** – Find high-volume, low-competition keywords for content planning.
- **Competitor Analysis** – Identify competitor strategies and ranking keywords.
- **Backlink Insights** – Discover referring domains and link-building opportunities.
- **Domain Authority Checks** – Evaluate site authority before guest posting or partnerships.
- **Content Optimization** – Improve on-page SEO using actionable data.

## 💡 Benefits
- **One API, Multiple Insights** – No need for multiple SEO tools.
- **Accurate Data** – Get trusted metrics for informed decision-making.
- **Fast Integration** – Simple POST requests for quick setup.
- **Time-Saving** – Automates complex SEO analysis in seconds.
- **Affordable** – Access enterprise-grade SEO insights without breaking the bank.

📍 Start using the **SEO On Page API** today to supercharge your keyword research, backlink tracking, and competitor analysis.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
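The "Re-Format 5" step above can be sketched as follows: flatten every array found under `data.semrushAPI` into one list of rows, tagging each row with the dataset it came from so a single sheet append can handle all of them. The input shape follows the node descriptions and is otherwise an assumption:

```javascript
// Flatten all array-valued datasets into rows labeled with their source.
function flattenDatasets(semrushAPI) {
  const rows = [];
  for (const [dataset, value] of Object.entries(semrushAPI)) {
    if (Array.isArray(value)) {
      for (const item of value) rows.push({ dataset, ...item });
    }
  }
  return rows;
}
```

The `dataset` column is what lets the single "Competitor Analysis" sheet hold rows from several different result lists without losing track of their origin.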
by Growth AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Firecrawl batch scraping to Google Docs

## Who's it for
AI chatbot developers, content managers, and data analysts who need to extract and organize content from multiple web pages for knowledge base creation, competitive analysis, or content migration projects.

## What it does
This workflow automatically scrapes content from a list of URLs and converts each page into a structured Google Doc in markdown format. It's designed for batch processing multiple pages efficiently, making it ideal for building AI knowledge bases, analyzing competitor content, or migrating website content to documentation systems.

## How it works
The workflow follows a systematic scraping process:
1. **URL Input**: Reads a list of URLs from a Google Sheets template
2. **Data Validation**: Filters out empty rows and already-processed URLs
3. **Batch Processing**: Loops through each URL sequentially
4. **Content Extraction**: Uses Firecrawl to scrape and convert content to markdown
5. **Document Creation**: Creates individual Google Docs for each scraped page
6. **Progress Tracking**: Updates the spreadsheet to mark completed URLs
7. **Final Notification**: Provides a completion summary with access to the scraped content

## Requirements
- Firecrawl API key (for web scraping)
- Google Sheets access
- Google Drive access (for document creation)
- Google Sheets template (provided)

## How to set up

### Step 1: Prepare your template
- Copy the Google Sheets template and create your own version for personal use
- Ensure the sheet has a tab named "Page to doc"
- List all URLs you want to scrape in the "URL" column

### Step 2: Configure API credentials
Set up the following credentials in n8n:
- **Firecrawl API**: For web content scraping and markdown conversion
- **Google Sheets OAuth2**: For reading URLs and updating progress
- **Google Drive OAuth2**: For creating content documents

### Step 3: Set up your Google Drive folder
- The workflow saves scraped content to a specific Drive folder
- Default folder: "Contenu scrapé" (Content Scraped), folder ID `1ry3xvQ9UqM2Rf9C4-AoJdg1lfB9inh_5` (customize this to your own folder)
- Create your own folder and update the folder ID in the "Create file markdown scraping" node

### Step 4: Choose your trigger method
- **Option A (chat interface)**: Use the default chat trigger and send your Google Sheets URL through the chat interface
- **Option B (manual trigger)**: Replace the chat trigger with a manual trigger and set the Google Sheets URL as a variable in the "Get URL" node

## How to customize the workflow

### URL source customization
- **Sheet name**: Change "Page to doc" to your preferred tab name
- **Column structure**: Modify field mappings if using different column names
- **URL validation**: Adjust filtering criteria for URL format requirements
- **Batch size**: The workflow processes all URLs sequentially (no batch size limit)

### Scraping configuration
- **Firecrawl options**: Add specific scraping parameters (wait times, JavaScript rendering)
- **Content format**: Currently outputs markdown (can be modified for other formats)
- **Error handling**: The workflow continues processing even if individual URLs fail
- **Retry logic**: Add retry mechanisms for failed scraping attempts

### Output customization
- **Document naming**: Currently uses the URL as the document name (customizable)
- **Folder organization**: Create subfolders for different content types
- **File format**: Switch from Google Docs to other formats (PDF, TXT, etc.)
- **Content structure**: Add headers, metadata, or formatting to scraped content

### Progress tracking enhancements
- **Status columns**: Add more detailed status tracking (failed, retrying, etc.)
- **Metadata capture**: Store scraping timestamps, content length, etc.
- **Error logging**: Track which URLs failed and why
- **Completion statistics**: Generate summary reports of scraping results

## Use cases

### AI knowledge base creation
- **E-commerce product pages**: Scrape product descriptions and specifications for chatbot training
- **Documentation sites**: Convert help articles into structured knowledge base content
- **FAQ pages**: Extract customer service information for automated support systems
- **Company information**: Gather about pages, services, and team information

### Content analysis and migration
- **Competitor research**: Analyze competitor website content and structure
- **Content audits**: Extract existing content for analysis and optimization
- **Website migrations**: Back up content before site redesigns or platform changes
- **SEO analysis**: Gather content for keyword and structure analysis

### Research and documentation
- **Market research**: Collect information from multiple industry sources
- **Academic research**: Gather content from relevant web sources
- **Legal compliance**: Document website terms, policies, and disclaimers
- **Brand monitoring**: Track content changes across multiple sites

## Workflow features

### Smart processing logic
- **Duplicate prevention**: Skips URLs already marked as "Scrapé" (scraped)
- **Empty row filtering**: Automatically ignores rows without URLs
- **Sequential processing**: Handles one URL at a time to avoid rate limiting
- **Progress updates**: Real-time status updates in the source spreadsheet

### Error handling and resilience
- **Graceful failures**: Continues processing remaining URLs if individual scrapes fail
- **Status tracking**: Clear indication of completed vs. pending URLs
- **Completion notification**: Summary message with a link to the scraped content folder
- **Manual restart capability**: Can resume processing from where it left off

## Results interpretation

### Organized content output
Each scraped page creates:
- **Individual Google Doc**: Named with the source URL
- **Markdown formatting**: Clean, structured content extraction
- **Metadata preservation**: Original URL and scraping timestamp
- **Organized storage**: All documents in the designated Google Drive folder

### Progress tracking
The source spreadsheet shows:
- **URL list**: Original URLs to be processed
- **Status column**: "OK" for completed, empty for pending
- **Real-time updates**: Progress visible during workflow execution
- **Completion summary**: Final notification with access instructions

## Workflow limitations
- **Sequential processing**: Processes URLs one at a time (prevents rate limiting but slower for large lists)
- **Google Drive dependency**: Requires Google Drive for document storage
- **Firecrawl rate limits**: Subject to Firecrawl API limitations and quotas
- **Single format output**: Currently outputs only Google Docs (easily customizable)
- **Manual setup**: Requires Google Sheets template preparation before use
- **No content deduplication**: Creates separate documents even for similar content
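The duplicate-prevention and empty-row filtering described above can be sketched as a single filter. The column names (`URL`, `Status`) match the template description, though the exact casing and status values ("OK", "Scrapé") are assumptions:

```javascript
// Keep only rows that have a URL and are not already marked as processed.
function pendingUrls(rows) {
  return rows.filter((row) => {
    const url = (row.URL || '').trim();
    const status = (row.Status || '').trim().toLowerCase();
    return url !== '' && status !== 'ok' && status !== 'scrapé';
  });
}
```

This is also what makes the manual-restart capability work: re-running the workflow only picks up rows whose status column is still empty.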
by Evoort Solutions
# 🚀 Automated Keyword Analysis with On-Page SEO Workflow

## 📌 Description
Boost your SEO strategy by automating keyword research and on-page SEO analysis with n8n. This workflow takes user input (keyword + country), retrieves essential data using the powerful SEO On-Page API, and saves it directly into Google Sheets. Ideal for marketers, content strategists, and SEO agencies looking for efficiency.

## 🔁 Node-by-Node Flow Explanation
1. **🟢 On form submission**: Triggers the workflow when a user submits a keyword and country via a simple form.
2. **📦 Global Storage**: Captures and stores the submitted keyword and country for use across the workflow.
3. **🌍 Keyword Insights Request**: Sends a POST request to the SEO On-Page API to fetch keyword suggestions (broad match keywords).
4. **🧾 Re-Format**: Extracts the relevant `broadMatchKeywords` array from the keyword API response.
5. **📊 Keyword Insights**: Appends the extracted keyword suggestions into the "Keyword Insights" tab in Google Sheets.
6. **📉 KeyWord Difficulty Request**: Sends a second POST request to the SEO On-Page API to fetch keyword difficulty and SERP data.
7. **📈 Re-Format 2**: Extracts the `keywordDifficultyIndex` value from the API response.
8. **📄 KeyWord Difficulty**: Saves the keyword difficulty score into the "KeyWord Difficulty" sheet for reference.
9. **🔍 Re -Format 5**: Extracts the SERP result data from the difficulty API response.
10. **🗂️ SERP Result**: Appends the detailed SERP data into the "Serp Analytics" sheet in Google Sheets.

## 🎯 Benefits
- ✅ **Fully Automated SEO Research**: No manual data entry or API calls required.
- 🔁 **Real-time Data Collection**: Powered by the SEO On-Page API on RapidAPI, ensuring fresh and reliable results.
- 📊 **Organized Insights**: Data is cleanly categorized into separate Google Sheets tabs.
- ⏱️ **Time Saver**: Instantly analyze keywords without switching between tools.

## 💡 Use Cases
- 📌 **SEO Agencies**: Generate keyword reports for clients automatically.
- 📝 **Content Writers**: Discover keyword difficulty and SERP competition before drafting.
- 🧑💻 **Digital Marketers**: Monitor keyword trends and search visibility in real time.
- 📈 **Bloggers & Influencers**: Choose better keywords to rank faster on search engines.

## 🔗 API Reference
This workflow is powered by the SEO On-Page API available on RapidAPI. It offers keyword research, difficulty metrics, and SERP analytics through simple endpoints, making it ideal for automation with n8n.

> ⚠️ Note: Make sure to replace "your key" with your actual RapidAPI key in both HTTP Request nodes for successful API calls.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
by Evoort Solutions
🔍 Analyze Competitor Keywords with RapidAPI and Google Sheets Reporting 📄 Description This n8n workflow streamlines the process of analyzing SEO competitor keywords using the Competitor Keyword Analysis API on RapidAPI. It collects a website and country via form submission, calls the API to retrieve keyword metrics, reformats the response, and logs the results into Google Sheets — all automatically. It is ideal for SEO analysts, marketing teams, and agencies who need a hands-free solution for competitive keyword insights. 🧩 Node-by-Node Explanation 📝 On form submission (formTrigger) Starts the workflow when a user submits their website and country through a form. 🌐 Competitor Keyword Analysis (httpRequest) Sends a POST request to the Competitor Keyword Analysis API on RapidAPI with form input to fetch keyword data. 🔄 Reformat Code (code) Extracts the domainOrganicSearchKeywords array from the API response for structured processing. 📊 Google Sheets (googleSheets) Appends the cleaned keyword metrics into a Google Sheet for easy viewing and tracking. 🚀 Benefits of This Workflow ✅ Automates SEO research using the Competitor Keyword Analysis API. ✅ Eliminates manual data entry — results go straight into Google Sheets. ✅ Scalable and reusable for any number of websites or countries. ✅ Reformatting logic is built-in, so you get clean, analysis-ready data. 💼 Use Cases Marketing Agencies Use the Competitor Keyword Analysis API to gather insights for client websites and store the results automatically. In-house SEO Teams Quickly compare keyword performance across competitors and monitor shifts over time with historical Google Sheets logs. Freelancers and Consultants Provide fast, data-backed SEO reports using this automation with the Competitor Keyword Analysis API. Keyword Research Automation Make this flow part of a larger system for identifying keyword gaps, content opportunities, or campaign ideas. 
📁 Output Example (Google Sheets)

| keyword | searchVolume | cpc | competition | position | previousPosition | keywordDifficulty |
|---------------|--------------|-----|-------------|----------|------------------|-------------------|
| best laptops | 9900 | 2.3 | 0.87 | 5 | 7 | 55 |

🔐 How to Get Your API Key for the Competitor Keyword Analysis API
Go to 👉 Competitor Keyword Analysis API - RapidAPI
Click "Subscribe to Test" (you may need to sign up or log in).
Choose a pricing plan (there’s a free tier for testing).
After subscribing, click on the "Endpoints" tab.
Your API Key will be visible in the "x-rapidapi-key" header.
🔑 Copy and paste this key into the httpRequest node in your workflow.

✅ Summary
This workflow is a powerful no-code automation tool that leverages the Competitor Keyword Analysis API on RapidAPI to deliver real-time SEO insights directly to Google Sheets — saving time, boosting efficiency, and enabling smarter keyword strategy decisions.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
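The key goes into a request header, as the Endpoints tab shows. As a hedged standalone sketch of what the httpRequest node sends (the host, endpoint path, and body field names below are placeholders, not the API's real values; copy the exact ones from the Endpoints tab on RapidAPI):

```javascript
// Sketch of the HTTP Request node settings expressed as fetch options.
// Host, path, and body fields are placeholders: take the real values
// from the API's "Endpoints" tab on RapidAPI.
const RAPIDAPI_KEY = "YOUR_API_KEY"; // paste your key here

const options = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-rapidapi-key": RAPIDAPI_KEY,
    // placeholder host, check the Endpoints tab:
    "x-rapidapi-host": "competitor-keyword-analysis.p.rapidapi.com",
  },
  body: JSON.stringify({ website: "example.com", country: "us" }),
};

// The workflow's httpRequest node sends this for you; standalone you
// could run (Node 18+):
// const data = await fetch("https://<host>/<endpoint>", options)
//   .then((r) => r.json());
console.log(options.headers["x-rapidapi-key"]);
```

In the workflow itself, the two header values are the only fields you need to edit; the form submission supplies the website and country.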
by Evoort Solutions
📊 GST Data Analytics Automation Flow with Google Docs Reporting

Description:
Streamline GST data collection, analysis, and automated reporting using the GST Insights API and Google Docs integration. This workflow allows businesses to automate the extraction of GST data and directly generate formatted reports in Google Docs, making compliance easier.

⚙️ Node-by-Node Explanation
On form submission – Triggers the automation whenever a user submits GST-related data (like a GSTIN) via a web form. It collects all necessary input for further processing in the workflow.
Fetch GST Data Using GST Insights API – Sends a request to the GST Insights API to fetch GST data based on the user's input. This is done via a POST request that includes the required authentication and the inputted GSTIN.
Data Reformatting – Processes and structures the raw GST data received from the API. The reformatting ensures only the essential information (e.g., tax summaries, payment status) is extracted for reporting.
Google Docs Reporting – Generates a Google Docs document and auto-populates it with the reformatted GST data. The report is structured in a clean format, ready for sharing or downloading.

💡 Use Cases
**Tax Consultants & Agencies:** Automate the GST insights and reporting process for clients by extracting key metrics directly from the GST Insights API.
**Accountants & Auditors:** Streamline GST compliance by generating automated reports based on the most current data from the API.
**E-commerce Platforms:** Automatically track GST payments, returns, and summaries for each sale and consolidate them into structured reports.
**SMEs and Startups:** Track your GST status and compliance without the need for manual intervention. Generate reports directly within Google Docs for easy access.

🎯 Benefits of This Workflow
**Automated GST Data Collection:** Fetch GST insights directly using the **GST Insights API** without manually searching through different resources.
**Google Docs Integration:** Automatically generate customized Google Docs reports with detailed GST data, making the reporting process efficient.
**Error-Free Data Analysis:** Automates data extraction and reporting, significantly reducing the risk of human error.
**Customizable Reporting:** Customize the flow for various GST-related data such as payments, returns, and summaries.
**Centralized Document Storage:** All GST reports are saved and managed within Google Docs, ensuring easy collaboration and access.

Quick Note: The GST Insights API provides detailed GST data analysis for Indian businesses. It can extract crucial data like returns, payments, and summaries directly from the GST system, which you can then use for compliance and reporting.

🔑 How to Get Your API Key for GST Insights API
Visit the API Page: Go to the GST Insights API on RapidAPI.
Sign Up/Login: Create an account or log in if you already have one.
Subscribe to the API: Click "Subscribe to Test" and choose a plan (free or paid).
Copy Your API Key: After subscribing, your API Key will be available in the "X-RapidAPI-Key" section under "Endpoints".
Use the Key: Include the key in your API requests like this: `-H "X-RapidAPI-Key: YOUR_API_KEY"`
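Putting the pieces together, the Data Reformatting step described earlier can be sketched as a small Code-node transform. Everything here is an assumption for illustration: the response field names (gstin, legalName, returns) and the sample values are invented, so adjust them to the actual GST Insights API response before use.

```javascript
// Standalone sketch of the "Data Reformatting" step. Inside n8n you
// would read the response from $input.first().json; a sample stands
// in for it here, and all field names are hypothetical.
const apiResponse = {
  gstin: "22AAAAA0000A1Z5",
  legalName: "Example Pvt Ltd",
  returns: [{ period: "2024-03", status: "Filed" }],
};

// Keep only the fields the Google Docs report template needs.
const report = {
  gstin: apiResponse.gstin,
  legalName: apiResponse.legalName,
  lastReturnPeriod: apiResponse.returns[0]?.period,
  lastReturnStatus: apiResponse.returns[0]?.status,
};

console.log(report.lastReturnStatus);
```

The Google Docs node can then reference these flattened fields (e.g., report.lastReturnStatus) as placeholders in the document template.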