by InfraNodus
Optimize Your Top Performing Website Content with Google Analytics, Firecrawl, and InfraNodus

This template helps you:
- extract the top performing pages from your website using Google Analytics
- scrape the content of those pages using the Firecrawl API (HTTP node provided)
- build a knowledge graph for all these pages, with the topics and gaps identified using InfraNodus
- understand the main concepts and topical clusters in your top-performing content, so you can create more of it, while also identifying the content gaps — structural holes between the topics that you can use to generate new content ideas
- get access to a knowledge graph visualization of your top-performing content and explore it using the interactive network interface

How it works
This template uses InfraNodus to visualize and analyze your top performing content. It extracts the top pages from the Google Analytics data for the website you choose and scrapes their text content using the high-quality Firecrawl API. It then ingests every page into an InfraNodus graph that you specify. The graph can be used to explore the content visually. The insights from the graph, such as the main topics and the gaps between them, are shown to you at the end of the workflow. You can use these insights to:
- understand what kind of content you should focus on creating to get the highest number of views and to establish topical authority in your area, which is good for SEO and LLM optimization — focus on the topics identified in the top content
- discover the content gaps — topics that are not yet connected and that you could link together with new content ideas you publish. This caters to your audience's interests but connects your existing ideas in a new way, so you deliver content that is relevant but also novel.

Here's a step-by-step description.
Note: you can replace the PDF to Text converter node with a better-quality PDF converter from ConvertAPI, which respects the original file layout and doesn't split the text into small chunks.
1. Trigger the workflow
2. Extract a list of the top (25 or 50) pages from your Google Analytics account (you'll need to connect it via the Google Cloud API)
3. Fix the extracted data and add the correct URL prefix to each page (if your Analytics property uses relative paths only)
4. Loop through each extracted page
5. Extract the text content of every page using the high-quality Firecrawl API (a sample request body is shown at the end of this description)
6. Ingest the text content into the InfraNodus graph you specify
7. Once all the pages are ingested into the InfraNodus graph, access the InfraNodus AI insights endpoint and get information about the main topics and gaps
8. Display this information to the user

How to use
You need an InfraNodus API account and key to use this workflow.
- Create an InfraNodus account
- Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes

Requirements
- An InfraNodus account and API key
- Optional: a Google Analytics account for your property (alternatively, you can modify this workflow to provide a list of the most popular pages)
- Optional: Google Cloud API access (to access the data from your Google Analytics account — follow the n8n instructions)
- Optional: a Firecrawl API key for better-quality web page scraping (otherwise, use the standard HTTP to Text node from n8n)

Customizing this workflow
You can customize this workflow by using a list of the page URLs you want to analyze from a Google Sheet. Alternatively, you can use the Google SERP node to extract the top search results for a query and get the main topics for them.

For support and feedback, please contact us at https://support.noduslabs.com
To learn more about InfraNodus: https://infranodus.com
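For reference, if you build the scraping step yourself with an HTTP Request node, a minimal Firecrawl request body might look like the sketch below. This assumes the v1 /scrape endpoint with Bearer authentication and the markdown output format; adapt the URL expression to where your workflow stores each page URL.

```json
{
  "url": "{{ $json.pageUrl }}",
  "formats": ["markdown"]
}
```

Send this as a POST request to https://api.firecrawl.dev/v1/scrape with your Firecrawl API key in the Authorization header.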
by Stefan
Track n8n Node Definitions from GitHub and Export to Google Sheets

Overview
This workflow automatically retrieves and processes metadata from the official n8n GitHub repository, filters all available .node.json files, parses their structure, and appends structured information to a Google Sheet. Perfect for developers, community managers, and technical writers who need to maintain up-to-date information about n8n's evolving node ecosystem.

Setup Instructions

Prerequisites
Before setting up this workflow, ensure you have:
- A GitHub account with API access
- A Google account with Google Sheets access
- An active n8n instance (cloud or self-hosted)

Step 1: GitHub API Configuration
1. Navigate to GitHub Settings → Developer Settings → Personal Access Tokens
2. Generate a new token with public_repo permissions
3. Copy the generated token and store it securely
4. In n8n, create a new "GitHub API" credential
5. Paste your token in the credential configuration and save

Step 2: Google Sheets Setup
1. Create a new Google Sheets document
2. Set up the following column headers in the first row:
   - node (Column A) - Node identifier/name
   - nodeVersion (Column B) - Version of the node
   - codexVersion (Column C) - Codex version number
   - categories (Column D) - Node categories
   - credentialDocumentation (Column E) - Credential documentation URL
   - primaryDocumentation (Column F) - Primary documentation URL
3. Note down the Google Sheets document ID from the URL
4. Configure Google Sheets OAuth2 credentials in n8n

Step 3: Workflow Configuration
1. Import the workflow into your n8n instance
2. Update the following placeholder values:
   - Replace YOUR_GOOGLE_SHEETS_DOCUMENT_ID with your actual document ID
   - Replace YOUR_WEBHOOK_ID if using webhook functionality
3. Configure the GitHub API credentials in the HTTP Request nodes
4. Set up Google Sheets credentials in the Google Sheets nodes
5. Share your Google Sheets document with the email address associated with your Google OAuth2 credentials
6. Grant "Editor" permissions to allow the workflow to write data

Google Sheets Template Details
The workflow creates a structured dataset with these columns:
- node: Node identifier (e.g., n8n-nodes-base.slack)
- nodeVersion: Version of the node (e.g., 1.0.0)
- codexVersion: Codex version number (e.g., 1.0.0)
- categories: Node categories (e.g., Communication, Productivity)
- credentialDocumentation: URL to credential documentation
- primaryDocumentation: URL to primary node documentation

Customization Options

Modifying Data Extraction
You can customize the "Format Data" node to extract additional fields:
- Add new assignments in the Set node
- Modify the column mapping in the Google Sheets node
- Update your spreadsheet headers accordingly

Changing Update Frequency
To run this workflow on a schedule:
- Replace the Manual Trigger with a Cron node
- Set your desired schedule (e.g., daily, weekly)
- Configure appropriate timing to avoid API rate limits

Adding Filters
Customize the "Filter Node Files" code node (a minimal sketch is shown at the end of this description) to:
- Filter specific node types
- Include/exclude certain categories
- Process only recently updated nodes

Features
- Fetches all node definitions from the n8n-io/n8n repository
- Filters for .node.json files only
- Downloads and parses metadata automatically
- Extracts key fields like node names, versions, categories, and documentation URLs
- Appends structured data to Google Sheets with batch processing
- Includes error handling and retry mechanisms
- Clears existing data before appending new information for fresh results

Use Cases
This workflow is ideal for:
- Tracking changes in official n8n node definitions over time
- Auditing node categories and documentation links for completeness
- Building custom dashboards from node metadata
- Community management and documentation maintenance
- Integration planning and compatibility analysis
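As a reference for the filtering step, here is a minimal sketch of what a "Filter Node Files" Code node could look like. It assumes the previous HTTP Request node returned the GitHub git/trees response for the n8n-io/n8n repository (fetched with recursive=1); the node name and response shape are assumptions, not the template's exact implementation.

```javascript
// Filter the repository tree down to *.node.json definition files.
const tree = $input.first().json.tree || [];

const nodeFiles = tree.filter(
  (entry) => entry.type === 'blob' && entry.path.endsWith('.node.json')
);

// Emit one n8n item per file so the next node can download each definition.
return nodeFiles.map((entry) => ({
  json: { path: entry.path, url: entry.url, sha: entry.sha },
}));
```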
by HoangSP
Name: AI-Powered Research Agent using Perplexity Sonar

Description
This workflow acts as an AI-powered research assistant using the Perplexity Sonar model. When triggered by another workflow, it sends a user-defined prompt to the Perplexity API to retrieve up-to-date search results. The response is then parsed into a clean format for downstream processing.

How it Works
1. Trigger: Activated from another workflow via the Execute Workflow Trigger.
2. Prompt Setup: Sets a system role message and the user query dynamically.
3. API Call: Sends a POST request to Perplexity's /chat/completions endpoint with your credentials (a sample request body is shown below).
4. Response Handling: Extracts the message content from the API response.
5. Output: Returns the result, ready for display or further processing.

Requirements
- A Perplexity AI API key
- Authentication set up via Header Auth with a Bearer token
- An n8n instance that allows outbound HTTP requests

Customization Tips
- Modify the system prompt to suit your research domain
- Chain this workflow with other automations such as blog creation or summaries
- Replace the output handling logic to fit Google Sheets, Notion, or Telegram
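A minimal sketch of the request body sent to https://api.perplexity.ai/chat/completions is shown below. The model name, system prompt, and the query expression are placeholders to adapt to your setup; the endpoint expects the usual chat-completions message format.

```json
{
  "model": "sonar",
  "messages": [
    { "role": "system", "content": "You are a research assistant. Answer with up-to-date, sourced information." },
    { "role": "user", "content": "{{ $json.query }}" }
  ]
}
```

The assistant's answer is then available at choices[0].message.content in the API response, which is what the response-handling step extracts.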
by Artem Boiko
Revit to HTML Quantity Takeoff Generator
Automates the extraction of wall quantities from Revit models and creates a professional, interactive HTML report.

Key Features
- Automated wall quantity analysis
- Calculates volumes by wall type ("Type Name"); see the sketch below
- Generates an interactive HTML QTO report
- Includes summary statistics: total elements, total and average volumes
- Provides a detailed breakdown by element type

How it works
1. Upload a Revit file as input
2. The workflow extracts wall quantities and types
3. It creates and saves a ready-to-share HTML dashboard with QTO data

No API keys required. Runs offline. The output is a professional, ready-to-use HTML report.
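For illustration, the grouping step could be sketched as the Code-node logic below. It assumes each incoming item carries a wall element with "Type Name" and "Volume" fields extracted from the Revit file; the field names are assumptions, not the template's exact schema.

```javascript
// Aggregate wall volumes by type and compute summary statistics.
const walls = $input.all().map((item) => item.json);

const volumesByType = {};
for (const wall of walls) {
  const type = wall['Type Name'] || 'Unknown';
  volumesByType[type] = (volumesByType[type] || 0) + Number(wall.Volume || 0);
}

const totalVolume = Object.values(volumesByType).reduce((sum, v) => sum + v, 0);

return [{
  json: {
    totalElements: walls.length,
    totalVolume,
    averageVolume: walls.length ? totalVolume / walls.length : 0,
    volumesByType,
  },
}];
```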
by IvanCore
Disclaimer: This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Important distinction: This template manages Telegram Copilot's UserBots (client accounts), not Telegram Bots.

UserBot vs. Bot: Key Differences
🔹 Telegram Copilot's UserBots
- Authenticate as real user accounts (phone number required)
- Can join groups/channels without the "Bot" label
- Subject to Telegram's client API limits
- Require manual login (MFA supported)
🔹 Telegram Bots
- Use @BotFather-created tokens
- Limited to bot API functionality
- Can't initiate chats with users who haven't contacted them first
- No phone number required

This template solves the unique challenges of UserBot management through:

Core Functionality
🛡️ Session Reliability
- Automatic crash recovery (5-step restart sequence)
- Persistent session monitoring (checks every 6 hours)
- Database cleanup via the /clear command
📱 Multi-Device Support
- Manages sessions independently from mobile clients
- Tracks active devices via the /stat command
- Isolates session data per credential
🔔 Smart Notifications
- Real-time alerts to an admin chat
- Detailed error context with authState snapshots
- Success confirmations with session metadata

Setup Guide

Prerequisites
- Self-hosted n8n instance (community node required)
- Valid Telegram account for the UserBot
- Telegram bot token for notifications
- TelePilot credentials with api_id/api_hash

Configuration Steps
1. Credential Setup
   - Add TelePilot credentials in n8n
   - Configure the Telegram bot token in the notification nodes
   - Set the admin chat ID for alerts
2. Monitoring Customization
   - Adjust the check frequency in the Schedule Trigger
   - Modify alert thresholds in the Filter nodes
   - Configure retry logic in the recovery sequence
3. Session Management
   - Test the /start command flow
   - Verify the /stat output format
   - Confirm notification delivery

Workflow Customization
Advanced Options
- Add secondary notification channels (Email, Slack)
- Implement an escalating alert system
- Integrate with monitoring dashboards
- Customize recovery attempt limits

Compliance Notes
- UserBots must comply with Telegram's Terms of Service
- Not intended for bulk messaging or spam
- Recommended for legitimate automation use cases

Why This Matters: UserBots enable automation scenarios that are impossible with regular bots (e.g., group management as a normal user, reacting as a human account). This workflow keeps them reliably online 24/7.
by simonscrapes
Use Case
Automate image replacement in Google Docs:
- You need to update document images dynamically
- You want to create multiple versions of a template with different images
- You need to batch-process document images from a URL database
- You want to generate shareable documents with custom images

What this Workflow Does
The workflow automates image replacement in Google Docs:
- Accepts image URLs from your database
- Finds and replaces images in template documents (a sample API request is shown at the end of this description)
- Creates new document copies with updated images
- Optionally converts to PDF and makes documents shareable

Setup
1. Connect your image URL database (the column name must be "url")
2. Set up Google Docs OAuth2 API credentials
3. Optional: Create a template document in Google Drive with placeholder images
4. Optional: Configure Google Drive authentication for additional features

How to Adjust it to Your Needs
- Remove template copying for single-document processing
- Adjust the image ID selection for documents with multiple images
- Configure sharing settings and download formats
- Customize file naming and storage location

More templates and n8n workflows >>> @simonscrapes
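For context, replacing an image in a Google Doc maps to a documents.batchUpdate call with a replaceImage request, roughly as sketched below. The imageObjectId is a hypothetical placeholder; in practice you use the object ID of the placeholder image in your template document, and the URI points to the new image from your database.

```json
{
  "requests": [
    {
      "replaceImage": {
        "imageObjectId": "kix.examplePlaceholderImageId",
        "uri": "https://example.com/new-image.png",
        "imageReplaceMethod": "CENTER_CROP"
      }
    }
  ]
}
```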
by Humble Turtle
Manage Jira Issues with Natural Language via Telegram and GPT-4o

Overview
The Jira Agent is an AI-powered assistant that lets users interact with Jira directly through Telegram. It leverages OpenAI's GPT-4o model to interpret natural language commands and perform various Jira-related actions. On Telegram, it enables users to create Jira stories by triggering a guided form when prompted with "create story". It also provides more extensive functionality, including creating, updating, searching, and transitioning Jira issues through natural language commands.

How it works
Normal interaction: send messages such as "Please give all my issues".
Standardized process for creating stories:
1. Message: "create story"
2. Open the form that Telegram sends back to you
3. Fill in the essential story information in the form
4. The story is automatically created in your backlog (an example payload is shown at the end of this description)

Required Connections
To use the Jira Agent effectively, you need:
- A Telegram account. Telegram setup involves deploying the bot and starting a chat; story creation is triggered with a simple text command.
- A connected Jira workspace
- Permissions to create and modify Jira issues
- Access to a GPT-4o API key
Detailed configuration instructions are provided in the workflow.

Setup Time
Under 15 minutes

Customising this workflow
- Try adding more details to the form for more complete Jira ticket creation
- Try connecting a Google Calendar node to plan your work
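To make the story-creation step concrete, the issue that gets created corresponds roughly to the Jira REST API payload sketched below. The project key and field values are placeholders (the n8n Jira node sets these fields for you from the form answers); this is an illustration of the mapping, not the template's exact implementation.

```json
{
  "fields": {
    "project": { "key": "PROJ" },
    "summary": "Story title taken from the Telegram form",
    "description": "Story details taken from the Telegram form",
    "issuetype": { "name": "Story" }
  }
}
```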
by darrell_tw
How it works
1. Receive a chat input as an image prompt.
2. Call OpenAI's gpt-image-1 API to generate an image (a sample request body is shown below).
3. Split the returned images and process them one by one.
4. Upload each generated image to Google Drive.
5. Save the image links and thumbnails to a Google Sheets document.
6. Record token usage and estimated cost in a separate sheet.

Set up steps
1. Connect your OpenAI API credentials for image generation.
2. Connect your Google Drive and Google Sheets accounts.
3. Set the destination folder in Google Drive.
4. Set the target Google Sheet and specify the correct sheet tabs.
Setup usually takes around 5-10 minutes. Detailed field mappings are already pre-configured inside the workflow, and additional tips and instructions are included as sticky notes inside the workflow.

Google Sheet copy URL: Copy Sheet Link
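A minimal sketch of the request body sent to OpenAI's image generation endpoint (https://api.openai.com/v1/images/generations) is shown below. The prompt expression and size are assumptions to adapt to your chat trigger and preferences.

```json
{
  "model": "gpt-image-1",
  "prompt": "{{ $json.chatInput }}",
  "n": 1,
  "size": "1024x1024"
}
```

gpt-image-1 returns the generated images as base64 data, which is why the workflow splits and uploads each image to Google Drive before saving the links to Google Sheets.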
by Matthieu
Search LinkedIn companies and add them to Airtable CRM

Who is this for?
This template is ideal for sales teams, business development professionals, and marketers looking to build a robust prospect database without manual LinkedIn research. Perfect for agencies, consultants, and B2B companies targeting specific business profiles.

What problem does this workflow solve?
Manually researching companies on LinkedIn and adding them to your CRM is time-consuming and error-prone. This automation eliminates the tedious process of finding, qualifying, and importing prospects into your database.

What this workflow does
This workflow automatically searches for companies on LinkedIn based on your criteria (keywords, size, location), retrieves detailed information about each company, filters them based on quality indicators (follower count and website availability), and adds new companies to your Airtable CRM while preventing duplicates.

Setup
1. Create a Ghost Genius API account and get your API key
2. Configure the HTTP Request nodes with Header Auth credentials (Name: "Authorization", Value: "Bearer your_api_key")
3. Create an Airtable base named "CRM" with columns: name, website, LinkedIn, id, etc.
4. Set up your Airtable credentials following the n8n documentation
5. Add your company search criteria to the "Set Variables" node

How to customize this workflow
- Modify the search parameters in the "Set Variables" node to target different industries, locations, or company sizes
- Adjust the follower count threshold in the "Filter Valid Companies" node based on your qualification criteria (see the sketch below)
- Customize the Airtable field mapping in the "Add Company to CRM" node to match your database structure
- Add notification nodes (Slack, Email) to alert you when new companies are added
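A minimal sketch of the qualification logic is shown below as a Code-node filter. The field names (followers_count, website) and threshold are assumptions; adapt them to the fields actually returned by the Ghost Genius API and your own criteria.

```javascript
// Keep only companies with enough followers and a website.
const MIN_FOLLOWERS = 200; // adjust to your qualification criteria

return $input.all().filter((item) => {
  const company = item.json;
  return Number(company.followers_count || 0) >= MIN_FOLLOWERS
    && Boolean(company.website);
});
```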
by Leandro Melo
Keep your Hostinger VPS servers secure with automated backups! This n8n (self-hosted) workflow is designed to create daily snapshots and send server metrics effortlessly, ensuring you always have an up-to-date recovery copy.

Key Features
✅ Automated Snapshots: daily execution with zero manual intervention.
✅ Smart Replacement: Hostinger allows only 1 snapshot per VPS—the workflow automatically replaces the previous one.
✅ Notifications: alerts via WhatsApp (Evolution API) or other configurable channels for execution confirmation (a sketch of the message formatting is shown at the end of this description).

Quick Setup
Prerequisites:
1. Install the community nodes n8n-nodes-hostinger-api and n8n-nodes-evolution-api in your n8n instance.
2. Generate a Hostinger API key in their dashboard: hpanel.hostinger.com/profile/api.
Workflow Configuration:
1. Add the Hostinger API credential in the first node and reuse it across the workflow.
2. Customize the schedule (e.g., daily at 2 AM) and notification method (Evolution API for WhatsApp, email, etc.).

Important Note: Hostinger overwrites the previous snapshot with each new execution, keeping only the latest version.

VPS metrics available (sent in messages):
🔹 Status: snapshot status
🔹 Date: snapshot date and time
🔹 Server: server name
🔹 IP: external server IP
⚙️ Metrics:
🔹 Number of vCPUs
🔹 RAM usage / available
🔹 Hard disk usage / available
🔹 Operating system and version
🔹 Uptime (days, hours)
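As an illustration of the notification step, a Code node could assemble the WhatsApp message from the snapshot and VPS data along these lines. The field names are assumptions; map them to whatever the Hostinger API node actually returns in your workflow.

```javascript
// Build a plain-text status message from the snapshot/VPS metrics.
const vps = $input.first().json;

const message = [
  `🔹 Status: ${vps.snapshotStatus}`,
  `🔹 Date: ${vps.snapshotCreatedAt}`,
  `🔹 Server: ${vps.hostname}`,
  `🔹 IP: ${vps.ipv4}`,
  `⚙️ vCPUs: ${vps.cpus} | RAM: ${vps.memory} MB | Disk: ${vps.disk} GB`,
  `🔹 OS: ${vps.osName} | Uptime: ${vps.uptimeDays} days`,
].join('\n');

return [{ json: { message } }];
```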
by Custom Workflows AI
Introduction
The Content SEO Audit Workflow is a powerful automated solution that generates comprehensive SEO audit reports for websites. By combining the crawling capabilities of DataForSEO with the search performance metrics from Google Search Console, this workflow delivers actionable insights into content quality, technical SEO issues, and performance optimization opportunities. The workflow crawls up to 1,000 pages of a website, analyzes various SEO factors including metadata, content quality, internal linking, and search performance, and then generates a professional, branded HTML report that can be shared directly with clients. The entire process is automated, transforming what would typically be hours of manual analysis into a streamlined workflow that produces consistent, thorough results. This workflow bridges the gap between technical SEO auditing and practical, client-ready deliverables, making it an invaluable tool for SEO professionals and digital marketing agencies.

Who is this for?
This workflow is designed for SEO consultants, digital marketing agencies, and content strategists who need to perform comprehensive content audits for clients or their own websites. It's particularly valuable for professionals who:
- Regularly conduct SEO audits as part of their service offerings
- Need to provide branded, professional reports to clients
- Want to automate the time-consuming process of content analysis
- Require data-driven insights to inform content strategy decisions
Users should have basic familiarity with SEO concepts and metrics, as well as a basic understanding of how to set up API credentials in n8n. While no coding knowledge is required to run the workflow, users should be comfortable with configuring workflow parameters and following setup instructions.

What problem is this workflow solving?
Content audits are essential for SEO strategy but are traditionally labor-intensive and time-consuming. This workflow addresses several key challenges:
- Manual Data Collection: Gathering data from multiple sources (crawlers, Google Search Console, etc.) typically requires hours of work. This workflow automates the entire data collection process.
- Inconsistent Analysis: Manual audits can suffer from inconsistency in methodology. This workflow applies the same comprehensive analysis criteria to every page, ensuring thorough and consistent results.
- Report Generation: Creating professional, client-ready reports often requires additional design work after the analysis is complete. This workflow generates a fully branded HTML report automatically.
- Data Integration: Correlating technical SEO issues with actual search performance metrics is difficult when working with separate tools. This workflow seamlessly integrates crawl data with Google Search Console metrics.
- Scale Limitations: Manual audits become increasingly difficult with larger websites. This workflow can efficiently process up to 1,000 pages without additional effort.

What this workflow does

Overview
The Content SEO Audit Workflow crawls a specified website, analyzes its content for various SEO issues, retrieves performance data from Google Search Console, and generates a comprehensive HTML report. The workflow identifies issues in five key categories: status issues (404 errors, redirects), content quality (thin content, readability), metadata SEO (title/description issues), internal linking (orphan pages, excessive click depth), and performance (underperforming content). The final report includes executive summaries, detailed issue breakdowns, and actionable recommendations, all branded with your company's colors and logo.

Process
1. Initial Configuration: The workflow begins by setting parameters including the target domain, crawl limits, company information, and branding colors.
2. Website Crawling: The workflow creates a crawl task in DataForSEO and periodically checks its status until completion.
3. Data Collection: Once crawling is complete, the workflow:
   - Retrieves the raw audit data from DataForSEO
   - Extracts all URLs with status code 200 (successful pages)
   - Queries the Google Search Console API for each URL to get clicks and impressions data
   - Identifies 404 and 301 pages and retrieves their source links
4. Data Analysis: The workflow analyzes the collected data to identify issues including:
   - Technical issues: 404 errors, redirects, canonicalization problems
   - Content issues: thin content, outdated content, readability problems
   - SEO metadata issues: missing/duplicate titles and descriptions, H1 problems
   - Internal linking issues: orphan pages, excessive click depth, low internal links
   - Performance issues: underperforming pages based on GSC data
5. Report Generation: Finally, the workflow:
   - Calculates a health score based on the severity and quantity of issues
   - Generates prioritized recommendations
   - Creates a comprehensive HTML report with interactive tables and visualizations
   - Customizes the report with your company's branding
   - Provides the report as a downloadable HTML file

Setup
To set up this workflow, follow these steps:
1. Import the workflow: Download the JSON file and import it into your n8n instance.
2. Configure DataForSEO credentials:
   - Create a DataForSEO account at https://app.dataforseo.com/api-access (they offer a free $1 credit for testing)
   - Add a new "Basic Auth" credential in n8n following the HTTP Request Authentication guide
   - Assign this credential to the "Create Task", "Check Task Status", "Get Raw Audit Data", and "Get Source URLs Data" nodes
3. Configure Google Search Console credentials:
   - Add a new "Google OAuth2 API" credential following the Google OAuth guide
   - Ensure your Google account has access to the Google Search Console property you want to analyze
   - Assign this credential to the "Query GSC API" node
4. Update the "Set Fields" node with:
   - dfs_domain: The website domain you want to audit
   - dfs_max_crawl_pages: Maximum number of pages to crawl (default: 1000)
   - dfs_enable_javascript: Whether to enable JavaScript rendering (default: false)
   - company_name: Your company name for the report branding
   - company_website: Your company website URL
   - company_logo_url: URL to your company logo
   - brand_primary_color: Your primary brand color (hex code)
   - brand_secondary_color: Your secondary brand color (hex code)
   - gsc_property_type: Set to "domain" or "url" depending on your Google Search Console property type
5. Run the workflow: Click "Start" and wait for it to complete (approximately 20 minutes for 500 pages).
6. Download the report: Once complete, download the HTML file from the "Download Report" node.

How to customize this workflow to your needs
This workflow can be adapted in several ways to better suit your specific requirements:
- Adjust crawl parameters: Modify the "Set Fields" node to change:
  - The maximum number of pages to crawl (dfs_max_crawl_pages). This workflow supports up to 1,000 pages.
  - Whether to enable JavaScript rendering for JavaScript-heavy sites (dfs_enable_javascript)
- Customize issue detection thresholds: In the "Build Report Structure" code node, you can modify (a sketch of these checks follows this description):
  - Word count thresholds for thin content detection (currently 1,500 words)
  - Click depth thresholds (currently flags pages deeper than 4 clicks)
  - Title and description length parameters (currently 40-60 characters for titles, 70-155 for descriptions)
  - Readability score thresholds (currently flags Flesch-Kincaid scores below 55)
- Modify the report design: In the "Generate HTML Report" code node, you can:
  - Adjust the HTML/CSS to change the report layout and styling
  - Add or remove sections from the report
  - Change the recommendations logic
  - Modify the health score calculation algorithm
- Add additional data sources: You could extend the workflow by:
  - Adding PageSpeed Insights data for performance metrics
  - Incorporating backlink data from other APIs
  - Adding keyword ranking data from rank tracking APIs
- Implement automated delivery: Add nodes after the "Download Report" node to:
  - Send the report directly to clients via email
  - Upload it to cloud storage
  - Create a PDF version of the report
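To make the threshold customization concrete, the checks described above boil down to logic along these lines. The variable and field names are assumptions for illustration; the actual "Build Report Structure" node uses its own field names, but the thresholds match those listed above.

```javascript
// Issue-detection thresholds as described in the customization section.
const THIN_CONTENT_WORDS = 1500;
const MAX_CLICK_DEPTH = 4;
const TITLE_LENGTH = { min: 40, max: 60 };
const DESCRIPTION_LENGTH = { min: 70, max: 155 };
const MIN_READABILITY = 55; // Flesch-Kincaid reading ease

function detectIssues(page) {
  const issues = [];
  if (page.wordCount < THIN_CONTENT_WORDS) issues.push('thin_content');
  if (page.clickDepth > MAX_CLICK_DEPTH) issues.push('excessive_click_depth');
  if (page.titleLength < TITLE_LENGTH.min || page.titleLength > TITLE_LENGTH.max) issues.push('title_length');
  if (page.descriptionLength < DESCRIPTION_LENGTH.min || page.descriptionLength > DESCRIPTION_LENGTH.max) issues.push('description_length');
  if (page.readabilityScore < MIN_READABILITY) issues.push('low_readability');
  return issues;
}
```

Raising or lowering these constants changes which pages get flagged in the corresponding report sections.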
by Joseph
📄 Google Script Workflow: Upload File from URL to Google Drive (via n8n)

🔧 Purpose
This lightweight Google Apps Script acts as a server endpoint that receives a file URL (from n8n), downloads the file, uploads it to your specified Google Drive folder, and responds with the uploaded file's Drive URL. This is useful for large video/audio files that n8n cannot handle directly via HTTP download nodes.

🚀 Setup Steps

1. Create a New Script Project
- Go to https://script.google.com
- Click "New Project"
- Rename the project to something like: DriveUploader

2. Paste the Script Code
Replace the default Code.gs content with the following:

```javascript
function doPost(e) {
  const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here

  try {
    const data = JSON.parse(e.postData.contents);

    // 🔒 Check for correct secret key
    if (!data.secret || data.secret !== SECRET_KEY) {
      return ContentService.createTextOutput("Unauthorized")
        .setMimeType(ContentService.MimeType.TEXT);
    }

    const videoUrl = data.videoUrl;
    const folderId = 'YOUR_FOLDER_ID_HERE'; // Replace with your target folder ID
    const folder = DriveApp.getFolderById(folderId);

    const response = UrlFetchApp.fetch(videoUrl);
    const blob = response.getBlob();
    const file = folder.createFile(blob);
    file.setName('uploaded_video.mp4'); // You can customize the name

    return ContentService.createTextOutput(file.getUrl())
      .setMimeType(ContentService.MimeType.TEXT);
  } catch (err) {
    return ContentService.createTextOutput("Error: " + err.message)
      .setMimeType(ContentService.MimeType.TEXT);
  }
}
```

3. Generate & Set Up the Secret Key
To allow only authorized POST requests to your script, generate a secret key with any reliable key generator. You can head over to acte, click generate, and copy the "Encryption key 256". Paste it into the 'your-strong-secret-here' placeholder in your script, then click save:

```javascript
const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
```

4. Replace the Folder ID in the Code
- Open the target Drive folder in your browser
- The folder ID is the part of the URL after /folders/
- Example: https://drive.google.com/drive/u/0/folders/1Xabc12345678defGHIJklmn
- Paste that ID into the script:

```javascript
const folderId = '1Xabc12345678defGHIJklmn';
```

5. Set Up Deployment as a Web App
- Click "Deploy" > "Manage Deployments" > "New Deployment"
- Under Select type, choose Web app
- Description: Upload from URL to Drive
- Execute as: Me
- Who has access: Anyone
- Click Deploy
- Authorize the script when prompted
- Copy the Web App URL

📤 How to Use in n8n

1. HTTP Request Node
- Method: POST
- URL: your Web App URL
- Secret key: the secret you set in the script (sent in the JSON body below)
- Body Content Type: JSON
- Body:

```json
{
  "videoUrl": "https://example.com/path/to/your.mp4",
  "secret": "your-strong-secret-here"
}
```

videoUrl: the file download URL
secret: the secret key you set up in the script

2. Rename Node
A simple Google Drive update node that renames the file using the Drive URL returned from the script (a sketch of extracting the file ID from that URL is shown below).
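Since the Drive node needs a file ID rather than a URL, a small Code node between the HTTP Request and the rename step could extract it roughly as follows. This assumes the script's response is available in a field such as data and that the returned URL contains the usual /d/FILE_ID/ segment; both are assumptions to adjust to your setup.

```javascript
// Pull the Drive file ID out of the URL returned by the Apps Script.
const url = $input.first().json.data || ''; // adjust to where the HTTP node stores the response body
const match = url.match(/\/d\/([a-zA-Z0-9_-]+)/);

return [{ json: { fileId: match ? match[1] : null, fileUrl: url } }];
```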