by System Admin
Tagged with: Published Template
by automedia
# Automated Blog Monitoring System with RSS Feeds and Time-Based Filtering

## Overview
This workflow provides a powerful yet simple foundation for monitoring blogs using RSS feeds. It automatically fetches articles from a list of your favorite blogs and filters them by publication date, separating new content from old. It is the perfect starting point for anyone looking to build a custom content aggregation or notification system without needing any API keys.

This template is designed for developers, hobbyists, and marketers who want a reliable way to track new blog posts and then decide what to do with them. Instead of including a specific final step, this workflow intentionally ends with a filter, giving you complete freedom to add your own integrations.

## Use Cases
Why would you need to monitor and filter blog posts?

- **Build a Custom News Feed:** Send new articles that match your interests directly to a Discord channel, Slack, or Telegram chat.
- **Power a Newsletter:** Automatically collect links and summaries from industry blogs to curate your weekly newsletter content.
- **Create a Social Media Queue:** Add new, relevant blog posts to a content calendar or a social media scheduling tool like Buffer or Hootsuite.
- **Archive Content:** Save new articles to a personal database like Airtable, Notion, or Google Sheets to build a searchable knowledge base.

## How It Works
1. **Manual Trigger:** The workflow starts when you click "Execute Workflow". You can easily swap this for a Schedule Trigger to run it automatically.
2. **Fetch RSS Feeds:** It reads a list of RSS feed URLs that you provide in the "blogs to track" node.
3. **Process Each Feed:** The workflow loops through each RSS feed individually.
4. **Filter by Date:** It checks the publication date of every article and compares it to a timeframe you set (default is 60 days).
5. **Split New from Old:** New articles are sent down the true path of the "Filter Out Old Blogs" node; old articles are sent down the false path.
This workflow leaves the true path empty so you can add your desired next steps.

## Setup and Customization
This workflow requires minimal setup and is designed for easy customization.

1. **Add Your Blog Feeds:** Find the "blogs to track" node. In the `source_identifier` field, replace the example URLs with the RSS feeds you want to monitor.

   ```javascript
   // Add your target RSS feed URLs in this array
   ['https://blog.n8n.io/rss', 'https://zapier.com/blog/feeds/latest/']
   ```

2. **Set the Time Filter:** Go to the "max_content_age_days" node. Change the value from the default 60 to your desired number of days. For example, use 7 to only get articles published in the last week.
3. **Customize Your Output (Required Next Step):** This is the most important part! Drag a new node and connect it to the true output of the "Filter Out Old Blogs" node.
   - **Example Idea:** To save new articles to a Google Sheet, add a Split In Batches node followed by a Google Sheets node to append each new article as a new row.
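The date-filtering step at the heart of the workflow can be sketched in plain JavaScript. This is a minimal illustration, assuming each RSS item carries an ISO-8601 `pubDate` field (the field name is illustrative; check your feed's actual output):

```javascript
// Sketch of the "Filter Out Old Blogs" logic: keep items newer than a cutoff.
const MAX_CONTENT_AGE_DAYS = 60;

function isRecent(pubDate, maxAgeDays = MAX_CONTENT_AGE_DAYS, now = new Date()) {
  // Age in days; negative means the date is in the future, which we reject.
  const ageDays = (now - new Date(pubDate)) / (1000 * 60 * 60 * 24);
  return ageDays >= 0 && ageDays <= maxAgeDays;
}

// Split articles into the true path (fresh) and the false path (stale).
function splitByAge(items, maxAgeDays, now) {
  const fresh = [];
  const stale = [];
  for (const item of items) {
    (isRecent(item.pubDate, maxAgeDays, now) ? fresh : stale).push(item);
  }
  return { fresh, stale };
}
```

To change the timeframe, pass a different `maxAgeDays` (e.g., 7 for the last week), mirroring the "max_content_age_days" setting.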
by Mohammadreza azari
# Find Cannibalized Pages (Google Search Console)

This n8n template helps you detect page cannibalization in Google Search Console (GSC): situations where multiple pages on your site rank for the same query and more than one page gets clicks. Use it to spot competing URLs, consolidate content, improve internal linking, and protect your CTR/rankings.

## Good to know
- **Data source:** Google Search Console Search Analytics (dimensions: query, page).
- **Scope:** Defaults to the last 12 months and up to 10,000 rows per run (adjustable).
- **Logic:** Keeps only queries with more than one page where the second page has clicks > 0 → higher confidence of true cannibalization.
- **Privacy:** The template ships with a placeholder property (`sc-domain:example.com`) and a neutral credential name; replace both after import.
- **Cost:** The n8n nodes used here are free. GSC usage is also free (subject to Google limits).

## How it works
1. **Manual Start** — run the workflow on demand.
2. **Google Search Console** — fetch the last 12 months of query–page rows.
3. **Summarize** — group by query, building two arrays:
   - `appended_page[]` → all pages seen for that query
   - `appended_clicks[]` → clicks for each page–query row (aligned with `appended_page`)
4. **Filter** — pass only queries where:
   - `count_query > 1` (more than one page involved), and
   - `appended_clicks[1] > 0` (the second page also received clicks)
5. **Output** — a list of cannibalized queries with the competing pages and their click counts.

## Example output
```json
{
  "query": "best running shoes",
  "appended_page": [
    "https://example.com/blog/best-running-shoes",
    "https://example.com/guide/running-shoes-2025"
  ],
  "appended_clicks": [124, 37],
  "count_query": 3
}
```

## How to use
1. Import the JSON into n8n.
2. Open the Google Search Console node and:
   - Connect your Google Search Console OAuth2 credential.
   - Replace `siteUrl` with your property (`sc-domain:your-domain.com`).
3. Press Execute Workflow on Manual Start.
4. Review the output — focus on queries where the second page has meaningful clicks.
💡 **Tip:** If your site is large, start with a shorter date range (e.g., 90 days) or raise `rowLimit`.

## Requirements
- Access to the target property in Google Search Console.
- One Google Search Console OAuth2 credential in n8n.

## Customising this workflow
- **More robust detection:** In the Summarize node, change the clicks aggregation from append to sum, then filter for "at least 2 pages with `sum_clicks > 0`" to avoid any dependency on row order.
- **Scoring & sorting:** Add a Code/Function node to sort competing pages by clicks or impressions and compute click-share per page.
- **Deeper analysis:** Include impressions and position in the GSC node and extend the summary to prioritize fixes (e.g., high impressions + split clicks).
- **Reporting:** Send results to Google Sheets or export a CSV; create a dashboard of top cannibalized queries.
- **Thresholds:** Expose minimum click thresholds as workflow variables (e.g., second page clicks ≥ 3) to reduce noise.

## Troubleshooting
- **Empty results:** Widen the date range, increase `rowLimit`, or temporarily relax the filter (remove the second-page click condition to validate data flow).
- **No property data:** Ensure you used the `sc-domain:` vs. `https://` property format correctly and that your user has GSC access.
- **Credential issues:** Reconnect the OAuth2 credential and reauthorize if needed.
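The Summarize-and-Filter pair can be sketched as one function. This is a minimal illustration, assuming GSC rows of the shape `{ query, page, clicks }` (field names are illustrative, not the exact n8n node output):

```javascript
// Group query-page rows by query, then keep only queries where more than one
// page competes and the second page also received clicks.
function findCannibalizedQueries(rows) {
  const byQuery = new Map();
  for (const { query, page, clicks } of rows) {
    if (!byQuery.has(query)) {
      byQuery.set(query, {
        query,
        appended_page: [],
        appended_clicks: [],
        count_query: 0,
      });
    }
    const entry = byQuery.get(query);
    entry.appended_page.push(page);
    entry.appended_clicks.push(clicks);
    entry.count_query += 1;
  }
  // count_query > 1: more than one page; appended_clicks[1] > 0: the second
  // page also got clicks (depends on row order, as noted above).
  return [...byQuery.values()].filter(
    (e) => e.count_query > 1 && e.appended_clicks[1] > 0
  );
}
```

As the customisation notes suggest, summing clicks per page instead of relying on `appended_clicks[1]` removes the dependency on row order.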
by Dot
## What's the problem?
Imagine you want to automate a task where, given a TikTok video link, you must retrieve the username of the video's creator. Many people assume it's enough to grab the "@" part of the link, but that's not always the case. TikTok's iOS and Android apps use short link formats that are easier to share but make retrieving the creator much harder.

## Our solution
To solve this, this simple workflow makes an HTTP request to retrieve the original link of the video hosted on www.tiktok.com instead of the mobile app's vm.tiktok.com subdomain. We can then strip the link's query attributes and extract the handle correctly.

## Good things to know
- We extract the username (not the profile's nickname) without the "@".
- Once we have the username, we can access the creator's profile from then on using `https://www.tiktok.com/@{{ $json.username }}`.
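The two steps (resolve the short link, then extract the handle) can be sketched outside n8n like this. It is a rough illustration: `redirect: 'manual'` exposes the `Location` header of the vm.tiktok.com redirect without following it, and the exact redirect behavior may vary.

```javascript
// Resolve a vm.tiktok.com short link to its canonical www.tiktok.com URL,
// then extract the creator's handle.
async function resolveTikTokUsername(shortUrl) {
  const res = await fetch(shortUrl, { redirect: 'manual' });
  const canonical = res.headers.get('location') ?? shortUrl;
  return extractUsername(canonical);
}

// Pull the username (without the "@") from the canonical URL's path,
// ignoring any query-string attributes.
function extractUsername(url) {
  const match = new URL(url).pathname.match(/^\/@([^/]+)/);
  return match ? match[1] : null;
}
```

For example, `extractUsername('https://www.tiktok.com/@someuser/video/123?lang=en')` yields `'someuser'`.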
by Sarfaraz Muhammad Sajib
This workflow validates and fetches information about a card using its BIN code. It uses apilayer's BIN Check API and returns details like the card brand, type, issuing bank, and country.

## Prerequisites
- An apilayer account
- An API key for the BIN Check API

## Steps in n8n
**Step 1: Manual Trigger**
- Node Type: Manual Trigger
- Purpose: Starts the workflow manually

**Step 2: Set BIN Code and API Key**
- Node Type: Set
- Fields to set:
  - `bin_code`: the BIN to look up — the first 6–8 digits of the card number
  - `apikey`: your apilayer API key

**Step 3: HTTP Request**
- Node Type: HTTP Request
- Method: GET
- URL: `https://api.apilayer.com/bincheck/{{ $json.bin_code }}`
- Headers:
  - Name: `apikey`
  - Value: `{{ $json.apikey }}`

**Step 4: Handle the Output**
- Add nodes to store, parse, or visualize the API response.

## Expected Output
The response from apilayer contains detailed information about the provided BIN:
- Card scheme (e.g., VISA, MasterCard)
- Type (credit, debit, prepaid)
- Issuing bank
- Country of issuance

## Example Use Case
Use this to build a fraud prevention microservice, pre-validate card data before sending it to payment gateways, or enrich card-related logs.
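Outside n8n, the same request from Step 3 can be sketched with the Fetch API. This is a hedged sketch, not apilayer's official client: the exact response fields depend on apilayer's current schema, and the API key is a placeholder.

```javascript
// Build the BIN Check endpoint URL for a given BIN.
function binCheckUrl(binCode) {
  return `https://api.apilayer.com/bincheck/${encodeURIComponent(binCode)}`;
}

// Call the API with the key in the "apikey" header, as the workflow does.
async function checkBin(binCode, apiKey) {
  const res = await fetch(binCheckUrl(binCode), {
    headers: { apikey: apiKey },
  });
  if (!res.ok) {
    throw new Error(`BIN check failed with status ${res.status}`);
  }
  return res.json(); // scheme, type, bank, country, etc.
}
```

Usage would look like `await checkBin('457173', 'YOUR_API_KEY')`, where both values are placeholders for your own BIN and key.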
by Davide
The official ChatGPT connector doesn't allow you to interact directly with Google Workspace apps from within the app. Let's see how to overcome this limitation by creating a dedicated MCP server.

This workflow acts as a Model Context Protocol (MCP) server between Google Workspace services — including Gmail, Drive, Docs, Sheets, Calendar, and Slides — and AI agents like OpenAI's Agent Builder and the ChatGPT app. It enables these AI assistants to interact directly with Google Workspace tools — from managing emails and calendars to creating and editing documents, spreadsheets, or presentations — through secure automation endpoints, a feature that is not natively supported in the ChatGPT app.

## Key Advantages
- ✅ **Unified AI–Google Workspace Integration:** Allows large language models (LLMs) to manage Gmail, Drive, Docs, Sheets, Calendar, and Slides directly, enabling AI-driven workflows like email automation, document creation, meeting scheduling, and data analysis.
- ✅ **Full Control Across Google Apps:** Supports key actions across multiple services:
  - Gmail: Read, send, reply, search, and draft emails.
  - Drive: Search, upload, organize, and share files.
  - Docs: Create, edit, and retrieve Google Docs.
  - Sheets: Create or update spreadsheets, analyze data, and read cell values.
  - Calendar: List, create, update, or delete events.
  - Slides: Generate or modify presentations.
- ✅ **Plug-and-Play with OpenAI Agent Builder & ChatGPT:** Easily connects to MCP-compatible AI platforms like Claude Desktop or OpenAI Agent Builder, with minimal configuration.
- ✅ **Scalable and Extensible:** The modular structure allows you to expand to additional Google APIs or custom automations (e.g., CRM syncing, sentiment analysis, or reporting).
- ✅ **No-Code/Low-Code Configuration:** Fully built in n8n, allowing easy customization and maintenance without deep programming skills.

## How It Works
1. **MCP Trigger:** The "MCP Google Workspace Trigger" node acts as the server endpoint, waiting for incoming requests from an AI application.
2. **Tool Execution:** When the AI needs to interact with a Google app (e.g., Gmail or Drive), it sends a command to this trigger, and the workflow routes the request to the appropriate tool node. Available actions (examples):
   - Gmail: Get, send, reply, search, or draft messages.
   - Drive: Upload or retrieve files.
   - Docs: Create or edit documents.
   - Sheets: Read or update cell data.
   - Calendar: Manage events.
   - Slides: Generate or modify presentations.
3. **Data Return:** The result (email content, document link, file metadata, event details, etc.) is returned to the MCP server and then to the AI, which can use it to continue the workflow or conversation.

## Setup Steps
1. **Configure Google Workspace credentials in n8n:** Authenticate each Google service (Gmail, Drive, Docs, Sheets, Calendar, Slides) via OAuth2 using the correct account credentials.
2. **Activate the workflow:** The workflow must be active in n8n. The MCP Trigger node provides a unique URL that serves as the server endpoint.
3. **Connect to an AI application (choose one method):**
   - **ChatGPT app:** Open the ChatGPT app (Plus plan required). Enable Dev Mode → Add new connector. Add an "MCP Server" as a tool and provide the URL from the "MCP Google Workspace Trigger" node.
   - **OpenAI Agent Builder:** Visit OpenAI Agent Builder. Create a new workflow or agent. Add an "MCP Server" as a tool and provide the MCP URL from n8n.

Once connected, the AI can intelligently manage Google Workspace tasks based on natural language requests such as:

> "Schedule a meeting with Sarah tomorrow at 3 PM,"
> "Create a new Google Doc titled 'Marketing Plan' and share it with the team,"
> or "Find the latest report in Drive and summarize it."

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
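Under the hood, MCP clients speak JSON-RPC 2.0 to the trigger's URL. As a rough illustration of what a tool invocation looks like on the wire (the tool name `gmail_send` and its arguments are hypothetical; actual tool names are defined by the trigger node):

```javascript
// Build the JSON-RPC 2.0 envelope of an MCP "tools/call" request.
function buildToolCall(id, toolName, args) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name: toolName, arguments: args },
  };
}

// Hypothetical example: ask the server to send an email via a Gmail tool.
const request = buildToolCall(1, 'gmail_send', {
  to: 'sarah@example.com',
  subject: 'Meeting tomorrow at 3 PM',
});
```

The AI client sends such requests to the MCP URL; the workflow routes them to the matching tool node and returns the result in the JSON-RPC response.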