by Eduard
🚀 Supercharge Your Website Indexing with This Powerful n8n Workflow! 🌐

Google page indexing too slow? Tired of manually clicking through each page in the Google Search Console? 😴 Say goodbye to that tedious process and hello to automation with this n8n workflow! 🎉

**NB: this workflow was tested with sitemap.xml generated by Ghost CMS and WordPress. Reach out to Eduard if you need help adapting this workflow to your specific use case!**

⚙️ How this automation works
📅 The workflow runs on a schedule or when you click "Test workflow".
🌐 It fetches the website's primary sitemap.xml and extracts all the content-specific sitemaps (this is a typical sitemap structure).
📑 Each content-specific sitemap is then parsed to retrieve the individual page data.
🔄 The extracted page data is converted to JSON format for easy manipulation.
🗃️ The lastmod (last modified date) and loc (page URL) fields are assigned to each page entry to ensure compliance with the Sitemap protocol.
🔀 The page entries are sorted by the lastmod field in descending order (newest to oldest).
🔁 The workflow then loops over each page entry and performs the following steps:
🔍 Checks the URL metadata in the Google Indexing API.
✅ If the page is new or has been updated since the last indexing request, it sends a request to the Google Indexing API to update the URL (see the sketch after this description).
⏳ Waits a second and moves on to the next page.

🌟 Benefits
⏰ Save time by automating the indexing process.
🎯 Ensure all your website pages are consistently indexed by Google.
🚀 Improve your website's visibility and search engine rankings.
🛠️ Customize the workflow to fit your specific CMS and requirements.

🔧 Getting started
To start using this powerful n8n workflow, follow these steps:
☑️ Make sure to verify the website ownership in the Google Search Console.
👨💻 Import the workflow JSON into your n8n instance. Edit the Get sitemap.xml node and update the URL with your website's valid sitemap.xml.
🔑 Set up the necessary credentials for the Google Indexing API.
🎚️ Adjust the schedule trigger to run the workflow at your desired frequency.
🎉 Sit back and let the workflow handle the indexing process for you!

Ready to take your website indexing to the next level? 🚀 Try this workflow now and see the difference it makes! 😊

⚠️ IMPORTANT NOTE 1
Need help with connecting Google Cloud Platform to n8n? Check out our article on connecting Google Sheets to n8n. The process is mainly the same. When activating Google APIs, make sure to add the Web Search Indexing API. Also, on the credential page in n8n, add the https://www.googleapis.com/auth/indexing scope. Check out Yulia's page for more n8n workflows!

⚠️ IMPORTANT NOTE 2
A free Google Cloud Platform account allows (re)indexing only 200 pages per day. If your website has more, the workflow will automatically fail on the quota limit ⛔. The next day it will skip the previously added items and continue with the remaining pages.

Example: assuming you have a free Google account and 500 pages on your website that don't change for 3 days:
- On the first day, 200 pages will be added for indexing and the workflow will fail due to quota limits.
- On the second day, the workflow will check the same 200 pages and skip them (because the date of re-indexing is later than the page's last modified date). The next 200 pages will be added for indexing. The workflow will fail again due to quota limits.
- On the third day, 400 pages will be checked and skipped, the last 100 pages will be added for indexing, and the workflow finishes successfully.
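For reference, here is a minimal Python sketch of the two Google Indexing API calls the loop performs (metadata check, then publish). The access-token handling and the date comparison are simplified assumptions; in the workflow they are handled by the n8n Google credential and node expressions.

```python
import requests

# Assumption: an OAuth2 access token with the https://www.googleapis.com/auth/indexing
# scope (in n8n this is handled by the Google Indexing API credential).
ACCESS_TOKEN = "<your-access-token>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
API = "https://indexing.googleapis.com/v3"

def needs_reindexing(page_url: str, lastmod: str) -> bool:
    """Compare the sitemap lastmod date with the last notification Google has recorded."""
    r = requests.get(f"{API}/urlNotifications/metadata",
                     params={"url": page_url}, headers=HEADERS)
    if r.status_code == 404:
        return True  # Google has never been notified about this URL
    notified = r.json().get("latestUpdate", {}).get("notifyTime", "")
    # Simplified comparison of ISO timestamps; the workflow compares dates properly.
    return lastmod > notified

def request_indexing(page_url: str) -> None:
    """Ask Google to (re)crawl the URL."""
    requests.post(f"{API}/urlNotifications:publish",
                  json={"url": page_url, "type": "URL_UPDATED"},
                  headers=HEADERS).raise_for_status()
```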
by n8n Team
This n8n workflow automates the monitoring and notification of Palo Alto Networks security advisories. It is triggered manually from within the n8n UI or scheduled to run daily at midnight using the Schedule Trigger.

The workflow begins by fetching the latest security advisories from Palo Alto Networks' RSS feed. Each advisory is then processed, and relevant information is extracted and categorized, including the advisory type, subject, and severity. The workflow checks the publication date of each advisory to ensure that it was posted within the last 24 hours, filtering out older advisories (a hedged sketch of this filter follows this description).

The workflow then splits into two paths based on the advisory type: GlobalProtect and Traps. In the GlobalProtect path, advisories related to GlobalProtect are identified and used to create Jira issues. The Jira issues include a summary with the advisory title and a description that provides details about the advisory, its severity, link, and publication date. In the Traps path, advisories related to Traps are recognized, and dummy data (which should be replaced with logic to retrieve valid user emails) is generated for sample purposes. These email addresses are then used to send email notifications using the Gmail node. Each email's subject includes the type of advisory, while the body contains the advisory title and a link for more information.

Potential issues when setting up this workflow for the first time might involve configuring the Schedule Trigger to match the desired time zone. Additionally, ensuring that the Jira and Gmail nodes are configured correctly with the required credentials and email addresses is crucial. The placeholder for generating dummy data for email recipients should be replaced with logic to retrieve valid user emails. Proper error handling and testing with real and sample advisories can help identify and resolve any potential issues during setup.
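A minimal sketch of the fetch-filter-route logic, assuming the public Palo Alto Networks advisories RSS URL and using string matching on the title to pick the path; the workflow's RSS and IF nodes define the real feed URL and routing rules.

```python
from datetime import datetime, timedelta, timezone
import feedparser  # pip install feedparser

# Assumption: the advisories feed URL; the workflow's RSS node holds the actual one.
FEED_URL = "https://security.paloaltonetworks.com/rss.xml"

feed = feedparser.parse(FEED_URL)
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

for entry in feed.entries:
    published = datetime(*entry.published_parsed[:6], tzinfo=timezone.utc)
    if published < cutoff:
        continue  # filter out advisories older than 24 hours
    if "GlobalProtect" in entry.title:
        # GlobalProtect path: the workflow creates a Jira issue here.
        print("Create Jira issue for:", entry.title, entry.link)
    elif "Traps" in entry.title:
        # Traps path: the workflow sends a Gmail notification here.
        print("Send Gmail notification for:", entry.title, entry.link)
```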
by ConvertAPI
Who is this for?
For developers and organizations that need to convert image files to PDF.

What problem is this workflow solving?
The file format conversion problem.

What this workflow does
- Downloads the JPG file from the web.
- Converts the JPG file to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
- Open the HTTP Request node.
- Adjust the URL parameter (all endpoints can be found here).
- Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
- Optionally, additional Body Parameters can be added for the converter.
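A minimal sketch of the equivalent ConvertAPI call, assuming the v2 REST endpoint pattern (convert/jpg/to/pdf) with the secret passed as a query parameter, mirroring the workflow's HTTP Request and Query Auth setup; the source URL and filenames are hypothetical.

```python
import base64
import requests

SECRET = "<your-convertapi-secret>"             # assumption: from your ConvertAPI account
SOURCE_URL = "https://example.com/picture.jpg"  # hypothetical source file

resp = requests.post(
    "https://v2.convertapi.com/convert/jpg/to/pdf",
    params={"Secret": SECRET},
    json={"Parameters": [{"Name": "File", "FileValue": {"Url": SOURCE_URL}}]},
)
resp.raise_for_status()

# The response lists the converted files; decode the base64 payload and save locally,
# which is what the workflow's "Write Result File" step does.
for f in resp.json()["Files"]:
    with open(f["FileName"], "wb") as out:
        out.write(base64.b64decode(f["FileData"]))
```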
by Angel Menendez
Introducing the Qualys Scan Slack Report Subworkflow: a robust solution designed to automate the generation and retrieval of security reports from the Qualys API.

This workflow is a subworkflow of the Qualys Slack Shortcut Bot workflow. It is triggered when someone fills out the modal popup in Slack generated by the Qualys Slack Shortcut Bot. When deploying this workflow, use the Demo Data node to simulate the data that is input via the Execute Workflow Trigger. That data flows into the Global Variables node, which is then referenced by the rest of the workflow. It includes nodes to fetch the report IDs, launch a report, check the report status periodically, and download the completed report, which is then posted to Slack for easy access. For Security Operations Centers (SOCs), this workflow provides significant benefits by automating tedious tasks, ensuring timely updates, and facilitating efficient data handling.

How It Works
- **Fetch Report Templates:** The "Fetch Report IDs" node retrieves a list of available report templates from Qualys. This automated retrieval saves time and ensures that the latest templates are used, enhancing the accuracy and relevance of reports.
- **Convert XML to JSON:** The response is converted to JSON format for easier manipulation. This step simplifies data handling, making it easier for SOC analysts to work with the data and integrate it into other tools or processes.
- **Launch Report:** A POST request is sent to Qualys to initiate report generation using specified parameters like template ID and report title. Automating this step ensures consistency and reduces the chance of human error, improving the reliability of the reports generated.
- **Loop and Check Status:** The workflow loops every minute to check if the report generation is complete. Continuous monitoring automates the waiting process, freeing up SOC analysts to focus on higher-priority tasks while ensuring they are promptly notified when reports are ready. (A hedged sketch of this launch-and-poll pattern follows this description.)
- **Download Report:** Once the report is ready, it is downloaded from Qualys. Automated downloading ensures that the latest data is always available without manual intervention, improving efficiency.
- **Post to Slack:** The final report is posted to a designated Slack channel for quick access. This integration with Slack ensures that the team can promptly access and review the reports, facilitating swift action and decision-making.

Get Started
- Ensure your Slack and Qualys integrations are properly set up.
- Customize the workflow to fit your specific reporting needs.
- Link to parent workflow
- Link to Vulnerability Scan Trigger

Need Help? Join the discussion on our Forum or check out resources on Discord!

Deploy this workflow to streamline your security report generation process, improve response times, and enhance the efficiency of your security operations.
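A hedged sketch of the launch-and-poll pattern the workflow follows, assuming the Qualys VM report API (action=launch/list/fetch against /api/2.0/fo/report/), a US platform URL, and placeholder credentials and template ID; the real values come from the Global Variables node and stored credentials, and the response parsing is simplified.

```python
import time
import requests

BASE = "https://qualysapi.qualys.com/api/2.0/fo/report/"  # assumption: your platform URL
AUTH = ("qualys_user", "qualys_password")                 # assumption: stored credentials
HEADERS = {"X-Requested-With": "n8n"}                     # Qualys requires this header

# 1. Launch the report from a template (the "Launch Report" step).
launch = requests.post(BASE, auth=AUTH, headers=HEADERS, data={
    "action": "launch", "template_id": "12345",
    "report_title": "Weekly Scan Report", "output_format": "pdf",
})
report_id = "..."  # parse the report id from the XML response (Convert XML to JSON in n8n)

# 2. Poll every minute until the report is finished (the "Loop and Check Status" step).
while True:
    status = requests.post(BASE, auth=AUTH, headers=HEADERS,
                           data={"action": "list", "id": report_id})
    if "<STATE>Finished</STATE>" in status.text:
        break
    time.sleep(60)

# 3. Download the finished report; the workflow then posts it to Slack.
report = requests.post(BASE, auth=AUTH, headers=HEADERS,
                       data={"action": "fetch", "id": report_id})
with open("qualys_report.pdf", "wb") as f:
    f.write(report.content)
```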
by ConvertAPI
Who is this for?
For developers and organizations that need to convert XLSX files to PDF.

What problem is this workflow solving?
The file format conversion problem.

What this workflow does
- Downloads the XLSX file from the web.
- Converts the XLSX file to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
- Open the HTTP Request node.
- Adjust the URL parameter (all endpoints can be found here).
- Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
- Optionally, additional Body Parameters can be added for the converter.
by Harshil Agrawal
This workflow allows you to receive updates about the position of the ISS and add it to a table in TimescaleDB. (A hedged sketch of the equivalent fetch-and-insert follows this description.)

- Cron node: The Cron node triggers the workflow every minute. You can configure the time based on your use case.
- HTTP Request node: This node makes an HTTP request to an API that returns the position of the ISS. Based on your use case you may want to fetch data from a different URL. Enter the URL in the URL field.
- Set node: In the Set node we set the information that we need in the workflow. Since we only need the timestamp, latitude, and longitude, we set these in the node. If you need other information, you can set it in this node.
- TimescaleDB node: This node stores the information in a table named iss. You can use a different table as well.
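A minimal sketch of the same pipeline in Python, assuming the public wheretheiss.at API as the position source and a hypothetical connection string; the HTTP Request node can point at any endpoint, and the TimescaleDB node uses stored credentials instead.

```python
import requests
import psycopg2  # pip install psycopg2-binary

# Assumption: the public wheretheiss.at endpoint for the ISS (NORAD id 25544).
resp = requests.get("https://api.wheretheiss.at/v1/satellites/25544").json()
timestamp, latitude, longitude = resp["timestamp"], resp["latitude"], resp["longitude"]

# Hypothetical connection string; the TimescaleDB node holds the real credentials.
conn = psycopg2.connect("postgresql://user:password@localhost:5432/iss_db")
with conn, conn.cursor() as cur:
    # Matches the Set node's fields and the "iss" table used by the workflow.
    cur.execute(
        "INSERT INTO iss (timestamp, latitude, longitude) VALUES (%s, %s, %s)",
        (timestamp, latitude, longitude),
    )
conn.close()
```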
by ConvertAPI
Who is this for?
For developers and organizations that need to convert web pages to PDF.

What problem is this workflow solving?
The web page to PDF conversion problem.

What this workflow does
- Converts a web page to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
- Open the HTTP Request node.
- Adjust the URL parameter (all endpoints can be found here).
- Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
- Change the parameter url to the web page you want to convert to PDF.
- Optionally, additional Body Parameters can be added for the converter.
by Miquel Colomer
Do you want to avoid communication problems when launching phone calls?

This workflow verifies landline and mobile phone numbers using the uProc Get Parsed and Validated Phone tool, with worldwide coverage. You need to add your credentials (the real Email and API Key, located in the Integration section) to n8n.

The "Create Phone Item" node can be replaced by any other supported service that provides phone values, like databases (MySQL, Postgres) or Typeform.

The "uProc" node returns the following fields for every parsed and validated phone number:
- country_prefix: contains the international country phone prefix number.
- country_code: contains the 2-digit ISO country code of the phone number.
- local_number: contains the phone number without the international prefix.
- formatted: contains a formatted version of the phone number, according to the detected country.
- valid: indicates whether the phone number has a valid format and prefix.
- type: the phone number type (mobile, landline, or something else).

The "If" node checks if the phone number is valid. You can use the result to mark invalid phone numbers in your database or discard them from future telemarketing campaigns. (A small sketch of this validity check follows this description.)
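A small sketch of the "If" node's logic, using only the fields described above; the sample records are hypothetical stand-ins for what the uProc node returns.

```python
# Hypothetical records shaped like the uProc node output described above.
validated = [
    {"formatted": "+34 912 345 678", "valid": True,  "type": "landline"},
    {"formatted": "12345",           "valid": False, "type": "unknown"},
]

for phone in validated:
    if phone["valid"]:
        print("Keep for campaign:", phone["formatted"], f"({phone['type']})")
    else:
        print("Mark as invalid / discard:", phone["formatted"])
```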
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n".

This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, hence many production use cases:
1. Uploading (image) datasets to Qdrant
2. Setting up meta-variables for anomaly detection in Qdrant
3. Anomaly detection tool
4. KNN classifier tool

For anomaly detection
1. This is the first pipeline, which uploads an image dataset to Qdrant.
2. The second pipeline sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect if it's an anomaly to the uploaded dataset.

For KNN (k nearest neighbours) classification
1. This is the first pipeline, which uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both
You'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] Batch Uploading Images Dataset to Qdrant
This template imports dataset images from Google Cloud Storage, creates Voyage AI embeddings for them in batches, and uploads them to Qdrant, also in batches. In this particular template, we work with the crops dataset. However, it's analogous to uploading the lands dataset, and in general, it's adaptable to any dataset consisting of image URLs (as the following pipelines are).
1. First, check for an existing Qdrant collection to use; otherwise, create it here. Additionally, when creating the collection, we'll create a payload index, which is required for a particular type of Qdrant request we will use later.
2. Next, import all (dataset) images from Google Cloud Storage but keep only non-tomato-related ones (for anomaly detection testing).
3. Create (per batch) embeddings for all imported images using the Voyage AI multimodal embeddings API.
4. Finally, upload the resulting embeddings and image descriptors to Qdrant via batch upload (a hedged sketch of steps 3 and 4 follows this description).
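A hedged sketch of the embed-and-upsert core (steps 3 and 4), assuming Voyage AI's multimodal embeddings endpoint and request shape, placeholder API keys, a hypothetical bucket URL, and a collection named "crops"; the workflow's HTTP Request and Qdrant batch-upload nodes hold the real configuration.

```python
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

VOYAGE_KEY = "<voyage-api-key>"                                  # assumption
client = QdrantClient(url="https://<your-cluster>.qdrant.io",    # assumption
                      api_key="<qdrant-api-key>")
image_urls = ["https://storage.googleapis.com/<bucket>/crop_001.jpg"]  # hypothetical batch

# Create multimodal embeddings for a batch of image URLs; the payload shape is
# hedged from Voyage AI's multimodal embeddings documentation.
resp = requests.post(
    "https://api.voyageai.com/v1/multimodalembeddings",
    headers={"Authorization": f"Bearer {VOYAGE_KEY}"},
    json={
        "model": "voyage-multimodal-3",
        "inputs": [{"content": [{"type": "image_url", "image_url": url}]}
                   for url in image_urls],
    },
).json()

# Batch-upsert the embeddings with image descriptors as payload.
points = [
    PointStruct(id=i, vector=item["embedding"], payload={"image_url": image_urls[i]})
    for i, item in enumerate(resp["data"])
]
client.upsert(collection_name="crops", points=points)
```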
by darrell_tw
This workflow automates the process of fetching agricultural transaction data from the Taiwan Agricultural Products Open Data Platform and storing it in a Google Sheets document for further analysis.

Key Features
- Manual Trigger: Allows manual execution of the workflow to control when data is fetched.
- HTTP Request: Sends a request to the Open Data Platform's API to retrieve detailed transaction data, including pricing (Upper, Middle, Lower, Average), transaction quantities, and crop and market details.
- Split Out Node: Processes each record individually, ensuring accurate handling of every data entry.
- Google Sheets Integration: Appends the data into a structured Google Sheets document for easy access and analysis.

Node Configurations
1. Manual Trigger
   - Purpose: Start the workflow manually.
   - Configuration: No setup needed.
2. HTTP Request
   - Purpose: Fetch agricultural data.
   - Configuration:
     - URL: https://data.moa.gov.tw/api/v1/SheepQuotation
     - Query Parameters: Start_time: 2024/12/01, End_time: 2024/12/31, MarketName: 台北二, api_key: <your_api_key>
     - Headers: accept: application/json
3. Split Out
   - Purpose: Split the API response data array into individual items.
   - Configuration: Field to Split Out: Data
4. Google Sheets
   - Purpose: Append the data to Google Sheets.
   - Configuration:
     - Operation: Append
     - Document ID: <your_document_id>
     - Sheet Name: Sheet1
     - Mapped Fields: TransDate, TcType, CropCode, CropName, MarketCode, MarketName, Upper_Price, Middle_Price, Lower_Price, Avg_Price, Trans_Quantity

(A hedged Python sketch of the fetch-and-split step follows this description.)

Tip: make use of n8n's Curl Import feature, for example:

curl -X GET "https://data.moa.gov.tw/api/v1/AgriProductsTransType/?Start_time=114.01.01&End_time=114.01.01&MarketName=%E5%8F%B0%E5%8C%97%E4%BA%8C" -H "accept: application/json"

See the Agricultural Open Data Platform documentation for details.
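A minimal sketch of the HTTP Request and Split Out steps, using the parameters listed in the node configuration above; the api_key placeholder is yours to fill, and in the workflow the resulting rows are appended by the Google Sheets node rather than printed.

```python
import requests

resp = requests.get(
    "https://data.moa.gov.tw/api/v1/SheepQuotation",
    params={
        "Start_time": "2024/12/01",
        "End_time": "2024/12/31",
        "MarketName": "台北二",
        "api_key": "<your_api_key>",
    },
    headers={"accept": "application/json"},
)
resp.raise_for_status()

# The Split Out node iterates the "Data" array; each record maps to one sheet row.
for record in resp.json().get("Data", []):
    row = [record.get(field) for field in (
        "TransDate", "TcType", "CropCode", "CropName", "MarketCode", "MarketName",
        "Upper_Price", "Middle_Price", "Lower_Price", "Avg_Price", "Trans_Quantity",
    )]
    print(row)  # in the workflow, the Google Sheets node appends this row
```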
by Sascha
Having a seamless flow of customer data between your online store and your marketing platform is essential. By keeping your systems synchronized, you can ensure that your marketing campaigns are accurately targeted and effective. The integration between Shopify, a leading e-commerce platform, and Mautic, an open-source marketing automation system, is not available out of the box. However, with an n8n workflow you can bridge this gap.

This template will help you:
- enhance accuracy in marketing lists by ensuring that subscription changes in Shopify are instantly updated in Mautic
- improve compliance with data protection laws by respecting users' subscription preferences across platforms
- achieve integration without the need for additional plugins or software, minimizing complexity and potential points of failure

This template will demonstrate the following concepts in n8n:
- working with Shopify in n8n
- control flow with the IF node
- using Webhooks
- validating Webhooks with the Crypto node (a hedged sketch of this check follows this description)
- using the GraphQL node to call the Shopify Admin API

The template consists of two parts:
1. Sync Email Subscriptions from Shopify to Mautic
2. Sync Email Subscriptions from Mautic to Shopify

How to get started?
- Create a custom app in Shopify to get the credentials needed to connect n8n to Shopify. This is needed for the Shopify Trigger.
- Create Shopify Access Token API credentials in n8n for the Shopify Trigger node.
- Create Header Auth credentials: use X-Shopify-Access-Token as the name and the Access Token from the Shopify app you created as the value. The Header Auth is necessary for the GraphQL nodes.
- Enable the Mautic API under Configuration/API Settings. After the settings are saved you will have an additional entry in your settings menu to create API credentials for n8n.
- Create Mautic credentials in n8n.

Please make sure to read the notes in the template. For a detailed explanation please check the corresponding video: https://youtu.be/x63rrh_yJzI
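A minimal sketch of the webhook validation that the Crypto node performs: Shopify signs the raw webhook body with HMAC-SHA256 using your app's shared secret and sends the base64-encoded digest in the X-Shopify-Hmac-Sha256 header. The secret placeholder is an assumption; in the template the comparison is done by the Crypto and IF nodes.

```python
import base64
import hashlib
import hmac

# Assumption: the shared secret of your custom Shopify app.
SHOPIFY_SECRET = b"<your-shopify-app-secret>"

def webhook_is_valid(raw_body: bytes, hmac_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw webhook body and compare it with
    the base64 digest Shopify sent in the X-Shopify-Hmac-Sha256 header."""
    digest = hmac.new(SHOPIFY_SECRET, raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, hmac_header)

# Usage: only process the subscription change if the signature matches,
# e.g. webhook_is_valid(request_body, headers["X-Shopify-Hmac-Sha256"]).
```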
by David Ashby
What it is
I wanted to create a simple, easy-to-use MCP server for your Discord bot(s).

How to set up
Literally all you do is select your bot auth (or create a new Discord Bot auth if you haven't entered your key in n8n before) and that's IT!

How to use it
You can now ask your bot to do things via any MCP client, including from within n8n workflows!

Note: If you need an example, you can check out my simple quickstart Discord MCP Server that uses 4o to send messages to channels on your server and to users who are members of the server the bot is in.

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community!