by Tom
This workflow shows a low-code approach to creating an HTML table based on Google Sheets data. It's similar to this workflow, but allows fully customizing the HTML output. To run the workflow: Make sure you have a Google Sheet with a header row and some data in it. Grab your sheet ID and add it to the Google Sheets node. Activate the workflow or execute it manually. Then visit the URL provided by the Webhook node in your browser (the production URL if the workflow is active, the test URL if the workflow is executed manually).
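As a sketch of what the customizable HTML generation could look like, here is a small function in the style of an n8n Code node that turns sheet rows into a table. The row shape (an array of plain objects keyed by the header row) is an assumption for illustration:

```javascript
// Build an HTML table from an array of row objects, e.g. rows read from a
// Google Sheet where the header row provides the object keys.
function buildHtmlTable(rows) {
  if (rows.length === 0) return '<table></table>';
  const headers = Object.keys(rows[0]);
  const head = '<tr>' + headers.map(h => `<th>${h}</th>`).join('') + '</tr>';
  const body = rows
    .map(r => '<tr>' + headers.map(h => `<td>${r[h] ?? ''}</td>`).join('') + '</tr>')
    .join('');
  return `<table>${head}${body}</table>`;
}
```

Because the table is assembled in code, every part of the markup (classes, inline styles, extra columns) can be customized, which is the main advantage over a fixed HTML node.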
by Tom
This simple workflow demonstrates how to get an end user's browser to download a file. It makes use of the Content-Disposition header to set a filename and control the browser behaviour. A use case could be downloading a PDF file at the end of an application process, or exporting data from a database without replacing the current page content in the browser. With this approach, the current page remains open and the file is simply downloaded instead. The original idea was first presented here by @dickhoning in the n8n community.
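The key detail is the header value itself. A minimal sketch of how it could be composed (the filename is an example, not from the workflow):

```javascript
// Compose a Content-Disposition header value. `attachment` tells the browser
// to download the response as a file; `inline` would render it in the page.
function contentDisposition(filename) {
  return `attachment; filename="${filename}"`;
}
```

In the workflow, this value would typically be set on the webhook response headers, alongside an appropriate Content-Type.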
by Miquel Colomer
This workflow extracts data from a multi-page website. The workflow: 1) Starts from the country list at https://www.theswiftcodes.com/browse-by-country/. 2) Loads every country page (e.g. https://www.theswiftcodes.com/albania/). 3) Paginates through every page within a country. 4) Extracts data from each country page. 5) Saves the data to MongoDB. 6) Paginates through all pages in all countries. It uses the getWorkflowStaticData('global') method to recover the next page (saved from the previous page), then continues with the remaining pages. There is a first section where the country list is retrieved and extracted. Later, the workflow checks whether a locally cached copy of a page is available and, if so, recovers the cached page from disk. Finally, the data is saved to MongoDB, paginating through all pages in each country and across all countries. I have applied a cache system that saves each visited page to the n8n local disk. If the workflow is relaunched, it checks whether a cache file exists in order to skip unnecessary requests to the website. If the data on the website changes, you can add a Cron node to check the website once per week. Finally, before inserting data into MongoDB, the best way to avoid duplicates is to check that swift_code (the primary value of the collection) doesn't already exist. I recommend using a proxy for all requests to avoid IP blocks. A good solution for proxying plus IP rotation is scrapoxy.io. This workflow is perfect for small data requirements. If you need to scrape dynamic data, you can use a headless browser or another service. If you want to scrape huge lists of URIs, I recommend using Scrapy + Scrapoxy.
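The pagination-state handling described above can be sketched as follows; getWorkflowStaticData('global') is the real n8n helper, while the lastPage field name is an illustrative assumption:

```javascript
// Sketch of persisting the pagination cursor between executions. In n8n this
// object would come from getWorkflowStaticData('global'); here it is passed
// in so the logic can run standalone.
function nextPage(staticData) {
  const page = (staticData.lastPage ?? 0) + 1; // resume after the saved page
  staticData.lastPage = page; // n8n persists this between executions
  return page;
}
```

This is what lets a relaunched workflow pick up where the previous run stopped instead of starting from page one.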
by Eduard
This template is a PoC of a ReAct AI Agent capable of fetching arbitrary pages (not only Wikipedia or Google search results). At the top there's a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content. The page content extraction starts with converting query parameters into a JSON object. There are 3 pre-defined parameters: **url**: the address of the page to fetch; **method**: full / simplified; **maxlimit**: the maximum length of the final page (for longer pages an error message is returned to the agent). Page content fetching is a multi-step process: An HTTP Request node tries to get the page content. If the page content was successfully retrieved, a series of post-processing steps begins: Extract the HTML body content. Remove all unnecessary tags to reduce the page size. Further eliminate external URLs and IMG src values (based on the method query parameter). The remaining HTML is converted to Markdown, reducing the page length even more while preserving the basic page structure. The remaining content is sent back to the agent if it's not too long (maxlimit = 70000 by default, see the CONFIG node). NB: You can isolate the HTTP Request part into a separate workflow. Check the Workflow Tool description; it guides the agent to provide a query string with several parameters instead of a JSON object. Please reach out to Eduard if you need further assistance with your n8n workflows and automations! Note that to use this template, you need to be on n8n version 1.19.4 or later.
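The query-parameter conversion at the start of the tool workflow could look roughly like this; the parameter names match the template description, while the defaults are assumptions (maxlimit mirrors the CONFIG node's default):

```javascript
// Parse the agent's query string (e.g. "url=...&method=simplified") into the
// JSON parameter object the page-fetching workflow expects.
function parseAgentQuery(query) {
  const params = Object.fromEntries(new URLSearchParams(query));
  return {
    url: params.url,
    method: params.method ?? 'full', // 'full' or 'simplified'
    maxlimit: Number(params.maxlimit ?? 70000), // default from the CONFIG node
  };
}
```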
by siyad
Workflow Description: This workflow automates the synchronization of product data from a Shopify store to a Google Sheets document, ensuring seamless management and tracking. It retrieves product details such as title, tags, description, and price from Shopify via GraphQL queries. The outcome is a comprehensive list of products neatly organized in Google Sheets for easy access and analysis. Key Features: Automated: Runs on a schedule you define (e.g., daily, hourly) to keep your product data fresh. Complete Product Details: Retrieves titles, descriptions, variants, images, inventory, and more. Cursor-Based Pagination: Efficiently handles large product sets by navigating pages without starting from scratch. Google Sheets Integration: Writes product data directly to your designated sheets. Setup Instructions: Set up the GraphQL node with Header Authentication for Shopify. Create Google Sheets credentials: Follow this guide to set up your Google Sheets credentials for n8n: https://docs.n8n.io/integrations/builtin/credentials/google/ Choose your Google Sheet: Select the sheet where you want product information written. For the setup, we need a document with two sheets: 1. for storing Shopify data, 2. for storing cursor details. Google Sheet template: https://docs.google.com/spreadsheets/d/1I6JnP8ugqmMD5ktJlNB84J1MlSkoCHhAEuCofSa3OSM Schedule and run: Decide how often you want the data refreshed (daily, hourly, etc.) and let n8n do its magic!
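Cursor-based pagination works by storing the endCursor from each GraphQL response (in the second sheet) and passing it as the after argument on the next request. A sketch of the query builder, with field names following Shopify's products connection but simplified for illustration:

```javascript
// Build a Shopify-style products query. When `cursor` is set, the request
// resumes after that cursor instead of starting from the first page.
function buildProductsQuery(cursor) {
  const after = cursor ? `, after: "${cursor}"` : '';
  return `{
  products(first: 50${after}) {
    pageInfo { hasNextPage endCursor }
    edges { node { id title description } }
  }
}`;
}
```

Each run reads the last stored cursor, fetches the next page, and writes the new endCursor back, so large catalogs are synced incrementally.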
by Eduard
Supercharge Your Website Indexing with This Powerful n8n Workflow! Google page indexing too slow? Tired of manually clicking through each page in the Google Search Console? Say goodbye to that tedious process and hello to automation with this n8n workflow! **NB: this workflow was tested with sitemap.xml generated by Ghost CMS and WordPress. Reach out to Eduard if you need help adapting this workflow to your specific use-case!** How this automation works: The workflow runs on a schedule or when you click "Test workflow". It fetches the website's primary sitemap.xml and extracts all the content-specific sitemaps (this is a typical structure of a sitemap). Each content-specific sitemap is then parsed to retrieve the individual page data. The extracted page data is converted to JSON format for easy manipulation. The lastmod (last modified date) and loc (page URL) fields are assigned to each page entry to ensure compliance with the Sitemap protocol. The page entries are sorted by the lastmod field in descending order (newest to oldest). The workflow then loops over each page entry and performs the following steps: Checks the URL metadata in the Google Indexing API. If the page is new or has been updated since the last indexing request, it sends a request to the Google Indexing API to update the URL. Waits a second and moves on to the next page. Benefits: Save time by automating the indexing process. Ensure all your website pages are consistently indexed by Google. Improve your website's visibility and search engine rankings. Customize the workflow to fit your specific CMS and requirements. Getting started: To start using this powerful n8n workflow, follow these steps: Make sure to verify the website ownership in the Google Search Console. Import the workflow JSON into your n8n instance. 
Edit the Get sitemap.xml node and update the URL with your website's valid sitemap.xml. Set up the necessary credentials for the Google Indexing API. Adjust the schedule trigger to run the workflow at your desired frequency. Sit back and let the workflow handle the indexing process for you! Ready to take your website indexing to the next level? Try this workflow now and see the difference it makes! IMPORTANT NOTE 1: Need help with connecting Google Cloud Platform to n8n? Check out our article on connecting Google Sheets to n8n. The process is mainly the same. When activating Google APIs, make sure to add the Web Search Indexing API. Also, on the credential page of n8n, add the https://www.googleapis.com/auth/indexing scope. Check out Yulia's page for more n8n workflows! IMPORTANT NOTE 2: A free Google Cloud Platform account allows (re)indexing only 200 pages per day. If your website has more, the workflow will fail on the quota limit. The next day it will skip the previously added items and continue with the remaining pages. Example: Assuming you have a free Google account, 500 pages on your website, and they don't change for 3 days: On the first day, 200 pages will be added for indexing and the workflow will fail due to quota limits. On the second day, the workflow will check the same 200 pages and skip them (because the date of re-indexing is later than the page's last modified date). The next 200 pages will be added for indexing. The workflow will fail again due to quota limits. On the third day, 400 pages will be checked and skipped, the last 100 pages will be added for indexing, and the workflow finishes successfully.
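The per-page decision described above (check metadata, re-index only new or updated pages) can be sketched like this. lastmod comes from the Sitemap protocol; latestUpdate.notifyTime loosely follows the Indexing API's metadata response, but the exact shapes here are simplified assumptions:

```javascript
// Decide whether a page should be sent to the Indexing API: either Google has
// no record of it yet, or it was modified after the last indexing request.
function needsReindexing(page, metadata) {
  const modified = new Date(page.lastmod);
  const notifyTime = metadata?.latestUpdate?.notifyTime;
  if (!notifyTime) return true; // new page, no previous indexing request
  return modified > new Date(notifyTime);
}
```

This check is also what makes the quota-limited multi-day behaviour work: already-notified pages are skipped cheaply on subsequent runs.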
by ConvertAPI
Who is this for? For developers and organizations that need to convert image files to PDF. What problem is this workflow solving? The file format conversion problem. What this workflow does Downloads the JPG file from the web. Converts the JPG file to PDF. Stores the PDF file in the local file system. How to customize this workflow to your needs Open the HTTP Request node. Adjust the URL parameter (all endpoints can be found here). Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret. Optionally, additional Body Parameters can be added for the converter.
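The HTTP Request node's target URL follows ConvertAPI's /convert/{from}/to/{to} pattern with the secret passed as a query parameter. A small sketch of how that URL could be assembled (verify the exact endpoint against ConvertAPI's endpoint list linked above):

```javascript
// Assemble a ConvertAPI-style conversion URL. The host and path pattern follow
// ConvertAPI's documented endpoints; double-check against their endpoint list.
function convertApiUrl(from, to, secret) {
  return `https://v2.convertapi.com/convert/${from}/to/${to}?Secret=${encodeURIComponent(secret)}`;
}
```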
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. This workflow automatically tracks local search trends and geographic-specific search patterns to optimize local SEO and marketing strategies. It saves you time by eliminating the need to manually research local search behavior and provides location-based insights for targeted marketing campaigns. Overview This workflow automatically scrapes local search results, geographic search trends, and location-based query data to understand regional search behavior and local market opportunities. It uses Bright Data to access location-specific search data and AI to intelligently analyze local trends and optimization opportunities. Tools Used **n8n**: The automation platform that orchestrates the workflow **Bright Data**: For scraping location-based search data without being blocked **OpenAI**: AI agent for intelligent local search trend analysis **Google Sheets**: For storing local search trend data and geographic insights How to Install Import the Workflow: Download the .json file and import it into your n8n instance Configure Bright Data: Add your Bright Data credentials to the MCP Client node Set Up OpenAI: Configure your OpenAI API credentials Configure Google Sheets: Connect your Google Sheets account and set up your local trends tracking spreadsheet Customize: Define target locations and local search monitoring parameters Use Cases **Local SEO**: Optimize for location-specific search queries and trends **Regional Marketing**: Tailor campaigns to local search behavior and preferences **Multi-location Businesses**: Track search trends across different geographic markets **Market Expansion**: Identify new geographic opportunities based on search trends Connect with Me **Website**: https://www.nofluff.online **YouTube**: https://www.youtube.com/@YaronBeen/videos **LinkedIn**: https://www.linkedin.com/in/yaronbeen/ **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link 
supports my free workflows with a small commission) #n8n #automation #localsearch #localseo #searchtrends #brightdata #webscraping #geographictrends #n8nworkflow #workflow #nocode #localmarketing #regionalseo #locationbased #localbusiness #searchgeography #localtrends #geoseo #localdata #regionalmarketing #localanalytics #geographicseo #localsearchdata #localoptimization #regionalsearch #locationmarketing #localsearchtrends #geomarketing #localinsights
by Khairul Muhtadin
Automate Outreach Prospect automates finding, enriching, and messaging potential partners (like restaurants, malls, and bars) using Apify Google Maps scraping, Perplexity enrichment, OpenAI LLMs, Google Sheets, Pinecone knowledge, and WhatsApp sending via GOWA. It turns a manual, slow outreach funnel into a repeatable pipeline so your team spends time closing deals instead of copy-pasting contact details. Important Disclaimer: This workflow uses community nodes for WhatsApp functionality: GOWA WhatsApp HTTP API. Why Use Automate Outreach Prospect? **Faster prospecting:** Scrape up to 150 leads per search (jumlah leads = 150) and queue them for outreach in minutes, cutting manual research time from days to hours. **Fixes the busywork:** Automatically enrich missing contact data and only send messages to records with phone numbers, so you stop chasing dead leads. **Measurable lift:** Enrich in batches (enrichment batch size = 20), improving outbound readiness and increasing contactable leads per campaign by dozens each run. **Better conversions with context:** Use a searchable company knowledge base (Pinecone + LlamaIndex) so replies are handled with context, making them less robotic and more relevant. Yes, your bot can sound like a helpful human (minus the coffee breath). Perfect For: **Sales Ops:** Teams that need to scale partner outreach without hiring a mini-empire of SDRs. **Growth Marketers:** People who want repeatable local outreach campaigns (mall, restaurant, bar categories). **Small Biz Owners:** A quick way to build partnership lists and automate first outreach without becoming a spreadsheet hermit. How It Works: Trigger: Manual scrape start or scheduled jobs: Daily Outbound Schedule, Schedule Outbound message, or Knowledge Base Updated Trigger. Process: The Apify Google Maps Scraper gathers business listings (location, phone, socials). Results are fetched and saved to Google Sheets (Raw Data). Unenriched records are split and enriched via Perplexity, then saved back. 
Smart Logic: The OpenAI LLM creates personalized initial messages, and a Reply Handler AI Agent uses Pinecone knowledge embeddings to interpret replies and decide next actions (save PICs, request a meeting, send a proposal). Output: Outbound messages are sent over WhatsApp using GOWA nodes (typing indicators, simulated typing delays, read receipts) and replies are handled and stored; qualified PIC contacts are appended to a Leads sheet. Storage: Google Sheets is the central datastore (Raw Data, Leads Collected). The knowledge base lives in Google Drive and Pinecone (index n8n-recharge, namespace CompanyKnowledgeBased). Conversation memory is stored in Postgres/Neon. Quick Setup: Import Workflow: Import the JSON file into your n8n instance. Add Credentials: Google Sheets OAuth2, Google Drive OAuth2, Apify API token, OpenAI API, Perplexity API, Pinecone API, Cohere API, LlamaIndex Cloud key, GOWA (WhatsApp) credentials, WAHA webhook (optional), PostgreSQL/Neon. Customize Parameters: Scraping parameters (Location Category, lokasi, jumlah leads, minimum Stars, Skip Closed Place), message templates/time greetings, enrichment batch size. Update Configuration: Google Drive doc ID, Google Sheets ID, Apify actor config, Pinecone index name, Pinecone namespaces, LlamaIndex endpoints (if used). Test Setup: Run a manual scrape with a real location and send a single outbound message to verify WhatsApp delivery and reply handling. 
Required Services: An active n8n instance, Google Sheets & Google Drive accounts (OAuth2), Apify account & actor access (Google Maps Scraper), OpenAI API key (for LLMs & embeddings), Perplexity API key (enrichment), Pinecone account (vector index n8n-recharge), Cohere API (reranker, optional), LlamaIndex Cloud (optional document parsing), GOWA / WA WhatsApp setup (or WAHA alternative), PostgreSQL/Neon for conversation memory. Workflow Nodes: Triggers & Scheduling: Incoming message, Manual Trigger - Start Scraping, Daily Outbound Schedule, Schedule Outbound message, Knowledge Base Updated Trigger. Data Collection & Processing: Configure Scraping Parameters, Execute Google Maps Scraper, Fetch Scraped Business Data, Save Raw Business Leads, Get Unenriched Records, Limit Enrichment Batch Size, Split Records for Processing. Data Enrichment: Business Data Enrichment, Parse Enrichment Response, Save Enriched Business Data. Outbound Messaging: Get Outbound Candidates, Limit Outbound Batch Size, Validate Phone Number Exists, Prepare Outbound Session Data, Outbound Message Generator, Outbound Message LLM, Format Outbound Message Data. WhatsApp Communication: Show Typing Indicator - Outbound, Simulate Typing Delay - Outbound, Send Outbound WhatsApp Message, Mark as Contacted, Extract WhatsApp Session Data. Reply Handling: Reply Handler AI Agent, Reply Handler LLM, Format Reply Message Data, Show Typing Indicator - Reply, Simulate Typing Delay - Reply, Send WhatsApp Reply, Save Lead Contact Information. Knowledge Management: Store Knowledge Embeddings, Query Knowledge Base, Reply Conversation Memory, Outbound Conversation Memory. Made by: Khaisa Studio. Need custom work? Contact Me
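The "Validate Phone Number Exists" gate described above can be sketched as a simple filter; the normalization rule (at least eight digits) is an illustrative assumption, not the workflow's actual check:

```javascript
// Keep only records that have a plausibly usable phone number before they are
// queued for outbound WhatsApp messaging.
function hasUsablePhone(record) {
  const digits = String(record.phone ?? '').replace(/\D/g, '');
  return digits.length >= 8; // minimum plausible length (assumption)
}
```

Filtering before sending is what keeps the outbound batch from burning time and quota on records Perplexity could not enrich with a phone number.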
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. This workflow automatically monitors competitor pricing changes and website updates to keep you informed of market movements. It saves you time by eliminating the need to manually check competitor websites and provides alerts only when actual changes occur, preventing information overload. Overview This workflow automatically scrapes competitor pricing pages (like ClickUp) and compares current pricing with previously stored data. It uses Bright Data to access competitor websites without being blocked and AI to intelligently extract pricing information, updating your tracking spreadsheet only when changes are detected. Tools Used **n8n**: The automation platform that orchestrates the workflow **Bright Data**: For scraping competitor websites without being blocked **OpenAI**: AI agent for intelligent pricing data extraction and parsing **Google Sheets**: For storing and comparing historical pricing data How to Install Import the Workflow: Download the .json file and import it into your n8n instance Configure Bright Data: Add your Bright Data credentials to the MCP Client node Set Up OpenAI: Configure your OpenAI API credentials Configure Google Sheets: Connect your Google Sheets account and set up your pricing tracking spreadsheet Customize: Set your competitor URLs and pricing monitoring schedule Use Cases **Product Teams**: Monitor competitor feature and pricing changes for strategic planning **Sales Teams**: Stay informed of competitor pricing to adjust sales strategies **Marketing Teams**: Track competitor messaging and positioning changes **Business Intelligence**: Build comprehensive competitor analysis databases Connect with Me **Website**: https://www.nofluff.online **YouTube**: https://www.youtube.com/@YaronBeen/videos **LinkedIn**: https://www.linkedin.com/in/yaronbeen/ **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a 
small commission) #n8n #automation #competitoranalysis #pricingmonitoring #brightdata #webscraping #competitortracking #marketintelligence #n8nworkflow #workflow #nocode #pricetracking #businessintelligence #competitiveanalysis #marketresearch #competitormonitoring #pricingdata #websitemonitoring #competitorpricing #marketanalysis #competitorwatch #pricingalerts #businessautomation #competitorinsights #markettrends #pricingchanges #competitorupdates #strategicanalysis #marketposition #competitiveintelligence
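The change-detection step at the heart of this workflow, comparing freshly scraped prices against the previously stored row and reporting only real differences, could be sketched like this (the plan and price field names are assumptions for illustration):

```javascript
// Compare stored vs. freshly scraped prices and return only the entries that
// actually changed, so the sheet is updated (and you are alerted) only then.
function detectPriceChanges(stored, scraped) {
  const changes = [];
  for (const [plan, price] of Object.entries(scraped)) {
    if (stored[plan] !== price) {
      changes.push({ plan, from: stored[plan] ?? null, to: price });
    }
  }
  return changes;
}
```

Returning an empty array when nothing changed is what prevents the information overload the workflow is designed to avoid.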
by Angel Menendez
Introducing the Qualys Scan Slack Report Subworkflow: a robust solution designed to automate the generation and retrieval of security reports from the Qualys API. This workflow is a subworkflow of the Qualys Slack Shortcut Bot workflow. It is triggered when someone fills out the modal popup in Slack generated by the Qualys Slack Shortcut Bot. When deploying this workflow, use the Demo Data node to simulate the data that is input via the Execute Workflow Trigger. That data flows into the Global Variables node, which is then referenced by the rest of the workflow. It includes nodes to fetch the report template IDs, launch a report, periodically check the report status, and download the completed report, which is then posted to Slack for easy access. For Security Operations Centers (SOCs), this workflow provides significant benefits by automating tedious tasks, ensuring timely updates, and facilitating efficient data handling. How It Works **Fetch Report Templates:** The "Fetch Report IDs" node retrieves a list of available report templates from Qualys. This automated retrieval saves time and ensures that the latest templates are used, enhancing the accuracy and relevance of reports. **Convert XML to JSON:** The response is converted to JSON format for easier manipulation. This step simplifies data handling, making it easier for SOC analysts to work with the data and integrate it into other tools or processes. **Launch Report:** A POST request is sent to Qualys to initiate report generation using specified parameters like template ID and report title. Automating this step ensures consistency and reduces the chance of human error, improving the reliability of the reports generated. **Loop and Check Status:** The workflow loops every minute to check whether the report generation is complete. Continuous monitoring automates the waiting process, freeing up SOC analysts to focus on higher-priority tasks while ensuring they are promptly notified when reports are ready. 
**Download Report:** Once the report is ready, it is downloaded from Qualys. Automated downloading ensures that the latest data is always available without manual intervention, improving efficiency. **Post to Slack:** The final report is posted to a designated Slack channel for quick access. This integration with Slack ensures that the team can promptly access and review the reports, facilitating swift action and decision-making. Get Started Ensure your Slack and Qualys integrations are properly set up. Customize the workflow to fit your specific reporting needs. Link to parent workflow Link to Vulnerability Scan Trigger Need Help? Join the discussion on our Forum or check out resources on Discord! Deploy this workflow to streamline your security report generation process, improve response times, and enhance the efficiency of your security operations.
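The polling loop described above boils down to a small decision per iteration; the status strings here loosely follow Qualys report states but should be treated as illustrative:

```javascript
// Decide what the loop should do next based on the report's current status:
// download when finished, abort on errors, otherwise wait and poll again.
function nextAction(report) {
  if (report.status === 'Finished') return 'download';
  if (report.status === 'Errors') return 'abort';
  return 'wait'; // loop again after the one-minute Wait node
}
```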
by ConvertAPI
Who is this for? For developers and organizations that need to convert XLSX files to PDF. What problem is this workflow solving? The file format conversion problem. What this workflow does Downloads the XLSX file from the web. Converts the XLSX file to PDF. Stores the PDF file in the local file system. How to customize this workflow to your needs Open the HTTP Request node. Adjust the URL parameter (all endpoints can be found here). Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret. Optionally, additional Body Parameters can be added for the converter.