by Yang
## What this workflow does
This workflow extracts product details (name, price, discount, and rating) from website screenshots using Dumpling AI. It starts when a new product page URL is added to a Google Sheet, captures a screenshot of that page, extracts visible product info from the image, and writes the results back into the sheet.

## What problem is this workflow solving?
Many product pages block traditional scraping tools or use unstructured layouts. This workflow bypasses HTML limitations by using visual AI extraction, making it reliable even when content is embedded in images or hard to parse with code.

## Who is this for?
This is ideal for eCommerce researchers, pricing analysts, marketers, or anyone building a product database from websites without needing to code or maintain complex scrapers.

## Setup
1. Create a Google Sheet with a column named "Site" (or update the trigger).
2. Add your product page URLs in this column, one per row.
3. Connect your Google Sheets and Dumpling AI credentials in n8n.
4. Ensure your Dumpling AI account has API access for screenshots and extraction.

## How to customize the workflow
- **Prompt adjustment**: In the "Extract Text from Screenshot" node, you can modify the prompt to extract other information like brand name, delivery time, or availability.
- **Add more fields**: After the extraction, edit the "Format Extracted Data" node to map additional fields from the response to your Google Sheet columns (see the sketch below).
- **Change output destination**: You can easily replace the Google Sheets module with Airtable, Notion, or another app if preferred.

> ⚠️ This works best when the product data is clearly visible in the screenshot.
> It won't extract info that's hidden behind popups or loaded via user interaction.
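For reference, a minimal sketch of what the "Format Extracted Data" Code node could look like, assuming the extraction step returns a JSON object with name, price, discount, and rating keys. The field names are illustrative; match them to your prompt's actual output.

```js
// n8n Code node — "Format Extracted Data" (illustrative sketch).
// Assumes the extraction node returns a JSON string or object with
// name/price/discount/rating keys; adjust to your prompt's output.
return $input.all().map((item) => {
  const raw = item.json.results ?? item.json; // "results" is an assumption
  const data = typeof raw === 'string' ? JSON.parse(raw) : raw;
  return {
    json: {
      Name: data.name ?? '',
      Price: data.price ?? '',
      Discount: data.discount ?? '',
      Rating: data.rating ?? '',
    },
  };
});
```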
by Marth
## ⚙️ How it works
1. The workflow starts from a manual trigger or form submission with project details.
2. It extracts key input data: client name, email, project type, deadline, and brand folder (optional).
3. A Google Drive folder is automatically created inside a designated parent folder.
4. The shareable link of the newly created folder is generated.
5. A personalized email is composed and sent to the client using Gmail, including the project details and folder link.

## 🛠️ Set up steps
1. **Google Drive Setup:** Connect your Google Drive credentials in n8n. Set the parent folder ID where all project folders should be created.
2. **Gmail Setup:** Connect a Gmail account with proper access. Customize the subject and message template in the Gmail node.
3. **Input Data Preparation:** Ensure the following input fields are provided (see the sample payload below): client_name, contact_email, project_type, deadline, brand_drive_folder (optional).
4. **Test & Deploy:** Use mock data or a test trigger to validate the workflow. Once confirmed, deploy it with the actual trigger (e.g. webhook, form submission).
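For illustration, a sample input item carrying the expected fields; all values are placeholders.

```js
// Illustrative input item for the trigger/form step. The keys are the
// fields the workflow expects; the values are placeholders only.
const input = {
  client_name: 'Acme Corp',
  contact_email: 'contact@acme.example',
  project_type: 'Brand redesign',
  deadline: '2025-09-30',
  brand_drive_folder: '', // optional; leave empty if not used
};
return [{ json: input }];
```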
by RedOne
This workflow is designed for e-commerce store owners, operations managers, and developers who use Shopify as their e-commerce platform and want an automated way to track and analyze their order data. It is particularly useful for businesses that:
- Need a centralized view of all Shopify orders
- Want to analyze order trends without logging into Shopify
- Need to share order data with team members who don't have Shopify access
- Want to build custom reports based on order information

## What Problem Is This Workflow Solving?
While Shopify provides excellent order management within its platform, many businesses need their order data available in other systems for various purposes:
- **Data accessibility**: Not everyone in your organization may have access to Shopify's admin interface
- **Custom reporting**: Google Sheets allows for flexible analysis and report creation
- **Data integration**: Having orders in Google Sheets makes it easier to combine with other business data
- **Backup**: Creates an additional backup of your critical order information

## What This Workflow Does
This n8n workflow creates an automated bridge between your Shopify store and Google Sheets:
1. Listens for new order notifications from your Shopify store via webhooks
2. Processes the incoming order data and transforms it into a structured format
3. Stores each new order in a dedicated Google Sheets spreadsheet
4. Sends real-time notifications to Telegram when new orders are received or errors occur

## Setup
### Create a Google Sheet
- Create a new Google Sheet to store your orders
- Add a sheet named "orders" with the following columns: orderId, orderNumber, created_at, processed, processed_at, json, customer, shippingAddress, lineItems, totalPrice, currency

### Set Up Telegram Bot
- Create a Telegram bot using BotFather (send /newbot to @BotFather)
- Save your bot token for use in n8n credentials
- Start a chat with your bot and get your chat ID (you can use @userinfobot)

### Configure the Workflow
- Set your Google Sheet ID in the "Edit Variables" node
- Enter your Telegram chat ID in the "Edit Variables" node
- Set up your Telegram API credentials in n8n

### Configure Shopify Webhook
- In your Shopify admin, go to: Settings > Notifications > Webhooks
- Create a new webhook for "Order creation"
- Set the URL to your n8n webhook URL (from the "Receive New Shopify Order" node)
- Set the format to JSON

## How to Customize This Workflow to Your Needs
- **Additional data**: Modify the "Transform Order Data to Standard Format" function to extract more Shopify data (a sketch of this function follows below)
- **Multiple sheets**: Duplicate the Google Sheets node to store different aspects of orders in separate sheets
- **Telegram messages**: Customize the text in Telegram nodes to include more details or rich formatting
- **Data processing**: Add nodes to perform calculations or transformations on order data
- **Additional notifications**: Add more channels like Slack, Discord, or SMS
- **Integrations**: Extend the workflow to send order data to other systems like CRMs, ERPs, or accounting software

## Final Notes
This workflow serves as a foundation that you can build upon to create a comprehensive order management system tailored to your specific business needs.
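As a reference point, a hedged sketch of what the "Transform Order Data to Standard Format" function might look like, using standard Shopify order-webhook field names and the sheet columns listed above; adapt it to your actual payload.

```js
// Sketch of the "Transform Order Data to Standard Format" step.
// Input field names follow Shopify's order webhook payload; output
// keys match the "orders" sheet columns listed in Setup.
const order = $input.first().json;
return [{
  json: {
    orderId: order.id,
    orderNumber: order.order_number,
    created_at: order.created_at,
    processed: false,
    processed_at: '',
    json: JSON.stringify(order), // raw payload kept for later reprocessing
    customer: `${order.customer?.first_name ?? ''} ${order.customer?.last_name ?? ''}`.trim(),
    shippingAddress: order.shipping_address
      ? `${order.shipping_address.address1}, ${order.shipping_address.city}`
      : '',
    lineItems: (order.line_items ?? [])
      .map((li) => `${li.quantity} x ${li.title}`)
      .join('; '),
    totalPrice: order.total_price,
    currency: order.currency,
  },
}];
```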
by n8n Team
This workflow creates a Jira issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in Jira.

## Prerequisites
- Zendesk account and Zendesk credentials.
- Jira account and Jira credentials.
- Jira project to create issues in.

## How it works
The workflow listens for new tickets in Zendesk. When a new ticket is created, the workflow creates a new issue in Jira. The Jira issue key is then saved in one of the ticket's fields (in setup we call this "Jira Issue Key"). The next time a comment is added to the ticket, the workflow retrieves the Jira issue key from the ticket's field and adds the comment to the issue in Jira.

## Setup
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Jira issue key. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as "Jira Issue Key".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
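For illustration only, the create-vs-comment decision could be expressed like this in a Code node; the incoming field name for the Jira issue key is hypothetical and depends on how your Zendesk webhook body and custom field are configured.

```js
// Illustrative branch logic: if the ticket already carries a Jira issue
// key, route to "add comment"; otherwise route to "create issue".
const ticket = $input.first().json;
const jiraKey = ticket.jira_issue_key; // hypothetical field name
if (jiraKey) {
  return [{ json: { action: 'comment', jiraKey, comment: ticket.comment } }];
}
return [{ json: { action: 'create', ticketId: ticket.id, comment: ticket.comment } }];
```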
by bangank36
This workflow restores all n8n instance credentials from GitHub backups using the n8n API node. It complements the Backup Your Credentials to GitHub template by allowing users to seamlessly restore previously saved credentials.

## How It Works
The workflow fetches credentials stored in a GitHub repository and imports them into your n8n instance (a sketch of the underlying API call is shown below).

## Setup Instructions
To configure the workflow, update the Globals node with the following values:
- **repo.owner** – Your GitHub username
- **repo.name** – The name of your GitHub repository storing the credentials
- **repo.path** – The folder path within the repository where credentials are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and credentials are stored in a credentials/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → credentials/

## Required Credentials
- **GitHub API** – Access to your repository
- **n8n API** – To import credentials into your n8n instance

## Who Is This For?
This template is ideal for users who want to restore their credentials from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
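Under the hood, restoring one credential amounts to a call against the n8n Public API. A minimal sketch follows, assuming each backup file in GitHub decodes to a { name, type, data } object and that fetch is available in your Code node (recent n8n versions); otherwise port it to an HTTP Request node.

```js
// Minimal sketch: import one backed-up credential via the n8n Public API.
// The GitHub contents API returns file content base64-encoded.
const backup = JSON.parse(
  Buffer.from($input.first().json.content, 'base64').toString('utf8')
);
const response = await fetch('https://your-n8n.example/api/v1/credentials', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-N8N-API-KEY': 'YOUR_API_KEY', // use the n8n API credential in practice
  },
  body: JSON.stringify({
    name: backup.name,
    type: backup.type,
    data: backup.data,
  }),
});
return [{ json: await response.json() }];
```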
by Niranjan G
## Who is this for?
NVD (National Vulnerability Database) data is essential for security analysts, vulnerability managers, and DevSecOps professionals who need to perform both CVE lookups and monitor historical change logs. This workflow helps streamline those efforts by providing structured outputs for audit, triage, or compliance tracking purposes.

📝 Note: While this example uses Google Sheets as the destination, you can easily modify the final destination node (e.g., send to Slack, email, a database, etc.) based on your specific automation needs.

## What problem is this solving?
Security teams often manually look up CVE data and track changes across multiple tools. This process is inefficient and error-prone. This workflow automates the CVE lookup and historical change tracking by logging enriched vulnerability data into Google Sheets in real time.

## What this workflow does
This workflow is designed for CVE API lookup and change history tracking. In many vulnerability automation pipelines, it is essential to determine not only the metadata of a CVE but also how it has evolved over time. Whether the operational need is enrichment, risk scoring, or remediation validation, this workflow is particularly handy for surfacing both current and historical CVE data.

This template performs the following actions:
1. Accepts incoming webhook requests containing a CVE ID
2. Queries the NVD CVE Lookup API to fetch vulnerability metadata
3. Queries the NVD CVE History API to retrieve all historical changes
4. Flattens both datasets into a sheet-compatible structure
5. Appends vulnerability metadata to one sheet and change history to another within the same Google Spreadsheet

## Setup

### 🔑 Request an NVD API Key
To request an NVD API key, provide your organization name, a valid email address, and your organization type at NVD API Key Request. You must scroll to the end of the Terms of Use Agreement and check "I agree to the Terms of Use" to obtain an API key. After submission, you will receive a single-use hyperlink via email to activate and view your API key. If not activated within seven days, a new request must be submitted.

### 📊 API Rate Limits
Without an API key, you're limited to 5 requests per 30-second window. With an API key, you're allowed up to 50 requests in the same period. To prevent request throttling, it's recommended to introduce slight delays between consecutive API calls in production setups.

### Steps
1. Clone or import this workflow into your n8n instance.
2. Set up the following credentials: Google Sheets OAuth2 and NVD API Key (via HTTP Header Auth).
3. The workflow logs data to a Google Sheet titled NVD Database, with Sheet 1 named CVE Lookup and Sheet 2 named CVE History.
4. Trigger each workflow using the respective webhook URL, appending ?cveId=CVE-XXXX-XXXX as a query parameter.

### 🔍 Example Webhook Request (CVE Change History)
You can test this workflow with the following example:

```
GET https://your-domain.com/webhook/cve-history?cveId=CVE-2023-34362
```

## How to customize this workflow
- Use the Edit Fields node (optional) to centralize configuration like sheet name or query input
- Extend the CVE flattening logic to include more nested metadata if needed
- Integrate notification systems (e.g., Slack or email) by branching from the processing nodes
- Modify webhook paths for better endpoint organization

## 🔐 Production Security Tips
Use HTTP Header Auth on the webhook for secure access.

> ⚠️ This template uses webhooks and NVD API access with authentication headers.
This template uses two flows:
- **Webhook 1: NVD CVE Lookup** – Look up CVE vulnerability metadata from NVD and sync it to a Google Sheet
- **Webhook 2: NVD CVE Change History** – Track change history for CVEs via NVD and log each update

Each flow:
1. Hits NVD's respective endpoint
2. Uses a custom JS Code node to flatten the nested JSON (see the sketch below)
3. Syncs data to dedicated Google Sheet tabs

🧩 4 nodes: Webhook → API Call → Parse → Sheet Sync

Make sure both flows are activated and the webhooks are exposed for external access. Based on your needs, ensure you have a secure setup, whether hosted internally or in a cloud environment, when running n8n in production.
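As a reference, a minimal sketch of the lookup-flow flattening step; the field paths follow the NVD 2.0 API schema, so verify them against your actual responses.

```js
// Sketch of flattening the CVE lookup response into sheet columns.
// Field paths follow the NVD 2.0 schema; trim to what your sheet needs.
const cve = $input.first().json.vulnerabilities?.[0]?.cve ?? {};
return [{
  json: {
    cveId: cve.id,
    published: cve.published,
    lastModified: cve.lastModified,
    status: cve.vulnStatus,
    description: (cve.descriptions ?? []).find((d) => d.lang === 'en')?.value ?? '',
    cvssV31Score: cve.metrics?.cvssMetricV31?.[0]?.cvssData?.baseScore ?? '',
  },
}];
```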
by hani safaei
This template helps anyone track how often their website appears in Google's AI Overview, a growing part of search results that can't currently be tracked using traditional SEO tools.

With this workflow, users can:
- Input a list of keywords (from Google Search Console or manual research).
- Use SerpApi to pull Google search results.
- Extract AI Overview content and its list of sources.
- Map that information into a structured Google Sheet, including whether your site is listed in those sources.

Setup is straightforward and fully automated, but you'll need:
- A SerpApi key
- A connected Google Sheets account

## Who is this for?
This workflow is designed for SEO professionals, digital marketers, and site owners who want to track their website's visibility in Google AI Overviews.

## What problem does it solve?
AI Overviews are rapidly becoming more common in Google search results. However, there's no tool (yet) that tells you if your website is appearing in those answers. This is a blind spot for SEO. This workflow lets you check your site's presence in AI Overviews at scale, without manual searching.

## What does the workflow do?
The workflow:
1. Takes a list of target keywords (exported from GSC or elsewhere)
2. Uses SerpApi to get search results from Google
3. Extracts the AI Overview block and its sources
4. Checks if your domain is among them (see the sketch below)
5. Saves all results into a Google Sheet

The final Google Sheet will contain:
Keyword | AI Overview Exists | List of Sources | Is my domain listed

## Setup
You'll need:
- A SerpApi API key
- A Google Sheet with your list of keywords
- A connected Google Sheets account in n8n

## How to customize this workflow
- Change the list of keywords (pull from GSC or edit the sheet manually)
- Replace the placeholder domain with your own
- Adjust the Google Sheet column mapping as needed
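A minimal sketch of the domain check, assuming SerpApi returns an ai_overview object with a references array; verify against your actual response shape.

```js
// Sketch of the domain check against the SerpApi response.
const result = $input.first().json;
const myDomain = 'yourdomain.com'; // replace with your own domain
const refs = result.ai_overview?.references ?? []; // shape is an assumption
const sources = refs.map((r) => r.link).filter(Boolean);
return [{
  json: {
    keyword: result.search_parameters?.q ?? '',
    aiOverviewExists: Boolean(result.ai_overview),
    sources: sources.join(', '),
    isMyDomainListed: sources.some((link) => link.includes(myDomain)),
  },
}];
```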
by Gareth B. Davies
An automated backup solution designed for self-hosted n8n users to automatically back up their workflows to Bitbucket, leveraging Bitbucket's free private repository offering. Perfect for maintaining version control of your n8n workflows without additional costs.

## How it works
- Runs on a regular schedule to check all workflows in your n8n instance
- Compares each workflow with its version in Bitbucket (see the sketch below)
- Only uploads workflows that are new or have changed
- Uses basic rate limiting to stay within Bitbucket's API limits
- Formats filenames for easy tracking and includes timestamps in commit messages
- Handles errors gracefully with automatic retries

## Set up steps (10-15 minutes)
1. Create a free Bitbucket account and private repository
2. Create a Bitbucket App Password with repository write access
3. Add Bitbucket credentials to n8n (using your username and app password)
4. Set up n8n API access (generate an API key in your n8n instance)
5. Configure your Bitbucket workspace and repository names in the Set node
6. Optional: Adjust the backup schedule (default: 2 AM daily)

Perfect for n8n self-hosters who want:
- Version control for their workflows
- Automated daily backups
- Free private repository storage
- Easy workflow recovery
- Change tracking over time

The workflow includes basic error handling and rate limiting to ensure reliable backups even with larger numbers of workflows. Adjust your timing based on https://support.atlassian.com/bitbucket-cloud/docs/api-request-limits/.
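The change detection can be as simple as comparing serialized JSON. A minimal sketch follows; the workflow and stored field names are assumptions standing in for the n8n API output and the copy fetched from Bitbucket.

```js
// Illustrative change check: only push a workflow to Bitbucket when its
// serialized JSON differs from the copy already stored in the repository.
const item = $input.first().json;
const current = JSON.stringify(item.workflow ?? null); // from the n8n API (assumed field)
const stored = JSON.stringify(item.stored ?? null);    // from Bitbucket (assumed field)
return [{
  json: {
    name: item.workflow?.name,
    // New workflows have no stored copy, so they always count as changed.
    changed: current !== stored,
  },
}];
```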
by Matheus Pedrosa
## Who is this template for?
This template is ideal for n8n instance administrators, developers, and DevOps teams who need a proactive and organized way to monitor the health of their automations. If you want to be notified about failures as soon as they happen, without having to manually check execution logs, this workflow is for you.

## What does this template do?
This workflow automates error monitoring on your n8n instance. Every hour, it performs the following steps:
1. Queries the n8n API to fetch all executions that have failed in the last hour (see the request sketch below).
2. Groups the errors by workflow to consolidate the information.
3. Builds a rich message for each failed workflow, including the error count.
4. Sends an alert to a Slack channel with a button to open the workflow directly, allowing for immediate investigation.

## Requirements
Before you start, you will need the following configured in your n8n instance:
- **n8n API Credentials:** You need to generate an API key in your n8n instance settings so the workflow can query execution data.
- **Slack Credentials:** A configured **Slack (OAuth2 API)** credential to allow n8n to send messages to your workspace.

## How to set it up
Setup is simple and only takes a few minutes:
1. **Config Node:** In the node named "Config", set the value of baseUrl to your n8n instance's URL (e.g., https://n8n.yourdomain.com). This is crucial for generating the correct workflow links in the Slack message.
2. **Schedule Trigger:** The workflow is pre-configured to run every hour. You can adjust the frequency in this node to fit your needs.
3. **"Get Failed Executions" Node (HTTP Request):** Under Authentication, select 'Header Auth'. In the Credentials field, select your n8n API credential.
4. **"Post to Slack" Node (Slack):** Select your Slack credential. In the Channel field, enter the name of the channel where error notifications should be sent (e.g., #n8n-alerts).
5. **Activate the Workflow!** After these steps, just activate the workflow to start automatic error monitoring.

## How to customize the workflow
You can easily customize this template:
- **Change the Schedule:** Modify the Schedule Trigger node to run at different intervals (every 15 minutes, once a day, etc.).
- **Change the Notification Channel:** Instead of Slack, you can replace the last node to send notifications to Discord, Microsoft Teams, Telegram, or even email.
- **Add More Information:** You can modify the MakeMessage node that generates the message to include more details about the errors, such as the error message or the exact time of failure.
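For reference, the "Get Failed Executions" request boils down to a call like this against the n8n Public API, shown here as a Code-node sketch (the real workflow uses an HTTP Request node). Note that the endpoint filters by status, not by time, so the last-hour filter happens after fetching.

```js
// Equivalent of the "Get Failed Executions" HTTP Request node.
const baseUrl = 'https://n8n.yourdomain.com'; // same value as the Config node
const res = await fetch(`${baseUrl}/api/v1/executions?status=error&limit=100`, {
  headers: { 'X-N8N-API-KEY': 'YOUR_API_KEY' },
});
const { data } = await res.json();
// Keep only executions from the last hour, then group downstream.
const oneHourAgo = Date.now() - 60 * 60 * 1000;
return data
  .filter((e) => new Date(e.startedAt).getTime() >= oneHourAgo)
  .map((e) => ({ json: e }));
```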
by Rui Borges
This workflow automates time tracking using location-based triggers.

## How it works
1. **Trigger:** It starts when you enter or exit a specified location, triggering a shortcut on your iPhone.
2. **Webhook:** The shortcut sends a request to a webhook in n8n (see the sketch below).
3. **Check-In/Check-Out:** The webhook receives the request and records the time and whether it was a "Check-In" or "Check-Out" event.
4. **Google Sheets:** This data is then logged into a Google Sheet, creating a record of your work hours.

## Set up steps
1. **Google Drive:** Connect your Google Drive account.
2. **Google Sheets:** Connect your Google Sheets account.
3. **Webhook:** Set up a webhook node in n8n.
4. **iPhone Shortcuts:** Create two shortcuts on your iPhone, one for "Check-In" and one for "Check-Out".
5. **Configure Shortcuts:** Configure each shortcut to send a request to the webhook with the appropriate "Direction" header.

Setup is easy and takes around 5 minutes.
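For reference, each shortcut's "Get Contents of URL" action maps to a request like the following; the webhook path here is hypothetical.

```js
// What each iPhone shortcut sends, expressed as a fetch call.
// The second shortcut sends Direction: "Check-Out" instead.
await fetch('https://your-n8n.example/webhook/time-tracking', { // hypothetical path
  method: 'POST',
  headers: { Direction: 'Check-In' },
});
```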
by Yaron Been
This cutting-edge n8n automation is a powerful market research tool designed to continuously monitor and capture User-Generated Content (UGC) opportunities on Fiverr. By intelligently scraping, parsing, and logging gig data, this workflow provides:
- **Automated Market Scanning:** Daily scrapes of Fiverr UGC gigs, real-time market intelligence, and consistent, hands-off data collection
- **Intelligent Data Extraction:** Parses complex HTML structures, captures key gig details, and transforms unstructured web data into actionable insights
- **Seamless Data Logging:** Automatic Google Sheets integration, comprehensive gig marketplace tracking, and historical data preservation

## Key Benefits
- 🤖 **Full Automation:** Continuous market research
- 💡 **Smart Filtering:** Detailed UGC gig insights
- 📊 **Instant Reporting:** Real-time market trends
- ⏱️ **Time-Saving:** Eliminates manual research

## Workflow Architecture
### 🔍 Stage 1: Automated Triggering
- **Scheduled Scraping:** Daily gig discovery
- **Precise Timing:** Configurable run intervals
- **Consistent Monitoring:** Always-on market intelligence

### 🌐 Stage 2: Web Scraping
- **HTTP Request:** Fetch Fiverr search results
- **Dynamic Headers:** Bypass potential scraping restrictions
- **Targeted Search:** UGC-specific gig discovery

### 🧩 Stage 3: Data Extraction
- **HTML Parsing:** Extract critical gig information
- **Structured Data Collection:** Gig prices, seller names, gig titles, direct gig URLs

### 📋 Stage 4: Data Logging
- **Google Sheets Integration:** Automatic data storage
- **Historical Tracking:** Build comprehensive gig databases
- **Easy Analysis:** Spreadsheet-ready format

## Potential Use Cases
- **Content Creators:** Market rate research
- **Freelance Platforms:** Competitive intelligence
- **Marketing Agencies:** UGC trend analysis
- **Recruitment Specialists:** Talent pool mapping
- **Business Strategists:** Market opportunity identification

## Setup Requirements
1. **Fiverr Search Configuration:** Targeted search keywords, specific UGC categories
2. **Web Scraping Preparation:** User-agent rotation strategy, potential proxy configuration, robust error handling
3. **Google Sheets Setup:** Connected Google account, prepared spreadsheet, appropriate sharing permissions
4. **n8n Installation:** Cloud or self-hosted instance, import workflow configuration, configure API credentials

## Future Enhancement Suggestions
- 🤖 AI-powered gig trend analysis
- 📊 Advanced data visualization
- 🔔 Real-time price change alerts
- 🧠 Machine learning market predictions
- 🌐 Multi-platform gig tracking

## Ethical Considerations
- Respect Fiverr's Terms of Service
- Implement responsible scraping practices
- Avoid overwhelming target websites
- Use data for legitimate research purposes

## Technical Recommendations
- Implement exponential backoff for requests (see the sketch after this section)
- Use randomized delays between scrapes
- Maintain flexible CSS selector strategies
- Consider rate limiting and IP rotation

## Connect With Me
Ready to unlock market insights?
- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your market research with intelligent, automated workflows!
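As a starting point for the backoff recommendation above, a minimal sketch; the User-Agent value is a placeholder and should be rotated in practice.

```js
// Sketch of exponential backoff with jitter for the scraping stage:
// retry a few times, waiting longer after each failed attempt.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchWithBackoff(url, maxRetries = 4) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, {
      headers: { 'User-Agent': 'Mozilla/5.0' }, // placeholder; rotate in practice
    });
    if (res.ok) return res.text();
    // Exponential backoff: 1s, 2s, 4s, 8s... plus random jitter.
    await sleep(2 ** attempt * 1000 + Math.random() * 500);
  }
  throw new Error(`Failed after ${maxRetries + 1} attempts: ${url}`);
}
```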
by Tony Duffy
## Read and store IoT sensor data with the MQTT Trigger and InfluxDB
tonyduffy@protonmail.com

This workflow is for users who want a practical example of how to obtain data from remote IoT systems using the MQTT protocol in an n8n environment. The template provides the typical n8n node implementation and configuration settings necessary to read and store IoT data.

The workflow reads temperature and humidity data from a remote IoT system, in this case a DHT22 sensor connected to an ESP32 microcontroller. The data is parsed into the correct JSON format and then ingested into an InfluxDB data bucket. From there the stored temperature and humidity values can be displayed in real time. The workflow can be easily modified to read data from any MQTT-driven device.

## Remote IoT sensor setup
The ESP32 controller with the DHT22 sensor runs on a Wokwi simulator. The simulator uses MicroPython to publish an MQTT "wokwi-weather" topic with the temperature and humidity payloads to an online Mosquitto MQTT broker. The n8n MQTT Trigger node subscribes to the topic on the broker and reads the payload values when any changes are published. The Code node then prepares the payload in JSON format. The HTTP Request node ingests the data into an InfluxDB bucket.

## How to customise this workflow to your needs

### Wokwi IoT ESP32 simulator
You will need to set up a free account at Wokwi.com. Once created, search for the project "Micro-Python MQTT Weather Logger (ESP32)". When the MQTT weather logger project is open, change lines 28 and 29 to the following:

```python
MQTT_CLIENT_ID = ""                 # line 28
MQTT_BROKER = "test.mosquitto.org"  # line 29
```

You can then start the simulation by clicking on the green arrow; it will connect to the Mosquitto broker and the "wokwi-weather" topic will be published. By clicking on the DHT22 sensor, the temperature and humidity bars will appear and you can change the values to send updated payloads to the broker.

### InfluxDB
You will require access to a functioning InfluxDB database to use this workflow.

Note: you will have to provide the following for the HTTP Request node to connect to InfluxDB:
- The URL and port of the desired InfluxDB instance (in this case InfluxDB is running locally on port 8086, i.e. http://localhost:8086).
- An InfluxDB bucket for the data (in this case the created bucket is named "wokwi-data").
- The Organization ID of the InfluxDB instance, which can be obtained from the InfluxDB admin page.
- A generated API token to read and write to the InfluxDB bucket, created from the InfluxDB admin page.

### n8n workflow
The MQTT Trigger node is configured to subscribe to the "wokwi-weather" topic on the test Mosquitto MQTT broker. It reads the temperature and humidity data sent by the ESP32. The Code node uses JavaScript to move the temperature and humidity payloads into JSON format; this is flexible and easily modified. The HTTP Request node posts the payloads to the InfluxDB bucket (see the sketch below). When the above is configured, the workflow should function correctly.

Thanks to the many who have downloaded this template. Let me know what you would like to build. Contact me at tonyduffy@protonmail.com
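For reference, a sketch of the write the HTTP Request node performs, assuming InfluxDB 2.x: the /api/v2/write endpoint expects line protocol rather than JSON, so the values are serialized to a single line. The measurement name wokwi_weather is an assumption.

```js
// Sketch of the InfluxDB 2.x write performed by the HTTP Request node.
// The write endpoint accepts line protocol, so the Code node's output
// is converted to one line per measurement.
const { temperature, humidity } = $input.first().json;
const line = `wokwi_weather temperature=${temperature},humidity=${humidity}`;

await fetch(
  'http://localhost:8086/api/v2/write?orgID=YOUR_ORG_ID&bucket=wokwi-data&precision=s',
  {
    method: 'POST',
    headers: {
      Authorization: 'Token YOUR_API_TOKEN',
      'Content-Type': 'text/plain; charset=utf-8',
    },
    body: line,
  }
);
return [{ json: { written: line } }];
```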