by Robert Breen
Automate company enrichment directly in Google Sheets using Dun & Bradstreet (D&B) Data Blocks. This workflow reads DUNS numbers from a sheet, fetches a Bearer token (via Basic Auth → /v3/token), calls the Data Blocks API for each row (/v1/data/duns/...), extracts Paydex, and appends or updates the sheet. A Filter node skips rows already marked Complete for efficient, idempotent runs.

**What this template does**
- Pulls DUNS values from a Google Sheet
- (Option A) Uses an HTTP Header Auth credential for D&B, or (Option B) dynamically fetches a Bearer token from /v3/token (Basic Auth)
- Calls D&B Data Blocks per row to retrieve payment insights
- Extracts Paydex and upserts results back to the sheet
- Skips rows already Complete

**Who's it for**
- RevOps/Data teams enriching company lists at scale
- SDR/Marketing teams validating firmographic/credit signals
- BI/Automation builders who want a no-code/low-code enrichment loop

**How it works (node-by-node)**
1. **Get Companies** (Google Sheets) – Reads rows with at least duns, paydex, Complete.
2. **Only New Rows** (Filter) – Passes only rows where Complete is empty.
3. **D&B Info** (HTTP Request) – Calls Data Blocks for each DUNS using a header credential (Authorization: Bearer <token>).
4. **Keep Score** (Set) – Maps nested JSON to a single Paydex field: {{$json.organization.businessTrading[0].summary[0].paydexScoreHistory[0].paydexScore}}
5. **Append to g-sheets** (Google Sheets) – Append or Update by duns, writing paydex and setting Complete = Yes.

> The workflow also includes Sticky Notes with in-canvas setup help.

**Setup instructions (from the JSON)**

**1) Connect Google Sheets (OAuth2)**
- In n8n → Credentials → New → Google Sheets (OAuth2) and sign in.
- Use/prepare a sheet with columns like: duns, paydex, Complete.
- In your Google Sheets nodes, select your credential and target spreadsheet/tab.
- For upsert behavior, set Operation to Append or Update and Matching column to duns.

> Replace any example Sheet IDs/URLs with your own (avoid publishing private IDs).

**2) Get a D&B Bearer Token (Basic Auth → /v3/token): Optional Dynamic Token Node**
- Add/enable an HTTP Request node named Get Bearer Token1 and configure:
  - Authentication: Basic Auth (your D&B username/password)
  - Method: POST
  - URL: https://plus.dnb.com/v3/token
  - Body Parameters: grant_type = client_credentials
  - Headers: Accept = application/json
- Execute to receive access_token.
- Reference the token in other nodes via: Authorization: Bearer {{$node["Get Bearer Token1"].json["access_token"]}}

> Security: Don't hardcode tokens. Prefer credentials or fetch dynamically.

**3) Call D&B Data Blocks (use Header Auth or dynamic token)**
- Node: D&B Info (HTTP Request)
- **Authentication:** Header Auth (recommended)
- **URL:** https://plus.dnb.com/v1/data/duns/{{ $json.duns }}?blockIDs=paymentinsight_L4_v1&tradeUp=hq&customerReference=customer%20reference%20text&orderReason=6332
- **Headers:** Accept = application/json
- If not using a stored Header Auth credential, set: Authorization = Bearer {{$node["Get Bearer Token1"].json["access_token"]}}

> {{ $json.duns }} is resolved from the current row provided by Get Companies.

**4) Map Paydex and Upsert to Google Sheets**
- **Keep Score (Set)**
  - Field Paydex (Number): {{$json.organization.businessTrading[0].summary[0].paydexScoreHistory[0].paydexScore}}
- **Append to g-sheets (Google Sheets)**
  - Operation: Append or Update
  - Matching column: duns
  - Columns mapping:
    - duns = {{ $('Get Companies').item.json.duns }}
    - paydex = {{ $json.Paydex }}
    - Complete = Yes
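For orientation, here is a trimmed, illustrative sketch of the response shape that the Paydex expression above expects. Actual D&B payloads vary by block, subscription, and entitlement, so treat this as an assumption rather than the exact schema:

```json
{
  // illustrative, trimmed response shape – real D&B payloads vary by block and entitlement
  "organization": {
    "businessTrading": [
      {
        "summary": [
          {
            "paydexScoreHistory": [
              { "paydexScore": 80 }
            ]
          }
        ]
      }
    ]
  }
}
```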
**Test checklist**
1. Add a few test DUNS rows (leave Complete blank).
2. Run the workflow and confirm Only New Rows passes the expected items.
3. Check that D&B Info returns payment insight data.
4. Confirm Paydex is set and the row is updated with Complete = Yes.

**Security & best practices**
- Store secrets in Credentials (HTTP Header Auth/Basic Auth).
- Avoid publishing real Sheet IDs or tokens in screenshots/notes.
- Consider rate limits and backoff for large sheets.
- Log/handle API errors (e.g., invalid DUNS or expired tokens).

**Troubleshooting**
- **401/403 from D&B:** Verify credentials/token; ensure the correct environment and entitlements.
- **Missing Paydex path:** D&B responses vary by subscription/data availability; add guards (IF node) before mapping.
- **Rows not updating:** Confirm *Append or Update* is used and the *Matching column* exactly matches your sheet header duns.
- **Filtered-out rows:** Ensure Complete is truly empty (no spaces) for new items.

**Customize further**
- Enrich additional fields (e.g., viability score, portfolio comparison, credit limits).
- Add retry logic, batching, or scheduled triggers.
- Push results to a CRM/DB or notify teams via Slack/Email.

**Contact**

Need help customizing this (e.g., enriching more fields, normalizing responses, or bulk-processing large sheets)?
- Email: robert@ynteractive.com
- LinkedIn: https://www.linkedin.com/in/robert-breen-29429625/
- Website: https://ynteractive.com
by Milan Vasarhelyi - SmoothWork
**Video Introduction**

Want to automate your inbox or need a custom workflow? Book a Call | DM me on Linkedin

**Overview**

This workflow automates invoice creation in QuickBooks Online by importing data directly from a Google Sheet. Instead of manually entering invoice details one by one, this template reads structured data from your spreadsheet and automatically generates corresponding invoices in QuickBooks, saving time and reducing data entry errors.

**Key Features**
- Automatically reads invoice data from Google Sheets, including customer IDs, descriptions, and amounts
- Creates properly formatted invoices in QuickBooks Online with line items
- Eliminates manual data entry and reduces human error
- Scalable solution for processing multiple invoices at once

**Common Use Cases**
- Batch invoice generation from sales or order data
- Automated billing workflows for recurring services
- Syncing invoice data from external systems via Google Sheets
- Streamlining accounting processes for small businesses

**Setup and Configuration**

QuickBooks Developer Account:
1. Register at developer.intuit.com and create a new app in the App dashboard
2. Select 'Accounting' scope permissions for your application
3. Copy your Client ID and Client Secret from the Keys & Credentials section
4. Add the n8n OAuth redirect URL to your app's authorized redirect URIs
5. In n8n, create a QuickBooks Online OAuth2 credential using your Client ID and Secret
6. Set Environment to 'Sandbox' for testing or 'Production' for live data
7. Click 'Connect my account' and authorize the connection

Google Sheets Setup:
1. Connect your Google Sheets account in n8n using OAuth2 authentication
2. Update the 'Config - Sheet URL' node with your Google Sheets URL
3. Your sheet must contain these columns: CustomerId (QuickBooks customer ID), Description (line item description), and Amount (invoice amount)

Invoice Customization: In the 'Create Invoice in QuickBooks' node, adjust the itemId and Qty fields to match your QuickBooks accounting setup and product catalog.
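To give a feel for what the node builds per sheet row, here is a minimal sketch of a QuickBooks Online invoice payload, assuming one line item per row where CustomerRef comes from CustomerId, Description from Description, and Amount from Amount. The ItemRef and Qty values are placeholders you would swap for IDs from your own catalog:

```json
{
  // illustrative values – replace ItemRef/Qty with IDs from your own QuickBooks catalog
  "CustomerRef": { "value": "123" },
  "Line": [
    {
      "DetailType": "SalesItemLineDetail",
      "Amount": 250.00,
      "Description": "Monthly consulting retainer",
      "SalesItemLineDetail": {
        "ItemRef": { "value": "1" },
        "Qty": 1
      }
    }
  ]
}
```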
by Omar Kennouche
**How it works**
- Triggers manually or on schedule (03:00 daily by default)
- Fetches workflows tagged backup-workflows via the n8n API
- Normalizes workflow names and applies the [client: NAME] tag convention
- Prepares JSON in the same structure as an n8n UI export
- Checks the GitLab repository:
  - Create new file if missing
  - Update file if content differs
  - Skip if unchanged
- Logs results with a recap (created, updated, unchanged, total)

**Set up steps**
1. Configure your GitLab credentials in n8n
2. Create a repository and branch for workflow backups
3. Set global variables (owner, project, branch, backup path) – see the sketch below
4. Tag workflows to include with backup-workflows
5. Run manually once to test, then enable the schedule
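As a rough illustration of step 3, the global variables could be defined in a Set node along these lines. The variable names come from the list above, but the values shown are assumptions you would replace with your own GitLab details:

```json
{
  // hypothetical values – set these to match your own GitLab repository
  "owner": "your-gitlab-group",
  "project": "n8n-workflow-backups",
  "branch": "main",
  "backup_path": "workflows"
}
```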
by Jonathan
This workflow creates a project in Clockify that any user can track time against. Syncro should be set up with a webhook via Notification Set for Ticket - created (for anyone).

> This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
by Tom
This easy-to-extend workflow automatically serves a static HTML page when a URL is accessed in a browser.

**Prerequisites**
- Basic knowledge of HTML

**Nodes**
- Webhook node triggers the workflow on an incoming request.
- Respond to Webhook node serves the HTML page in response to the webhook.
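A minimal sketch of how the Respond to Webhook node's parameters might look in the exported workflow JSON, assuming it responds with inline HTML and a text/html content type. Parameter names and the response-header structure are assumptions based on current node versions, so verify them against your own n8n instance:

```json
{
  // sketch only – verify parameter names against the Respond to Webhook node in your n8n version
  "respondWith": "text",
  "responseBody": "<html><body><h1>Hello from n8n</h1></body></html>",
  "options": {
    "responseHeaders": {
      "entries": [
        { "name": "Content-Type", "value": "text/html" }
      ]
    }
  }
}
```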
by Vytenis
Fully automate deep research from start to finish: scrape Google Search results, select relevant sources, scrape & analyze each source in parallel, and generate a comprehensive research report.

**Who is this for?**

This workflow is for anyone who needs to research topics quickly and thoroughly: content creators, marketers, product managers, researchers, journalists, students, or anyone seeking deep insights without spending hours browsing websites. If you find yourself opening dozens of browser tabs to piece together information, this template will automate that entire process and deliver comprehensive reports in minutes.

**How it works**
1. Submit your research questions through n8n's chat interface (include as much context as you need)
2. AI generates strategic search queries to explore different angles of your topic (customize the number of queries as needed)
3. Oxylabs scrapes Google Search results for each query (up to 50 results per query)
4. AI evaluates and selects sources that are the most relevant and authoritative
5. Content extraction runs in parallel as Oxylabs scrapes each source and AI extracts key insights
6. Summaries are collected in n8n's data table for final processing (a possible row shape is sketched at the end of this section)
7. AI synthesizes everything into a comprehensive research report with actionable insights

See the complete step-by-step tutorial on the n8n blog.

**Requirements**
- **Oxylabs AI Studio API key** – Get a free API key with 1000 credits
- **OpenAI API key** (or use alternatives like Claude, Gemini, and local Ollama LLMs)

**Setup**
1. Install Oxylabs AI Studio as shown on this page
2. Set your API keys: Oxylabs AI Studio, OpenAI
3. Create a data table
4. Select the table name in each data table node
5. Create a sub-workflow:
   - Select the 3 nodes (Scrape content, Summarize content, Insert row)
   - Right-click and select "Convert 3 nodes to sub-workflow"
6. Edit the sub-workflow settings for parallel execution:
   - Mode: Run once for each item
   - Options → Add Option → disable "Wait For Sub-Workflow Completion"

Once you finish all these setup steps, you can run the workflow through n8n's chat interface. For example, send the following message:

I'm planning to build a wooden summer house and would appreciate guidance on the process. What are the key considerations I should keep in mind from planning through completion? I'm particularly interested in the recommended construction steps and which materials will ensure long-term durability and quality.

**Customize this workflow for your needs**

Feel free to modify the workflow to fit the scale and final output your project requires:
- To reuse this workflow, clear the data table after the final analysis by adding a Data table node with the Delete row(s) action
- **Scale up** by processing more search queries, increasing results per query beyond 10, and selecting additional relevant URLs
- **Enable JavaScript rendering** in the Oxylabs AI Studio (Scraper) node to ensure all content is gathered
- **Adjust the system prompts** in LLM nodes to fit your specific research goals
- **Explore other AI Studio apps** like Browser Agent for interactive browser control or Crawler for mapping entire websites
- **Connect other nodes** like Google Sheets, Notion, Airtable, or webhooks to route results where you need them
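Purely as an illustration of the data table step, a row written by the sub-workflow's Insert row node might look something like this. The column names are hypothetical; use whatever columns you defined when creating your data table:

```json
{
  // hypothetical columns – match these to the data table you created
  "url": "https://example.com/wooden-summer-house-guide",
  "summary": "Recommends a concrete pier foundation, pressure-treated framing, and annual sealing for long-term durability."
}
```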
by Shahrear
Automatically process construction blueprints into structured Google Sheets entries with VLM extraction.

**What this workflow does**
- Monitors Google Drive for new blueprints in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for VLM analysis
- Fetches details from the construction.blueprint domain as JSON
- Appends normalized fields to a Google Sheet as a new row

**Setup**

Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick Setup:
1. Create the Drive folder you want to watch and copy its Folder ID
2. Create a Google Sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, document_type, document_number, issue_date, author_name, drawing_title_numbers, revision_history, job_name, address, drawing_number, revision, drawn_by, checked_by, scale_information, agency_name, document_title, blueprint_id, blueprint_status, blueprint_owner, blueprint_url (an example row is sketched at the end of this template)
3. Configure Google Drive OAuth2 for the trigger and download nodes
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab
6. Test by uploading a sample file to the watched Drive folder, then activate

**Perfect for**
- Converting uploaded construction blueprint documents into clean text
- Organizing extracted blueprint details into structured sheets
- Quickly accessing key attributes from technical files
- Centralized archive of blueprint-to-text conversions

**Key Benefits**
- **End to end automation** from Drive upload to structured Sheet entry
- **Accurate text extraction** of construction blueprint documents
- **Organized attribute mapping** for consistent records
- **Searchable archives** directly in Google Sheets
- **Hands-free processing** after setup

**How to customize**

Extend by adding:
- Version control that links revisions of the same drawing and highlights superseded rows
- Confidence scores per extracted field with threshold-based routing to manual or AI review
- An auto-generated, human-readable summary column for quick scanning of blueprint details
- Splitting large multi-sheet PDFs into per-drawing rows with individual attributes
- Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
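To make the attribute mapping concrete, an appended row could look roughly like this. It is a hypothetical, trimmed subset of the headers listed above; the actual values depend entirely on what VLM Run extracts from your blueprint:

```json
{
  // hypothetical, trimmed example row – values are invented for illustration
  "timestamp": "2025-01-15T09:30:00Z",
  "file_name": "site-plan-A101.pdf",
  "document_type": "architectural drawing",
  "drawing_number": "A-101",
  "revision": "B",
  "drawn_by": "J. Smith",
  "scale_information": "1/8 in = 1 ft",
  "job_name": "Riverside Office Park"
}
```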
by Shahrear
Automatically process healthcare claims into structured Google Sheets entries with VLM Run extraction.

**What this workflow does**
- Monitors Google Drive for new files in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for AI transcription or analysis
- Fetches extra details from the healthcare.claims-processing domain as JSON
- Appends normalized fields to a Google Sheet as a new row

**Setup**

Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick Setup:
1. Create the Drive folder you want to watch and copy its Folder ID
2. Create a Google Sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, form_type, carrier_name, patient_name, patient_birth_date, patient_sex, patient_address, insurance_type, insurance_id, insured_name, total_charge, amount_due, amount_paid, hospitalization_from, hospitalization_to, referring_physician_name, processing_notes, …other claim fields as needed (an example row is sketched at the end of this template)
3. Configure Google Drive OAuth2 for the trigger and download nodes
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab
6. Test by uploading a sample file to the watched Drive folder, then activate

**Perfect for**
- Centralized intake of healthcare claim documents with instant AI summaries
- Claims and operations teams collecting structured claim insights
- Customer support attachments that need quick triage to a Sheet
- Compliance and audit logs for claim documents

**Key Benefits**
- End to end automation from Drive to Sheets
- Accurate AI output via VLM Run with optional timestamps
- Domain enrichment from healthcare.claims-processing JSON
- Clean, searchable logs in Google Sheets
- No manual steps after activation

**How to customize**

Extend by adding:
- OCR tuning and field validation for claim forms
- Per-type routing for PDFs, images, or scanned forms
- Slack notifications on each new Sheet append
- Keyword extraction and auto-tagging for claim categories
- An error branch that records failures to a second Sheet
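For a sense of the normalized output, a hypothetical appended row (a subset of the headers above, with invented values) might look like:

```json
{
  // hypothetical, trimmed example – all values are invented for illustration
  "file_name": "cms1500-claim-0042.pdf",
  "form_type": "CMS-1500",
  "carrier_name": "Acme Health Insurance",
  "patient_name": "Jane Doe",
  "insurance_id": "ABC123456789",
  "total_charge": 1250.00,
  "amount_paid": 0.00,
  "amount_due": 1250.00
}
```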
by Harshil Agrawal
This workflow allows you to add a new user to your Notion database when an invite gets created via Calendly.

- **Calendly Trigger node:** The Calendly node will trigger the workflow when an invite gets created.
- **Notion node:** This node will create a new record using the information received from the previous node.
by Jan Oberhauser
A simple workflow that allows you to receive data from a Google Sheet via a "REST" endpoint.

1. Wait for Webhook Call
2. Get data from Google Sheet
3. Return data

Example Sheet: https://docs.google.com/spreadsheets/d/17fzSFl1BZ1njldTfp5lvh8HtS0-pNXH66b7qGZIiGRU
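Once the workflow is active, calling the webhook URL returns the sheet rows as JSON. With a small two-column example sheet, the response might look roughly like this (column names and values are illustrative; the actual fields mirror your sheet's headers):

```json
[
  // illustrative – the fields mirror your sheet's column headers
  { "name": "Ada Lovelace", "email": "ada@example.com" },
  { "name": "Alan Turing", "email": "alan@example.com" }
]
```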
by David Olusola
**Overview**

This workflow regularly backs up a Google Sheet by exporting its data and saving it as a new file (CSV or XLSX) in a specified folder within your Google Drive. This ensures data redundancy and historical versions.

Use Case: Critical business data backup, audit trails, historical data snapshots.

**How It Works**

This workflow operates in three main steps:
1. Scheduled Trigger: A Cron node triggers the workflow at a set interval (e.g., daily, weekly).
2. Read Google Sheet Data: A Google Sheets node reads all data from the specified tab of your target Google Sheet.
3. Upload to Google Drive: A Google Drive node takes the data read from the sheet, converts it into a file (e.g., CSV or XLSX format), and uploads this file to a pre-defined folder in your Google Drive, with a dynamic filename including the date for versioning.

**Setup Steps**

To get this workflow up and running, follow these instructions:

Step 1: Create Google Sheets and Google Drive Credentials in n8n
- In your n8n instance, go to Credentials in the left sidebar.
- Ensure you have a "Google Sheets OAuth2 API" credential set up. If not, create one.
- Ensure you have a "Google Drive OAuth2 API" credential set up. If not, create one.
- Make note of their Credential Names.

Step 2: Prepare Your Google Sheet and Drive Folder
- Source Google Sheet:
  - Identify the Google Sheet you want to back up.
  - Copy its Document ID (from the URL).
  - Note the Sheet Name (or GID) of the specific tab you want to back up.
- Destination Google Drive Folder:
  - Go to your Google Drive (drive.google.com).
  - Create a new folder for your backups (e.g., Google Sheets Backups).
  - Copy the Folder ID from its URL.

Step 3: Import the Workflow JSON

Step 4: Configure the Nodes
- Read Google Sheet Data node:
  - Select your Google Sheets credential.
  - Replace YOUR_SOURCE_GOOGLE_SHEET_ID with the ID of the Google Sheet you want to back up.
  - Replace Sheet1 with the exact name of the tab you want to back up.
- Upload Backup to Google Drive node:
  - Select your Google Drive credential.
  - Replace YOUR_DESTINATION_GOOGLE_DRIVE_FOLDER_ID with the ID of the Google Drive folder where you want to store backups.
  - File Type: The fileType is set to csv. You can change this to xlsx if you prefer an Excel format for the backup (though CSV is often simpler for raw data backups).

Step 5: Activate and Test the Workflow
- Click the "Activate" toggle button.
- To test immediately, click "Execute Workflow".
- Check your Google Drive backup folder. A new file named something like backup_Sheet1_2025-07-26.csv should appear.
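For the date-stamped filename, the Google Drive node's file name parameter can use an n8n expression along these lines. This is a sketch under the assumption that your node uses the filename pattern shown above; adapt the sheet name and date format to your setup:

```json
{
  // sketch of the file name parameter – the leading "=" marks it as an n8n expression
  "name": "=backup_Sheet1_{{ $now.toFormat('yyyy-MM-dd') }}.csv"
}
```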
by Baptiste Fort
**Export Google Search Console Data to Airtable Automatically**

If you've ever downloaded CSV files from Google Search Console, opened them in Excel, cleaned the weird formatting, and pasted them into a sheet just to get a simple report… this workflow is made for you.

**Who Is This Workflow For?**

This automation is perfect for:
- **SEO freelancers and consultants** – who want to track client performance without wasting time on manual exports.
- **Marketing teams** – who need fresh daily/weekly reports to check what keywords and pages are performing.
- **Website owners** – who just want a clean way to see how their site is doing without logging into Google Search Console every day.

Basically, if you care about SEO but don't want to babysit CSV files, this workflow is your new best friend.

If you need a professional n8n agency to build advanced data automation workflows like this, check out Vision IA's n8n automation services.

**What Does It Do?**

Here's the big picture:
1. It runs on a schedule (every day, or whenever you want).
2. It fetches data directly from the Google Search Console API.
3. It pulls 3 types of reports: By Query (keywords people used), By Page (URLs that ranked), By Date (daily performance).
4. It splits and cleans the data so it's human-friendly.
5. It saves everything into Airtable, organized in three tables.

End result: every time you open Airtable, you have a neat SEO database with clicks, impressions, CTR, and average position, with no manual work required.

**Prerequisites**

You'll need a few things to get started:
- Access to Google Search Console.
- A Google Cloud project with the Search Console API enabled.
- An Airtable account to store the data.
- An automation tool that can connect APIs (like the one we're using here).

That's it!

**Step 1: Schedule the Workflow**

The very first node in the workflow is the Schedule Trigger.
- **Why?** So you don't have to press "Run" every day.
- **What it does** – It starts the whole workflow at fixed times.

In the JSON, you can configure things like:
- Run every day at a specific hour (e.g., 8 AM).
- Or run every X hours/minutes if you want more frequent updates.

This is the alarm clock of your automation.

**Step 2: Set Your Domain and Time Range**

Next, we define the site and the time window for the report. In the JSON, there's a Set node with two important parameters:
- domain – your website (example: https://www.vvv.fr/).
- days – how many days back you want the data (default: 30).

Changing these two values updates the whole workflow. Super handy if you want 7-day reports instead of 30.

**Step 3: Fetch Data from Google Search Console**

This is where the workflow talks to the API. There are 3 HTTP Request nodes:

1. **Get Query Report** – Pulls data grouped by search queries (keywords). Parameters in the JSON:
   - startDate = today - 30 days
   - endDate = today
   - dimensions = "query"
   - rowLimit = 25000 (maximum rows the API can return)
2. **Get Page Report** – Same idea, but grouped by page URLs. Parameters:
   - dimensions = "page"
   - Same dates and row limit.
3. **Get Date Report** – This one groups performance by date. Parameters:
   - dimensions = "date"
   - You get a day-by-day performance view.

Each request returns rows like this:

```json
{
  "keys": ["example keyword"],
  "clicks": 42,
  "impressions": 1000,
  "ctr": 0.042,
  "position": 8.5
}
```
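And the corresponding request body each node posts to the Search Console searchanalytics.query endpoint looks roughly like this (the dates below are placeholders computed from the days setting at run time; swap the dimension per report):

```json
{
  // dates are placeholders – the workflow computes them from the "days" setting at run time
  "startDate": "2025-06-01",
  "endDate": "2025-07-01",
  "dimensions": ["query"],
  "rowLimit": 25000
}
```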
**Step 4: Split the Data**

The API sends results in a big array (rows). That's not very usable directly, so we add a Split Out node for each report. What it does: it breaks the array into single items, 1 item per keyword, per page, or per date. This way, each line can be saved individually into Airtable. Think of it like opening a bag of candy and laying each one neatly on the table.

**Step 5: Clean and Rename Fields**

After splitting, we use Edit Fields nodes to make the data human-friendly. For example:
- In the Query report → rename keys[0] into Keyword.
- In the Page report → rename keys[0] into page.
- In the Date report → rename keys[0] into date.

This is also where we keep only the useful fields:
- Keyword / page / date
- clicks
- impressions
- ctr
- position

**Step 6: Save Everything into Airtable**

Finally, the polished data is sent into Airtable. In the JSON, there are 3 Airtable nodes:
- **Queries table** – stores all the keywords.
- **Pages table** – stores all the URLs.
- **Dates table** – stores day-by-day metrics.

Each node is set to:
- **Operation** = Create (adds a new record).
- **Base** = Search Console Reports.
- **Table** = Queries, Pages, or Dates.

**Field Mapping**

For Queries:
- Keyword → {{ $json.Keyword }}
- clicks → {{ $json.clicks }}
- impressions → {{ $json.impressions }}
- ctr → {{ $json.ctr }}
- position → {{ $json.position }}

Same logic for Pages and Dates, just replace Keyword with page or date.

**Expected Output**

Every time this workflow runs:
- The **Queries table** fills with fresh keyword performance data.
- The **Pages table** shows how your URLs performed.
- The **Dates table** tracks the evolution day by day.

In Airtable, you now have a complete SEO database with no manual exports.

**Why This Is Awesome**
- No more messy CSV exports.
- Data is always up-to-date.
- You can build Airtable dashboards, filters, and interfaces.
- Easy to adapt: just change domain or days to customize.

And the best part? You can spend the time you saved on actual SEO improvements instead of spreadsheet gymnastics.

**Need Help Automating Your Data Workflows?**

This n8n workflow is perfect for automating SEO reporting and data collection. If you want to go further with document automation, file processing, and data synchronization across your tools, our agency specializes in building custom automation systems.

Explore our document automation services: Vision IA – Document Automation Agency. We help businesses automate their data workflows, from collecting reports to organizing files and syncing information across CRMs, spreadsheets, and databases, all running automatically.

Questions about this workflow or other automation solutions? Visit Vision IA or reach out for a free consultation.