by Lucas Peyrin
**How it works**

Ever had binary data (like images, PDFs, or files) disappear in your n8n workflow after an intermediate node processed it? This workflow shows how to re-access and re-attach binary data from any previous node, even if it was dropped along the way. Think of it as a reliable backup copy of your file that stays available no matter what happens to the original as it moves through your workflow.

Here's how this template works step by step:

1. **Initial Binary Fetch**: The workflow starts by fetching a binary image (the n8n logo) from a URL using an HTTP Request node. This is our original binary data.
2. **Simulated Data Loss**: A Set node then processes this data. Crucially, by default, Set nodes (and many others) do not pass binary data on to subsequent nodes. This step intentionally simulates a common scenario where your binary data seems to "disappear" from the workflow's output.
3. **Re-Access and Re-Attach**: The core of the solution is a Code node. It uses the n8n expression `$(nodeName).item` to reach back to the node that originally produced the binary data (**Get n8n Logo (Binary)**). It then retrieves that binary data and uses `this.helpers.prepareBinaryData()` to correctly re-attach it to the current item, making it available to all subsequent nodes.

**Set up steps**

Setup time: 0 minutes! This is a self-contained tutorial workflow, so no external accounts or credentials are required.

1. Click the "Execute Workflow" button to run it.
2. Observe the output of the **Re-Access Binary Data from Previous Node** node to see the binary data successfully re-attached.

**Important for customization**: If you adapt this technique to your own workflows, remember to update the `previousNodeName` variable within the **Re-Access Binary Data from Previous Node** (Code) node to match the exact name of the node that originally produced the binary data you wish to retrieve.
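The Code node in step 3 can be sketched roughly as follows. This snippet only runs inside an n8n Code node (it relies on n8n's `$()` expression and `this.helpers`), and it assumes the source node's binary payload is available inline as base64, which depends on your instance's binary-data storage mode:

```javascript
// n8n Code node sketch (runs only inside n8n, not standalone).
// Change this to the exact name of the node that produced the binary.
const previousNodeName = 'Get n8n Logo (Binary)';

// Reach back to the original node's item and grab its first binary property.
const sourceItem = $(previousNodeName).item;
const binaryKey = Object.keys(sourceItem.binary)[0]; // e.g. 'data'
const binary = sourceItem.binary[binaryKey];

// Re-create the binary metadata on the current item so downstream nodes see it.
const newBinary = await this.helpers.prepareBinaryData(
  Buffer.from(binary.data, 'base64'),
  binary.fileName,
  binary.mimeType
);

return [{ json: $json, binary: { [binaryKey]: newBinary } }];
```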
by Qandil
**What it does**

A CI/CD quality gate that blocks deployments when WAF protection is insufficient. Your pipeline sends a webhook with the target URL, the workflow runs WAFtester scans, and it returns a pass/fail HTTP response the pipeline can gate on.

**About WAFtester**

WAFtester is an open-source CLI for testing Web Application Firewalls. It ships 27 MCP tools, 2,800+ attack payloads across 18 categories (SQLi, XSS, SSRF, SSTI, command injection, XXE, and more), detection signatures for 26 WAF vendors and 9 CDNs, and enterprise-grade assessment with F1/MCC scoring and letter grades (A+ through F).

GitHub: github.com/waftester/waftester
Docs: Installation | Examples | Commands

**Who it's for**

- **DevOps teams** enforcing security gates in CI/CD
- **Platform engineers** automating deployment approvals
- **Security teams** requiring pre-deploy WAF validation

**How it works**

The workflow has seven nodes:

1. **Webhook** → Receives a POST with `{"target": "https://staging.example.com", "categories": ["sqli", "xss"]}`
2. **Detect WAF** → Calls WAFtester's `detect_waf` tool to fingerprint the WAF vendor
3. **Start Scan** → Launches an async scan task with the requested attack categories
4. **Wait** → Pauses to let the scan run
5. **Poll Results** → Calls `get_task_status` to retrieve completed results
6. **Evaluate** → Compares the detection rate against `WAF_PASS_THRESHOLD`
7. **Respond** → Returns HTTP 200 (pass, deploy allowed) or HTTP 422 (fail, deploy blocked) with bypass details

**CI/CD integration example**

In your pipeline:

```bash
RESPONSE=$(curl -s -w "%{http_code}" -o body.json \
  -X POST https://your-n8n/webhook/waf-gate \
  -H "Content-Type: application/json" \
  -d '{"target": "https://staging.example.com", "categories": ["sqli", "xss"]}')
if [ "$RESPONSE" != "200" ]; then echo "WAF gate failed"; exit 1; fi
```
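The Evaluate/Respond pair boils down to a threshold comparison. A minimal sketch, assuming the scan summary reports a detection rate between 0.0 and 1.0 (the field scale and the default threshold value are illustrative; the template reads the threshold from `WAF_PASS_THRESHOLD`):

```python
# Minimal sketch of the Evaluate -> Respond decision.
# Assumes detection_rate is the fraction of attack payloads the WAF blocked.
def waf_gate(detection_rate: float, threshold: float = 0.9) -> int:
    """Return the HTTP status code the Respond node would send."""
    return 200 if detection_rate >= threshold else 422
```

A pipeline then only needs to check the status code, as the curl example above shows.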
by Milan Vasarhelyi - SmoothWork
**Video Introduction**

Want to automate your inbox or need a custom workflow? Book a Call | DM me on LinkedIn

**Overview**

This workflow automates invoice creation in QuickBooks Online by importing data directly from a Google Sheet. Instead of entering invoice details manually one by one, this template reads structured data from your spreadsheet and automatically generates the corresponding invoices in QuickBooks, saving time and reducing data-entry errors.

**Key features**

- Automatically reads invoice data from Google Sheets, including customer IDs, descriptions, and amounts
- Creates properly formatted invoices in QuickBooks Online with line items
- Eliminates manual data entry and reduces human error
- Scalable solution for processing multiple invoices at once

**Common use cases**

- Batch invoice generation from sales or order data
- Automated billing workflows for recurring services
- Syncing invoice data from external systems via Google Sheets
- Streamlining accounting processes for small businesses

**Setup and configuration**

QuickBooks Developer Account:

1. Register at developer.intuit.com and create a new app in the App dashboard
2. Select the 'Accounting' scope permissions for your application
3. Copy your Client ID and Client Secret from the Keys & Credentials section
4. Add the n8n OAuth redirect URL to your app's authorized redirect URIs
5. In n8n, create a QuickBooks Online OAuth2 credential using your Client ID and Secret
6. Set Environment to 'Sandbox' for testing or 'Production' for live data
7. Click 'Connect my account' and authorize the connection

Google Sheets Setup:

1. Connect your Google Sheets account in n8n using OAuth2 authentication
2. Update the 'Config - Sheet URL' node with your Google Sheets URL
3. Your sheet must contain these columns: CustomerId (QuickBooks customer ID), Description (line item description), and Amount (invoice amount)

Invoice Customization: In the 'Create Invoice in QuickBooks' node, adjust the itemId and Qty fields to match your QuickBooks accounting setup and product catalog.
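Under the hood, the 'Create Invoice in QuickBooks' node posts a payload shaped roughly like the one built below. This is a sketch of the QuickBooks Online invoice structure from one sheet row; the `item_id` and `qty` defaults are illustrative stand-ins for the fields the setup notes say to adjust to your own catalog:

```python
# Build a QuickBooks Online invoice payload from one sheet row.
# Row keys match the required sheet columns (CustomerId, Description, Amount).
def build_invoice(row: dict, item_id: str = "1", qty: int = 1) -> dict:
    return {
        "CustomerRef": {"value": str(row["CustomerId"])},
        "Line": [
            {
                "DetailType": "SalesItemLineDetail",
                "Amount": float(row["Amount"]),
                "Description": row["Description"],
                "SalesItemLineDetail": {
                    "ItemRef": {"value": item_id},  # adjust to your product catalog
                    "Qty": qty,
                },
            }
        ],
    }

invoice = build_invoice(
    {"CustomerId": 42, "Description": "Consulting services", "Amount": 150.0}
)
```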
by Madame AI
**Automate Directory Scraping to Google Sheets using BrowserAct**

This n8n template helps you generate local business leads by automatically scraping online directories and saving the results directly to a spreadsheet. It is perfect for sales teams, marketing agencies, or anyone looking to build a list of local business leads by scraping online directories like YP.com.

**Self-hosted only**

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

**How it works**

1. The workflow is triggered manually. You can set the business_category and city_location inputs in the "Run a workflow task" node.
2. A BrowserAct node initiates the web scraping task on your BrowserAct account using the specified template.
3. A second BrowserAct node ("Get details of a workflow task") waits for the scraping job to finish before allowing the workflow to proceed.
4. A Code node takes the raw output from the scraper (a single JSON string), parses it, and splits the data into individual items for each business.
5. Finally, a Google Sheets node appends or updates each business as a new row in your spreadsheet, matching on "Company Name" to prevent duplicates.

**Requirements**

- **BrowserAct** API account for web scraping
- **BrowserAct** "Online Directory Lead Scraper (YP.com)" template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for saving the leads

**Need help?**

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

**Workflow guidance and showcase**

- STOP Manual Leads! Automate Lead Gen with BrowserAct & n8n
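The Code node in step 4 can be sketched as a plain parse-and-split. The business field names in the sample string are illustrative; only the "Company Name" key is taken from the template (it is the de-duplication column):

```javascript
// Sketch of the Code node: turn the scraper's single JSON string into
// one item per business, in the { json: ... } shape n8n expects.
function splitScraperOutput(raw) {
  const businesses = JSON.parse(raw);
  return businesses.map((business) => ({ json: business }));
}

// Example input resembling raw BrowserAct output (fields are illustrative).
const items = splitScraperOutput(
  '[{"Company Name":"Acme Plumbing","Phone":"555-0100"}]'
);
```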
by Omar Kennouche
**How it works**

- Triggers manually or on a schedule (03:00 daily by default)
- Fetches workflows tagged `backup-workflows` via the n8n API
- Normalizes workflow names and applies the `[client: NAME]` tag convention
- Prepares JSON in the same structure as an n8n UI export
- Checks the GitLab repository:
  - Creates a new file if missing
  - Updates the file if content differs
  - Skips if unchanged
- Logs results with a recap (created, updated, unchanged, total)

**Set up steps**

1. Configure your GitLab credentials in n8n
2. Create a repository and branch for workflow backups
3. Set the global variables (owner, project, branch, backup path)
4. Tag the workflows to include with `backup-workflows`
5. Run manually once to test, then enable the schedule
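The GitLab check above is a three-way decision per file. A minimal sketch (the function and its inputs are illustrative; the real workflow compares the exported workflow JSON against the file currently in the repository):

```python
def plan_action(repo_content, exported_json):
    """Decide what to do with one backup file in the GitLab repo.

    repo_content is None when the file does not exist yet;
    otherwise it holds the file's current content as a string.
    """
    if repo_content is None:
        return "create"   # file missing in the repository
    if repo_content != exported_json:
        return "update"   # content differs from the latest export
    return "skip"         # unchanged, nothing to commit
```

The recap logged at the end is then just a tally of these three outcomes plus the total.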
by Robert Breen
Automate company enrichment directly in Google Sheets using Dun & Bradstreet (D&B) Data Blocks. This workflow reads DUNS numbers from a sheet, fetches a Bearer token (via Basic Auth against `/v3/token`), calls the Data Blocks API for each row (`/v1/data/duns/...`), extracts the Paydex score, and appends or updates the sheet. A Filter node skips rows already marked Complete for efficient, idempotent runs.

**What this template does**

- Pulls DUNS values from a Google Sheet
- (Option A) Uses an HTTP Header Auth credential for D&B, or (Option B) dynamically fetches a Bearer token from `/v3/token` (Basic Auth)
- Calls D&B Data Blocks per row to retrieve payment insights
- Extracts Paydex and upserts results back to the sheet
- Skips rows already marked Complete

**Who's it for**

- RevOps/Data teams enriching company lists at scale
- SDR/Marketing teams validating firmographic/credit signals
- BI/Automation builders who want a no-code/low-code enrichment loop

**How it works (node-by-node)**

1. **Get Companies** (Google Sheets) → Reads rows with at least `duns`, `paydex`, `Complete`.
2. **Only New Rows** (Filter) → Passes only rows where `Complete` is empty.
3. **D&B Info** (HTTP Request) → Calls Data Blocks for each DUNS using a header credential (`Authorization: Bearer <token>`).
4. **Keep Score** (Set) → Maps nested JSON to a single `Paydex` field: `{{$json.organization.businessTrading[0].summary[0].paydexScoreHistory[0].paydexScore}}`
5. **Append to g-sheets** (Google Sheets) → Append or Update by `duns`, writing `paydex` and setting `Complete = Yes`.

> The workflow also includes Sticky Notes with in-canvas setup help.

**Setup instructions (from the JSON)**

1) Connect Google Sheets (OAuth2)

- In n8n → Credentials → New → Google Sheets (OAuth2) and sign in.
- Use/prepare a sheet with columns like: `duns`, `paydex`, `Complete`.
- In your Google Sheets nodes, select your credential and target spreadsheet/tab.
- For upsert behavior, set Operation to Append or Update and the Matching column to `duns`.

> Replace any example Sheet IDs/URLs with your own (avoid publishing private IDs).
2) Get a D&B Bearer Token (Basic Auth → `/v3/token`) (Optional Dynamic Token Node)

- Add/enable an HTTP Request node named `Get Bearer Token1`. Configure:
  - Authentication: Basic Auth (your D&B username/password)
  - Method: POST
  - URL: https://plus.dnb.com/v3/token
  - Body Parameters: `grant_type = client_credentials`
  - Headers: `Accept = application/json`
- Execute to receive `access_token`. Reference the token in other nodes via: `Authorization: Bearer {{$node["Get Bearer Token1"].json["access_token"]}}`

> Security: Don't hardcode tokens. Prefer credentials, or fetch the token dynamically.

3) Call D&B Data Blocks (use Header Auth or a dynamic token)

Node: **D&B Info** (HTTP Request)

- **Authentication:** Header Auth (recommended)
- **URL:** `https://plus.dnb.com/v1/data/duns/{{ $json.duns }}?blockIDs=paymentinsight_L4_v1&tradeUp=hq&customerReference=customer%20reference%20text&orderReason=6332`
- **Headers:** `Accept = application/json`
- If not using a stored Header Auth credential, set: `Authorization = Bearer {{$node["Get Bearer Token1"].json["access_token"]}}`

> `{{ $json.duns }}` is resolved from the current row provided by **Get Companies**.

4) Map Paydex and Upsert to Google Sheets

- **Keep Score (Set)**: Field `Paydex` (Number): `{{$json.organization.businessTrading[0].summary[0].paydexScoreHistory[0].paydexScore}}`
- **Append to g-sheets (Google Sheets)**: Operation: Append or Update; Matching column: `duns`; Columns mapping:
  - `duns = {{ $('Get Companies').item.json.duns }}`
  - `paydex = {{ $json.Paydex }}`
  - `Complete = Yes`

**Test checklist**

- Add a few test DUNS rows (leave `Complete` blank).
- Run the workflow and confirm **Only New Rows** passes the expected items.
- Check that **D&B Info** returns payment insight data.
- Confirm `Paydex` is set and the row is updated with `Complete = Yes`.

**Security & best practices**

- Store secrets in Credentials (HTTP Header Auth/Basic Auth).
- Avoid publishing real Sheet IDs or tokens in screenshots/notes.
- Consider rate limits and backoff for large sheets.
- Log/handle API errors (e.g., invalid DUNS or expired tokens).
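The deeply nested Paydex path in step 4 is a common failure point (the troubleshooting notes below call this out). A sketch of the same mapping with a guard, using an illustrative sample response shaped like the expression's path:

```python
# Guarded version of the "Keep Score" mapping, mirroring the expression
# organization.businessTrading[0].summary[0].paydexScoreHistory[0].paydexScore
def extract_paydex(response: dict):
    try:
        return (response["organization"]["businessTrading"][0]
                ["summary"][0]["paydexScoreHistory"][0]["paydexScore"])
    except (KeyError, IndexError, TypeError):
        # D&B responses vary by subscription/data availability;
        # return None so an IF node can route the row to review instead.
        return None

# Illustrative sample shaped like the path above (not a real D&B response).
sample = {"organization": {"businessTrading": [
    {"summary": [{"paydexScoreHistory": [{"paydexScore": 80}]}]}]}}
```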
**Troubleshooting**

- **401/403 from D&B:** Verify credentials/token; ensure the correct environment and entitlements.
- **Missing Paydex path:** D&B responses vary by subscription/data availability; add guards (IF node) before mapping.
- **Rows not updating:** Confirm *Append or Update* is used and the *Matching column* exactly matches your sheet header `duns`.
- **Filtered-out rows:** Ensure `Complete` is truly empty (no spaces) for new items.

**Customize further**

- Enrich additional fields (e.g., viability score, portfolio comparison, credit limits).
- Add retry logic, batching, or scheduled triggers.
- Push results to a CRM/DB or notify teams via Slack/Email.

**Contact**

Need help customizing this (e.g., enriching more fields, normalizing responses, or bulk-processing large sheets)?

robert@ynteractive.com
https://www.linkedin.com/in/robert-breen-29429625/
https://ynteractive.com
by Chris from HRX
**SAP SuccessFactors via SAML 2.0 Bearer Assertion (OAuth2 configuration)**

**How it works**

1. Configure a SuccessFactors OAuth2 SAML client within SuccessFactors
2. Get a SAML assertion and exchange it for a bearer token for the API connection
3. Fetch PerPerson data via OData v2
4. Flatten the result to person-employment records, as an example call to the SuccessFactors API

**Set up steps**

It takes ~10-20 minutes to register the OAuth2 client in SuccessFactors Admin Center and set up the flow with the credentials. The configuration nodes have to be filled with your URLs and credentials.
by Linked API
Automate your LinkedIn outreach with this n8n workflow powered by Linked API. It sends connection requests, monitors acceptance, delivers your pre-written messages, and follows up automatically, all tracked in Google Sheets.

**How it works**

- **Connection phase**: Sends connection requests to leads with status NEW, respecting daily limits
- **Monitoring**: Checks whether connections were accepted, expired, or declined
- **Messaging**: Sends your pre-written messages (up to 3) after a connection is accepted
- **Follow-up**: Automatically follows up if there is no reply, marking leads as NO_RESPONSE after a timeout
- **Tracking**: All statuses and timestamps are saved to Google Sheets

**Setup steps**

1. Copy the Google Sheet template
2. Connect credentials: Google Sheets (OAuth2) and Linked API (get key here)
3. Configure DOCUMENT_LINK in the Config node (paste your spreadsheet URL)
4. Add leads with Status = NEW and fill in Message 1, Message 2, Message 3
5. Activate the workflow

**Configuration**

| Setting | Default | Description |
|---------|---------|-------------|
| DOCUMENT_LINK | - | URL to your Google Sheet |
| SHEET_NAME | Leads | Name of the sheet with leads |
| DAILY_CONNECTION_LIMIT | 25 | Max connection requests per day |
| HOURS_TO_CHECK_IF_CONNECTION_ACCEPTED | 24 | Check frequency for connection acceptance |
| HOURS_TO_CHECK_IF_REPLIED | 4 | Check frequency for message replies |
| HOURS_DELAY_AFTER_CONNECTION_ACCEPTED | 24 | Delay before the first message |
| DAYS_DELAY_BETWEEN_MESSAGES | 2 | Delay between follow-ups |
| DAYS_WAIT_FOR_CONNECTION_ACCEPTANCE | 10 | Timeout for connection requests |
| DAYS_WAIT_AFTER_LAST_MESSAGE | 4 | Days to wait after the last message before marking as no response |
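The daily-limit check in the connection phase can be sketched as a simple count against the configured cap. The log structure here is illustrative (a list of send dates); the limit default comes from the Configuration table above:

```python
from datetime import date

DAILY_CONNECTION_LIMIT = 25  # default from the Configuration table

def can_send(sent_dates, today=None):
    """Return True if another connection request may be sent today.

    sent_dates is an illustrative log: one ISO date string per request sent.
    """
    today = today or date.today().isoformat()
    sent_today = sum(1 for d in sent_dates if d == today)
    return sent_today < DAILY_CONNECTION_LIMIT
```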
by Shahrear
**Automatically process healthcare claims into structured Google Sheets entries with VLM Run extraction**

**What this workflow does**

1. Monitors Google Drive for new files in a target folder
2. Downloads the file inside n8n for processing
3. Sends the file to VLM Run for AI transcription or analysis
4. Fetches extra details from the healthcare.claims-processing domain as JSON
5. Appends normalized fields to a Google Sheet as a new row

**Setup**

Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick setup:

1. Create the Drive folder you want to watch and copy its Folder ID
2. Create a Google Sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, form_type, carrier_name, patient_name, patient_birth_date, patient_sex, patient_address, insurance_type, insurance_id, insured_name, total_charge, amount_due, amount_paid, hospitalization_from, hospitalization_to, referring_physician_name, processing_notes, and other claim fields as needed
3. Configure Google Drive OAuth2 for the trigger and download nodes
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab
6. Test by uploading a sample file to the watched Drive folder, then activate

**Perfect for**

- Centralized intake of healthcare claim documents with instant AI summaries
- Claims and operations teams collecting structured claim insights
- Customer support attachments that need quick triage to a Sheet
- Compliance and audit logs for claim documents

**Key benefits**

- End-to-end automation from Drive to Sheets
- Accurate AI output via VLM Run with optional timestamps
- Domain enrichment from healthcare.claims-processing JSON
- Clean, searchable logs in Google Sheets
- No manual steps after activation

**How to customize**

Extend by adding:

- OCR tuning and field validation for claim forms
- Per-type routing for PDFs, images, or scanned forms
- Slack notifications on each new Sheet append
- Keyword extraction and auto-tagging for claim categories
- An error branch that records failures to a second Sheet
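The "appends normalized fields" step amounts to flattening the VLM Run JSON into one dict keyed by the sheet headers. A sketch with a handful of the headers listed above (the input shapes are illustrative, not the actual VLM Run response schema):

```python
from datetime import datetime, timezone

def to_sheet_row(file_meta: dict, claim: dict) -> dict:
    """Flatten file metadata plus extracted claim fields into one sheet row.

    file_meta and claim are illustrative stand-ins for the Drive file info
    and the healthcare.claims-processing JSON, respectively.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "file_name": file_meta.get("name"),
        "file_id": file_meta.get("id"),
        "mime_type": file_meta.get("mimeType"),
        "patient_name": claim.get("patient_name"),
        "total_charge": claim.get("total_charge"),
        "amount_due": claim.get("amount_due"),
        # ...remaining headers follow the same pattern
    }

row = to_sheet_row(
    {"name": "claim.pdf", "id": "abc123", "mimeType": "application/pdf"},
    {"patient_name": "Jane Doe", "total_charge": 1200.0, "amount_due": 300.0},
)
```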
by Shahrear
**Automatically process construction blueprints into structured Google Sheets entries with VLM extraction**

**What this workflow does**

1. Monitors Google Drive for new blueprints in a target folder
2. Downloads the file inside n8n for processing
3. Sends the file to VLM Run for VLM analysis
4. Fetches details from the construction.blueprint domain as JSON
5. Appends normalized fields to a Google Sheet as a new row

**Setup**

Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick setup:

1. Create the Drive folder you want to watch and copy its Folder ID
2. Create a Google Sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, document_type, document_number, issue_date, author_name, drawing_title_numbers, revision_history, job_name, address, drawing_number, revision, drawn_by, checked_by, scale_information, agency_name, document_title, blueprint_id, blueprint_status, blueprint_owner, blueprint_url
3. Configure Google Drive OAuth2 for the trigger and download nodes
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab
6. Test by uploading a sample file to the watched Drive folder, then activate

**Perfect for**

- Converting uploaded construction blueprint documents into clean text
- Organizing extracted blueprint details into structured sheets
- Quickly accessing key attributes from technical files
- Centralized archive of blueprint-to-text conversions

**Key benefits**

- **End-to-end automation** from Drive upload to structured Sheet entry
- **Accurate text extraction** of construction blueprint documents
- **Organized attribute mapping** for consistent records
- **Searchable archives** directly in Google Sheets
- **Hands-free processing** after setup

**How to customize**

Extend by adding:

- Version control that links revisions of the same drawing and highlights superseded rows
- Confidence scores per extracted field, with threshold-based routing to manual or AI review
- An auto-generated, human-readable summary column for quick scanning of blueprint details
- Splitting large multi-sheet PDFs into per-drawing rows with individual attributes
- Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
by Harshil Agrawal
This workflow allows you to add a new user to your Notion database when an invite gets created via Calendly.

- **Calendly Trigger node**: Triggers the workflow when an invite gets created.
- **Notion node**: Creates a new record using the information received from the previous node.
by Vytenis
Fully automate deep research from start to finish: scrape Google Search results, select relevant sources, scrape and analyze each source in parallel, and generate a comprehensive research report.

**Who is this for?**

This workflow is for anyone who needs to research topics quickly and thoroughly: content creators, marketers, product managers, researchers, journalists, students, or anyone seeking deep insights without spending hours browsing websites. If you find yourself opening dozens of browser tabs to piece together information, this template automates that entire process and delivers comprehensive reports in minutes.

**How it works**

1. Submit your research questions through n8n's chat interface (include as much context as you need)
2. AI generates strategic search queries to explore different angles of your topic (customize the number of queries as needed)
3. Oxylabs scrapes Google Search results for each query (up to 50 results per query)
4. AI evaluates and selects the most relevant and authoritative sources
5. Content extraction runs in parallel as Oxylabs scrapes each source and AI extracts key insights
6. Summaries are collected in n8n's data table for final processing
7. AI synthesizes everything into a comprehensive research report with actionable insights

See the complete step-by-step tutorial on the n8n blog.
**Requirements**

- **Oxylabs AI Studio API key**: Get a free API key with 1000 credits
- **OpenAI API key** (or use alternatives like Claude, Gemini, and local Ollama LLMs)

**Setup**

1. Install Oxylabs AI Studio as shown on this page
2. Set your API keys: Oxylabs AI Studio and OpenAI
3. Create a data table
4. Select the table name in each data table node
5. Create a sub-workflow:
   - Select the 3 nodes (Scrape content, Summarize content, Insert row)
   - Right-click and select "Convert 3 nodes to sub-workflow"
6. Edit the sub-workflow settings for parallel execution:
   - Mode: Run once for each item
   - Options → Add Option → disable "Wait For Sub-Workflow Completion"

Once you finish all these setup steps, you can run the workflow through n8n's chat interface. For example, send the following message:

> I'm planning to build a wooden summer house and would appreciate guidance on the process. What are the key considerations I should keep in mind from planning through completion? I'm particularly interested in the recommended construction steps and which materials will ensure long-term durability and quality.

**Customize this workflow for your needs**

Feel free to modify the workflow to fit the scale and final output your project requires:

- To reuse this workflow, clear the data table after the final analysis by adding a Data table node with the Delete row(s) action
- **Scale up** by processing more search queries, increasing results per query beyond 10, and selecting additional relevant URLs
- **Enable JavaScript rendering** in the Oxylabs AI Studio (Scraper) node to ensure all content is gathered
- **Adjust the system prompts** in LLM nodes to fit your specific research goals
- **Explore other AI Studio apps** like Browser Agent for interactive browser control or Crawler for mapping entire websites
- **Connect other nodes** like Google Sheets, Notion, Airtable, or webhooks to route results where you need them