by Shahrear
🧾 Image Extraction Pipeline (Google Drive + VLM Run + n8n)

⚙️ What This Workflow Does

This workflow automates the process of extracting images from uploaded documents in Google Drive using the VLM Run Execute Agent, then downloads and saves those extracted images into a designated Drive folder.

🧩 Requirements

- **Google Drive OAuth2 credentials**
- **VLM Run API credentials** with Execute Agent access
- A reachable n8n Webhook URL (e.g., `/image-extract-via-agent`)

⚡ Quick Setup

1. Configure Google Drive OAuth2 and create one folder for uploads and one for saving extracted images.
2. Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.
3. Add VLM Run API credentials for document parsing.

⚙️ How It Works

1. **Monitor Uploads** – The workflow watches a specific Google Drive folder for new file uploads (e.g., receipts, reports, or PDFs).
2. **Download File** – When a file is created, it is automatically downloaded in binary form.
3. **Extract Images (VLM Run)** – The file is sent to the VLM Run Execute Agent, which analyzes the document and extracts image URLs via its callback.
4. **Receive Image Links (Webhook)** – The workflow's Webhook node listens for the agent's response containing the extracted image URLs.
5. **Split & Download** – The Split Out node processes each extracted link, and the HTTP Request node downloads each image.
6. **Save Image** – Finally, each image is uploaded to your chosen Google Drive folder for storage or further processing.

💡 Why Use This Workflow

Manual image extraction from PDFs and scanned files is repetitive and error-prone. This pipeline automates it using VLM Run, a vision-language AI service that:

- Understands document layout and structure
- Handles multi-page and mixed-content files
- Extracts accurate image data with minimal setup
For example, the output contains URLs to the extracted images:

```json
{
  "image_urls": [
    "https://vlm.run/api/files/img1.jpg",
    "https://vlm.run/api/files/img2.jpg"
  ]
}
```

Works with both images and PDFs.

🧠 Perfect For

- Extracting photos or receipts from multi-page PDFs
- Archiving embedded images from reports or invoices
- Preparing image datasets for labeling or ML model training

🛠️ How to Customize

You can extend this workflow by:

- Adding naming conventions or folder structures based on upload type
- Integrating Slack/Email notifications when extraction completes
- Logging metadata (file name, timestamp, source) to Google Sheets or a database
- Chaining with classification or OCR workflows using VLM Run's other agents

⚠️ Community Node Disclaimer

This workflow uses community nodes (VLM Run) that may require additional permissions and custom setup.
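The Split & Download step above can be sketched as a small Code-node-style helper, assuming the webhook body matches the example payload. The item-splitting helper is pure; the download itself (the HTTP Request node's job) is shown but not executed here.

```javascript
// Split the agent's callback payload into one item per image URL,
// mirroring what n8n's Split Out node does with an array field.
function splitImageUrls(payload) {
  return (payload.image_urls || []).map((url) => ({ json: { url } }));
}

// Equivalent of the HTTP Request node: fetch each image as binary data.
// Requires Node 18+ for the built-in fetch.
async function downloadAll(payload) {
  const items = splitImageUrls(payload);
  return Promise.all(
    items.map(async ({ json: { url } }) => {
      const res = await fetch(url);
      return Buffer.from(await res.arrayBuffer());
    })
  );
}

module.exports = { splitImageUrls, downloadAll };
```

In the real workflow the splitting and downloading happen in separate nodes; this just makes the data flow between them concrete.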
by Hàn Thiên Hải
This workflow helps SEO professionals and website owners automate the tedious process of monitoring and indexing URLs. It fetches your XML sitemap, filters for recent content, checks the current indexing status via Google Search Console (GSC), and automatically submits non-indexed URLs to the Google Indexing API. By keeping a local submission state (Static Data), it uses your API quota efficiently by preventing redundant submissions within a 7-day window.

Key Features

- Smart Sitemap Support: Handles both standard sitemaps and Sitemap Indexes (nested sitemaps).
- Status Check: Uses GSC URL Inspection to verify that a page is truly missing from Google's index before taking action.
- Quota Optimization: Filters content by lastmod date and tracks submission history to stay within Google's API limits.
- Rate Limiting: Includes built-in batching and delays to comply with Google's API throughput requirements.

Prerequisites

- Google Search Console API: Enabled in your Google Cloud Console.
- Google Indexing API: Enabled for instant indexing.
- Service Account: A Service Account JSON key with "Owner" permissions on your GSC property.

Setup steps

1. Configure Google Cloud Console
   - Create Project: Go to the Google Cloud Console and create a new project.
   - Enable APIs: Search for and enable both the Google Search Console API and the Web Search Indexing API.
   - Create Service Account: Navigate to APIs & Services > Credentials > Create Credentials > Service Account. Fill in the details and click Done.
   - Generate Key: Select your service account > Edit service account > Keys > Add Key > Create new key > JSON. Save the downloaded file securely.
2. Set Up Credentials in n8n
   - New Credential: In n8n, go to Create credentials > Google Service Account API.
   - Input Data: Paste the Service Account Email and Private Key from your downloaded JSON file. In the JSON file, the Service Account Email is `client_email` and the Private Key is `private_key`.
   - HTTP Request Setup: Enable "Set up for use in HTTP Request node".
   - Scopes: Enter exactly `https://www.googleapis.com/auth/indexing https://www.googleapis.com/auth/webmasters.readonly` into the Scopes field.
   - GSC Permission: Add the Service Account email to your Google Search Console property as an Owner via Settings > Users and permissions > Add user.
3. Workflow Configuration
   - Configuration Node: Open the Configuration node and enter your Sitemap URL and GSC Property URL. If your property type is URL prefix, the URL must end with a forward slash `/`. Example: https://hanthienhai.com/
   - Link Credentials: Update the credentials in both the "GSC: Inspect URL Status" and "GSC: Request Indexing" nodes with the service account created in Step 2.
4. Schedule & Activate
   - Set Schedule: Adjust the Schedule Trigger node to your preferred execution frequency.
   - Activate: Toggle the workflow to Active to start the automation.

Questions or Need Help?

For setup assistance, customization, or workflow support, feel free to contact me at admin@hanthienhai.com
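The submission that the "GSC: Request Indexing" node performs can be sketched as follows, using the standard Google Indexing API publish endpoint. In the workflow the bearer token comes from n8n's Google Service Account credential; here it is simply a parameter.

```javascript
// Google Indexing API publish endpoint (documented by Google).
const INDEXING_ENDPOINT =
  "https://indexing.googleapis.com/v3/urlNotifications:publish";

// URL_UPDATED tells Google the page is new or changed and should be
// (re)crawled; URL_DELETED would request removal instead.
function buildIndexingRequest(url) {
  return { url, type: "URL_UPDATED" };
}

// Sketch of the HTTP call itself (Node 18+ built-in fetch).
async function requestIndexing(url, accessToken) {
  const res = await fetch(INDEXING_ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildIndexingRequest(url)),
  });
  if (!res.ok) throw new Error(`Indexing API returned ${res.status}`);
  return res.json();
}

module.exports = { buildIndexingRequest, requestIndexing };
```

Keeping the body builder separate makes it easy to batch URLs while respecting the rate limits the template already handles.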
by EoCi - Mr.Eo
🎯 What This Does

Automatically finds PDF files in Google Drive, extracts their text content, and formats the output into a clean JSON object.

🔄 How It Works

1. Manual Trigger starts the process.
2. 🔎 Find File: The Google Drive node finds the PDF file(s) in a specified folder and downloads them.
3. 📝 Extract Raw Text: The Extract From File node pulls the text content from the retrieved file(s).
4. ✅ Output Clean Data: The Code node refines the extracted content, running custom code for cleaning and final formatting.

🚀 Setup Guidelines

Setup Requirements

- **Google Drive Account**: A Google Drive folder (empty or containing the PDF files you want to process).
- **API Keys**: Gemini, Google Drive.

Setup steps (setup time: < 5 minutes)

1. Add Credentials in n8n: Ensure your Google Drive OAuth2 and Google Gemini (PaLM) API credentials are created and connected. Go to Credentials > New to add them if you haven't already.
2. Configure the Search Node (Get PDF Files/File): Open the node and select your Google Drive credential. In the "Resource" field, choose File/Folder. In the "Search Method" field, select "Search File/Folder Name", and in "Search Query" type `*.pdf`. Add two filters: in the "Folder" filter, open the dropdown, choose "From List", and connect the folder you created in your Google Drive; in the "What to Search" filter, select "file". Optionally, under "Options" click "Add option" and choose "ID" and "Name".
3. Define Extraction Rules (Extract Files/File's Data): Open the node, open the dropdown under the "Operation" section, and choose "Extract From PDF". In the "Input Binary Field" section, keep the default `data`.
4. Clean & Format Data (optional): Adjust the Get PDF Data Only node to keep only the fields you need and give them friendly names. Modify the Data Parser & Cleaner node if you need to perform custom transformations.
5. Activate and Run: Save and activate the workflow.
Click "Execute Workflow" to run it manually and check the output. That’s it! Once configured, this workflow becomes your personal data assistant. Run it anytime you need to extract information quickly and accurately, saving you hours of manual work and ensuring your data is always ready to use.
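As a minimal sketch of what the Data Parser & Cleaner Code node might do, assuming the Extract From File node placed the raw PDF text in `json.text` (the cleaning rules below are illustrative, not the template's exact code):

```javascript
// Normalize whitespace in text extracted from a PDF so downstream
// formatting produces a clean JSON object.
function cleanPdfText(raw) {
  return raw
    .replace(/\r\n/g, "\n")      // normalize line endings
    .replace(/[ \t]+/g, " ")     // collapse runs of spaces/tabs
    .replace(/\n{3,}/g, "\n\n")  // collapse runs of blank lines
    .trim();
}

// In an n8n Code node this would be applied per item, e.g.:
// return items.map((item) => ({
//   json: { fileName: item.json.fileName, text: cleanPdfText(item.json.text) },
// }));

module.exports = { cleanPdfText };
```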
by Wessel Bulte
TenderNed Public Procurement

What This Workflow Does

This workflow automates the collection of public procurement data from TenderNed (the official Dutch tender platform). It:

- Fetches the latest tender publications from the TenderNed API
- Retrieves detailed information in both XML and JSON formats for each tender
- Parses and extracts key information such as organization names, titles, descriptions, and reference numbers
- Filters results based on your custom criteria
- Stores the data in a database for easy querying and analysis

Setup Instructions

This template comes with sticky notes providing step-by-step instructions in Dutch and various query options you can customize.

Prerequisites

- TenderNed API access: Register at TenderNed for API credentials

Configuration Steps

1. Set up TenderNed credentials: Add HTTP Basic Auth credentials with your TenderNed API username and password, and apply them to the three HTTP Request nodes: "Tenderned Publicaties", "Haal XML Details", and "Haal JSON Details".
2. Customize filters: Modify the "Filter op ..." node to match your specific requirements (e.g., specific organizations, contract values, regions).

How It Works

- Step 1 (Trigger): The workflow can be triggered manually for testing or automatically on a daily schedule.
- Step 2 (Fetch Publications): Makes an API call to TenderNed to retrieve a list of recent publications (up to 100 per request).
- Step 3 (Process & Split): Extracts the tender array from the response and splits it into individual items for processing.
- Step 4 (Fetch Details): For each tender, the workflow makes two parallel API calls: the **XML endpoint** retrieves the complete tender documentation, and the **JSON endpoint** fetches metadata including reference numbers and keywords.
- Step 5 (Parse & Merge): Parses the XML data and merges it with the JSON metadata and batch information into a single data structure.
- Step 6 (Extract Fields): Maps the raw API data to clean, structured fields including publication ID and date, organization name, tender title and description, and reference numbers (kenmerk, TED number).
- Step 7 (Filter): Applies your custom filter criteria to focus on relevant tenders only.
- Step 8 (Store): Inserts the processed data into your database for storage and future analysis.

Customization Tips

- Modify API Parameters: In the "Tenderned Publicaties" node, you can adjust `offset` (starting position for pagination) and `size` (number of results per request, max 100), and add query parameters for date ranges, status filters, etc.
- Add More Fields: Extend the "Splits Alle Velden" node to extract additional fields from the XML/JSON data, such as contract value estimates, deadline dates, CPV codes (procurement classification), and contact information.
- Integrate Notifications: Add a Slack, Email, or Discord node after the filter to get notified about new matching tenders.
- Incremental Updates: Modify the workflow to fetch only new tenders by storing the last execution timestamp, adding date filters to the API query, and only processing publications newer than the last run.

Troubleshooting

No data returned?

- Verify your TenderNed API credentials are correct
- Check that your filter is set up properly

Need help setting this up, or interested in a complete tender analysis solution? Get in touch: 🔗 LinkedIn – Wessel Bulte
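The incremental-updates idea can be sketched as an n8n Code node, assuming each incoming item carries a `publicatieDatum` field (the field name here is illustrative). n8n's workflow static data persists between executions, so the node can drop anything already processed on a previous run.

```javascript
// Pure helper: keep only items published after the stored timestamp.
function filterNewPublications(items, lastRunIso) {
  const lastRun = lastRunIso ? new Date(lastRunIso).getTime() : 0;
  return items.filter(
    (item) => new Date(item.json.publicatieDatum).getTime() > lastRun
  );
}

// Inside an n8n Code node it would be wired up roughly like this:
// const staticData = $getWorkflowStaticData("global");
// const fresh = filterNewPublications(items, staticData.lastRun);
// staticData.lastRun = new Date().toISOString();
// return fresh;

module.exports = { filterNewPublications };
```

Combining this with a date filter on the API query keeps both the request size and the database writes small.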
by Oneclick AI Squad
This n8n workflow automates the end-to-end proof-of-delivery (POD) process for logistics operations. It ingests POD data via webhook—including driver signatures, delivery photos, and GPS coordinates—performs AI-driven verification of package integrity and authenticity, updates ERP systems with delivery status, triggers automated invoicing for verified cases, and handles disputes by creating evidence-backed tickets and alerting teams. Designed for seamless integration, it minimizes errors in billing and reconciliation while accelerating resolution of mismatches.

Benefits

- **Reduced Manual Effort:** Automates verification and status updates, cutting processing time from hours to minutes.
- **Enhanced Accuracy:** AI analysis detects damages, location discrepancies, and signature fraud with high confidence scores, preventing billing disputes.
- **Faster Revenue Cycle:** Instant invoicing for verified deliveries improves cash flow and reduces DSO (Days Sales Outstanding).
- **Proactive Dispute Management:** Generates high-priority tickets with linked evidence, enabling quicker resolutions and lower escalation costs.
- **Audit-Ready Traceability:** Logs all decisions, AI outputs, and actions for compliance with logistics standards such as ISO 9001.
- **Scalability:** Handles high-volume deliveries without proportional staff increases, supporting growth in e-commerce fulfillment.

Useful for Which Industry

- **Logistics & Supply Chain:** Ideal for 3PL providers, freight forwarders, and courier services managing last-mile deliveries.
- **E-Commerce & Retail:** Supports platforms like Amazon or Shopify sellers verifying customer receipts and automating returns.
- **Manufacturing & Distribution:** Streamlines B2B shipments with ERP integrations for just-in-time inventory.
- **Pharmaceuticals & Healthcare:** Ensures tamper-evident deliveries with photo verification for cold-chain compliance.
- **Food & Beverage:** Tracks perishable goods with damage detection to maintain quality assurance.
Workflow Process

1. **Webhook Intake:** Receives the POD submission (driver ID, signature image, delivery photo, recipient, GPS) via POST/GET.
2. **Input Validation:** Checks for required fields; branches to the error path if incomplete.
3. **Parallel AI Verification:**
   - AI Vision (OpenAI GPT-4): Analyzes the photo for package condition, location match, and damage.
   - Signature Validation: AI checks legitimacy, handwritten authenticity, and completeness.
4. **Merge & Decide:** Consolidates results with confidence scoring; routes to verified (true) or dispute (false).
5. **Verified Path:**
   - Update ERP: POSTs status, timestamps, and coordinates to the delivery system.
   - Trigger Invoicing: Generates a billable invoice with the POD reference via the billing API.
   - Success Response: Returns confirmation to the caller.
6. **Dispute Path:**
   - Create Ticket: POSTs a high-priority support ticket with evidence (images, scores).
   - Alert Team: Notifies the dispute team via email/Slack with an issue summary and ticket link.
   - Dispute Response: Returns status and next steps to the caller.
7. **Error Handling:** Returns detailed feedback for invalid inputs.

Setup Instructions

1. Import Workflow: Paste the JSON into n8n via Workflows → Import from Clipboard.
2. Configure Webhook: Set the URL for POD submissions (e.g., from mobile apps); test with sample POST data.
3. AI Setup: Add your OpenAI API key to the vision/signature nodes; specify the GPT-4 model.
4. Integrate Systems: Update ERP/billing URLs and auth in the update/trigger nodes (e.g., https://your-erp.com/api).
5. Dispute Config: Link your support API (e.g., Zendesk) and notification service (e.g., Slack webhook).
6. Threshold Tuning: Adjust confidence score thresholds in the decision node (e.g., >85% for auto-approve).
7. Test Run: Execute manually with valid and invalid POD samples; verify ERP updates and ticket creation.

Prerequisites

- n8n instance (v1.50+) with webhook and HTTP request nodes enabled.
- OpenAI API access for GPT-4 vision (image analysis credits required).
- ERP/billing APIs with POST endpoints and authentication (e.g., OAuth tokens).
- Support ticketing system (e.g., Zendesk, Jira) for dispute creation.
- Secure image storage (e.g., AWS S3) for POD uploads.
- Basic API testing tools (e.g., Postman) for endpoint validation.

Modification Options

- Add OCR for recipient name extraction from photos in the validation step.
- Integrate geofencing APIs for automated location alerts in the AI vision step.
- Support multi-signature PODs for group deliveries by expanding the parallel branches.
- Add partial invoicing logic for mixed verified/disputed items.
- Incorporate blockchain for immutable POD records in high-value shipments.
- Extend alerts to SMS via Twilio for on-the-road driver notifications.
- Build an analytics export to Google Sheets for delivery success rates.
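The Input Validation step can be sketched as follows, assuming a webhook payload with the fields listed under Webhook Intake (the field names below are illustrative; match them to whatever your mobile app actually sends).

```javascript
// Fields the POD webhook is expected to carry (illustrative names).
const REQUIRED_FIELDS = [
  "driverId",
  "signatureImage",
  "deliveryPhoto",
  "recipientName",
  "gps",
];

// Return both a verdict and the list of missing fields so the error
// branch can give the detailed feedback mentioned above.
function validatePod(payload) {
  const missing = REQUIRED_FIELDS.filter(
    (field) =>
      payload[field] === undefined ||
      payload[field] === null ||
      payload[field] === ""
  );
  return { valid: missing.length === 0, missing };
}

module.exports = { validatePod };
```

The IF/decision node would branch on `valid`, returning `missing` to the caller when the submission is incomplete.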
by Frederik Duchi
This n8n template demonstrates how to automatically generate personalized calendar views in Baserow, based on a chosen date field and a filter. A personalized view containing only the information relevant to you makes it easy to integrate with external calendar tools like Outlook or Google Calendar.

Use cases are many:

- Task management (deadlines per staff member)
- Customer management (appointments per customer)
- Inventory management (delivery dates per supplier)

Good to know

- You only need a Date field (e.g., a task deadline, due date, appointment date) and a Link to table field (e.g., a customer, employee, product) to make this work.
- The generated calendar views can be shared as .ics files and imported into any external calendar application.
- Authentication is done through a JWT token constructed from your Baserow username and password.

How it works

- **Set Baserow credentials**: Lets you enter your Baserow credentials (username + password) and the API host path. The host defaults to https://api.baserow.io, but you can change this if you are self-hosting. This information is required to generate the JWT token that authenticates all subsequent HTTP Request nodes that create and configure the view.
- **Create a token**: Generates a JWT token based on the information provided in the previous node.
- **Set table and field ids**: Stores the generated JWT token and lets you enter the ids of the tables and fields required to run the automation.
- **Get all records from filter table**: Gets all the records from the table you want to filter on. This is the table with a Link to table field referencing the table that holds the Date field. Each record from this table will get its own view. Some examples: Customers, Employees, and Products.
- **Create new calendar view**: Calls the API endpoint `/api/database/views/table/{table_id}` to create a new view. Check the Baserow API documentation for further details.
The body of this request configures the new view by setting, among other things, a name and the date field.

- **Create filter**: Calls the API endpoint `/api/database/views/{view_id}/filters/` to set a filter on the view so that it only shows the relevant records. This filter is based on the Link to table field set in the earlier steps. Check the Baserow API documentation for further details.
- **Set background color**: Calls the API endpoint `/api/database/views/{view_id}/decorations/` to set a color on the background or left side of each item. By default, the color is based on a single select field, but it is also possible to use a condition. Check the Baserow API documentation for further details.
- **Share the view**: Calls the API endpoint `/api/database/views/{view_id}` to update the current view. It sets the `ical_public` property to true so that an ics link is created. Check the Baserow API documentation for further details.
- **Update the urls**: Updates all the records in the table you want to filter on, filling in the url of the newly generated view and the url of the ics file. This is useful if you want to build an application on top of your database.

How to use

- The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
- The included Baserow SOP template works perfectly as a base schema to try out this workflow.

Requirements

- Baserow account (cloud or self-hosted)
- A Baserow database with a table that has a Date field and a Link to table field

Customising this workflow

- Change the date field used to generate the calendars (e.g., deadline → appointment date).
- Adjust the filters to match your context (staff, customer, product, etc.).
- Configure which fields are shown using the `/api/database/view/{view_id}/field-options/` endpoint. Check the Baserow API documentation for further details.
- Add or remove optional steps such as coloring by status or sharing the ics feed.
- Extend the workflow to notify staff when a new view has been created for them.
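The token and view-creation steps can be sketched as below. The JWT endpoint path, the response field names, and the view body (the `"calendar"` type and `date_field` key) are assumptions to verify against the Baserow API documentation for your version.

```javascript
const HOST = "https://api.baserow.io"; // change if self-hosting

// Obtain a JWT from username + password (assumed endpoint and
// response shape; check the Baserow docs for your version).
async function getJwtToken(email, password) {
  const res = await fetch(`${HOST}/api/user/token-auth/`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  const data = await res.json();
  return data.token; // sent as `Authorization: JWT <token>` on later calls
}

// Body for the create-view request; field names are illustrative.
function buildCalendarViewBody(name, dateFieldId) {
  return { name, type: "calendar", date_field: dateFieldId };
}

module.exports = { getJwtToken, buildCalendarViewBody };
```

Each record from the filter table would get its own call with a distinct view name (e.g., the customer or employee name).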
by Marker.io
Marker.io to ServiceNow Integration

Automatically create ServiceNow incidents with full technical context when bugs are reported through Marker.io.

🎯 What this template does

This workflow creates a seamless bridge between Marker.io and ServiceNow, your IT service management platform. Every issue submitted through Marker.io's widget automatically becomes a trackable incident in ServiceNow, complete with technical details and visual context. This ensures your IT team can track, prioritize, and resolve bugs efficiently within their existing ITSM workflow.

When a bug is reported, the workflow:

- Captures the complete Marker.io webhook payload
- Formats all technical details and metadata
- Creates a new incident in ServiceNow with the reporter information
- Includes comprehensive technical context and Marker.io links
- Preserves screenshots, browser info, and custom data

✨ Benefits

- **Automated ticket creation**: No manual data entry required
- **Complete context**: All bug details transfer automatically
- **Faster triage**: IT teams see issues immediately in ServiceNow
- **Better tracking**: Leverage ServiceNow's incident management capabilities
- **Rich debugging info**: Browser, OS, and screenshot details preserved

💡 Use Cases

- **IT Service Desks**: Streamline bug reporting from end users
- **Development Teams**: Track production issues with full technical context
- **QA Teams**: Convert test findings directly into trackable incidents
- **Support Teams**: Escalate customer-reported bugs to IT with complete details

🔧 How it works

1. The n8n Webhook receives Marker.io bug report data
2. A JavaScript node formats and extracts the relevant information
3. The ServiceNow node creates an incident with the formatted details
4. The incident includes title, description, reporter info, and technical metadata
5. Links are preserved to both the public and private Marker.io views

The result is a fully documented ServiceNow incident that your IT team can immediately action, with all the context needed to reproduce and resolve the issue.
📋 Prerequisites

- **Marker.io account** with webhook capabilities
- **ServiceNow instance** with API access enabled
- **ServiceNow credentials** (username/password or OAuth)
- **Appropriate ServiceNow permissions** to create incidents

🚀 Setup Instructions

1. Import this workflow into your n8n instance
2. Configure the Webhook:
   - Copy the production webhook URL after saving
   - Add it to Marker.io: Workspace Settings → Webhooks → Create webhook
   - Select "Issue Created" as the trigger event
3. Set up ServiceNow credentials:
   - In n8n, create new ServiceNow credentials
   - Enter your ServiceNow instance URL
   - Add the username and password for a service account
   - Test the connection
4. Customize field mappings (optional):
   - Modify the JavaScript code to map additional fields
   - Adjust priority mappings to match your ServiceNow setup
   - Add custom field mappings as needed
5. Test the integration:
   - Create a test issue in Marker.io
   - Verify the incident appears in ServiceNow
   - Check that all data transfers correctly

📊 Data Captured

The ServiceNow incident includes:

- **Short Description**: Issue title from Marker.io
- **Description** containing:
  - 🐛 Issue title and ID
  - 📊 Priority level and issue type
  - 📅 Due date (if set)
  - 📝 Full issue description
  - 🖥️ Browser version and details
  - 💻 Operating system information
  - 🌐 Website URL where the issue occurred
  - 🔗 Direct links to the Marker.io issue (public and private)
  - 📦 Any custom data fields
  - 📷 Screenshot URL with proper formatting

🔄 Workflow Components

- **Webhook Node**: Receives Marker.io POST requests
- **Code Node**: Processes and formats the data using JavaScript
- **ServiceNow Node**: Creates the incident using the ServiceNow API

→ Read more about Marker.io webhook events

🚨 Troubleshooting

Webhook not triggering:

- Verify the webhook URL is correctly copied from n8n to Marker.io
- Check that the "Issue Created" event is selected in Marker.io webhook settings
- Ensure the webhook is set to "Active" status in Marker.io
- Test with Marker.io's webhook tester feature
- Check that the n8n workflow is active and not in testing mode

ServiceNow incident not created:

- Verify ServiceNow credentials are correct and have not expired
- Check that the service account has permission to create incidents
- Ensure the ServiceNow instance URL is correct (include https://)
- Test the ServiceNow connection directly in n8n credentials settings
- Check that ServiceNow API rate limits haven't been exceeded

Missing or incorrect data:

- Screenshot URL broken: The workflow already handles URL formatting, but verify Marker.io is generating screenshots
- Custom data missing: Ensure custom fields exist in Marker.io before sending
- Due date formatting issues: Check your ServiceNow date format requirements

JavaScript errors in the Format node:

- Check that the webhook payload structure hasn't changed in Marker.io updates
- Verify all field paths match the current Marker.io webhook schema
- Use n8n's data pinning to debug with actual webhook data
- Check for undefined values when optional fields are missing

Connection issues:

- ServiceNow timeout: Increase the timeout in node settings if needed
- SSL/certificate errors: Check the ServiceNow instance SSL configuration
- Network restrictions: Ensure n8n can reach your ServiceNow instance
- Authentication failures: Regenerate ServiceNow credentials if needed

Testing tips:

- Use n8n's "Execute Workflow" with pinned test data
- Enable webhook test mode in Marker.io for safe testing
- Check ServiceNow incident logs for detailed error messages
- Monitor n8n execution logs for specific failure points
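The Code node's job of turning a webhook payload into incident fields might look like the sketch below. The payload paths (`data.title`, `data.priority`, etc.) are assumptions based on the fields listed under Data Captured; pin a real webhook execution in n8n and adjust them to the actual schema, and note how it guards against undefined optional fields.

```javascript
// Map a Marker.io webhook payload to ServiceNow incident fields.
function formatIncident(payload) {
  const issue = payload.data || {};
  // Build the description line by line, skipping fields that are absent
  // (the "undefined values" pitfall from the troubleshooting notes).
  const lines = [
    `🐛 ${issue.title || "Untitled issue"} (${issue.id || "no id"})`,
    `📊 Priority: ${issue.priority || "unset"} | Type: ${issue.issueType || "unset"}`,
    issue.description ? `📝 ${issue.description}` : null,
    issue.browser ? `🖥️ Browser: ${issue.browser}` : null,
    issue.os ? `💻 OS: ${issue.os}` : null,
    issue.websiteUrl ? `🌐 URL: ${issue.websiteUrl}` : null,
  ].filter(Boolean);
  return {
    short_description: issue.title || "Marker.io issue",
    description: lines.join("\n"),
  };
}

module.exports = { formatIncident };
```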
by Ibrahim Emre POLAT
Website & API Health Monitoring System with HTTP Status Validation

How it works

- Performs HTTP health checks on websites and APIs with automatic health status validation
- Checks HTTP status codes and analyzes JSON responses for common health indicators
- Returns detailed status information including response times and health status
- Implements conditional logic to handle different response scenarios
- Perfect for monitoring dashboards, alerts, and automated health checks

Set up steps

1. Deploy the workflow and activate it
2. Get the webhook URL from the trigger node
3. Configure your monitoring system to call the webhook endpoint
4. Send POST requests with target URLs for health monitoring
5. Receive JSON responses with health status, HTTP codes, and timestamps

Usage

- Send POST requests to the webhook URL with a target URL parameter
- Optionally configure timeout and status expectations in the request body
- Use with external monitoring tools like Nagios, Zabbix, or custom dashboards
- Set up scheduled monitoring calls for continuous health validation

Example request: send a POST with `{"url": "https://your-site.com", "timeoutMs": 5000}`

- Success response: `{"ok": true, "statusCode": 200, "healthStatus": "ok"}`
- Failure response: `{"ok": false, "error": "Health check failed", "statusCode": 503}`

Benefits

- Proactive monitoring to identify issues before they impact users
- Detailed diagnostics with comprehensive health data for troubleshooting
- Integration ready: works with existing monitoring and alerting systems
- Customizable timeout settings, expected status codes, and health indicators
- Scalable: monitor multiple services with a single workflow endpoint

Use Cases

- E-commerce platforms: Monitor payment APIs, inventory systems, user authentication
- Microservices: Health validation for distributed service architectures
- API gateways: Endpoint monitoring with response time validation
- Database connections: Track connectivity and performance metrics
- Third-party integrations: Monitor external API dependencies and SLA compliance

Target Audience

- DevOps Engineers implementing production monitoring
- System Administrators managing server uptime
- Site Reliability Engineers building monitoring systems
- Development Teams tracking API health in staging/production
- IT Support Teams for proactive issue detection
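A minimal sketch of the health check itself, assuming the request and response shapes shown in the example above. An AbortController enforces the `timeoutMs` budget (Node 18+ for built-in fetch).

```javascript
// Classify an HTTP status: any 2xx counts as healthy.
function classify(statusCode) {
  return statusCode >= 200 && statusCode < 300 ? "ok" : "failed";
}

async function healthCheck(url, timeoutMs = 5000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const started = Date.now();
    const res = await fetch(url, { signal: controller.signal });
    return {
      ok: res.ok,
      statusCode: res.status,
      healthStatus: classify(res.status),
      responseTimeMs: Date.now() - started,
      timestamp: new Date().toISOString(),
    };
  } catch (err) {
    // Timeouts and network failures map to the failure response shape.
    return { ok: false, error: "Health check failed", statusCode: 503 };
  } finally {
    clearTimeout(timer);
  }
}

module.exports = { classify, healthCheck };
```

In the workflow, the webhook's `Respond to Webhook` node would return this object as the JSON body.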
by ilovepdf
Watch a Google Drive folder and use the iLovePDF Compress Tool to save the result in another Google Drive folder

This n8n template shows how to upload a file to your desired Google Drive folder, compress it with the iLovePDF tool, and move the compressed file to another folder.

Good to know

This is just an example so you know how the flow should start in order to work without issues. After the "combine" step, you can change it according to your needs, but always maintain the four main steps of iLoveAPI's request workflow: start, upload, process, and download (e.g., a step for sending an email with the compressed file instead of moving it to another folder).

Use cases are many: with this template you can monitor a 'to-process' folder for large documents, automatically compress them for better storage efficiency, and move them to an archive folder, all without manual intervention. Then you can adapt it to add whatever functionality suits you best!

How it works

1. Google Drive Trigger: The workflow starts when a new file is added to a specific Google Drive folder (the source folder).
2. Authentication: The Public Key is sent to the iLoveAPI authentication server to get a time-sensitive Bearer Token.
3. Start Task: A new compress task is initiated with the iLoveAPI server, returning a Task ID and Server Address.
4. Download/Upload: The file is downloaded from Google Drive and then immediately uploaded to the dedicated iLoveAPI server using the Task ID.
5. Process: The compression is executed by sending the Task ID, the server_filename, and the original file name to the iLoveAPI /process endpoint.
6. Download Compressed File: The compressed file's binary data is downloaded from the iLoveAPI /download endpoint.
7. Save Compressed File: The compressed PDF is uploaded to the designated Google Drive folder (the destination folder).
8. Move Original File: The original file in the source folder is moved to a separate location (e.g., an 'Archived' folder) to prevent the workflow from processing it again.

How to use

- **Credentials:** Set up your Google Drive and iLoveAPI credentials in n8n.
- **iLoveAPI Public Key:** Paste your iLoveAPI public key into the body of the **Send your iLoveAPI public key to their server** node for authentication, and then into the body of the **Get task from iLoveAPI server** node.
- **Source/Destination Folders:** In the **Upload your file to Google Drive** (Trigger) and **Save compressed file in your Google Drive** (Action) nodes, select your desired source and destination folders, respectively.

Requirements

- **Google Drive** account/credentials (for file monitoring and storage); see the docs provided in the node if needed.
- **iLoveAPI** account/API key (for the compression service).
- An n8n instance (cloud or self-hosted).

Need Help?

See the iLoveAPI documentation
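Steps 2, 3, and 5 above can be sketched as follows, based on iLoveAPI's start/upload/process/download flow. Treat the exact paths and response field names as assumptions to verify against the iLoveAPI documentation.

```javascript
const API = "https://api.ilovepdf.com/v1"; // base URL per iLoveAPI docs

// Step 2: exchange the public key for a time-sensitive Bearer token
// (assumed response field: `token`).
async function getBearerToken(publicKey) {
  const res = await fetch(`${API}/auth`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ public_key: publicKey }),
  });
  const data = await res.json();
  return data.token;
}

// Step 3: start a compress task; the response names the task id and the
// worker server to use for upload/process/download (assumed fields).
async function startCompressTask(token) {
  const res = await fetch(`${API}/start/compress`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const { task, server } = await res.json();
  return { task, server };
}

// Step 5: body for the /process call, carrying the task id, the
// server_filename returned by the upload, and the original file name.
function buildProcessBody(task, serverFilename, originalName) {
  return {
    task,
    tool: "compress",
    files: [{ server_filename: serverFilename, filename: originalName }],
  };
}

module.exports = { getBearerToken, startCompressTask, buildProcessBody };
```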
by Beex
Summary

Automatically sync your Beex leads to HubSpot by handling both creation and update events in real time.

How It Works

1. Trigger Activation: The workflow is triggered when a lead is created or updated in Beex.
2. Data Transformation: The nested data structure from the Beex Trigger is flattened into a simple JSON format for easier processing.
3. Email Validation: The workflow verifies that the lead contains a valid (non-null) email address, as this field serves as the unique identifier in HubSpot.
4. Field Mapping: Configure the fields (via drag and drop) that will be used to create or update a contact in HubSpot. ⚠️ Important: Field names must exactly match the contact property names defined in HubSpot.
5. Event Routing: The workflow routes the action based on the event type received: contact_create or contact_update.
6. Branch Selection: If the event is contact_create, the workflow follows the upper branch; otherwise, it continues through the lower branch.
7. API Request Execution: The corresponding HTTP request is executed: POST to create a new contact or PUT to update an existing one, both using the same JSON body structure.

Setup Instructions

1. Install Beex Nodes: Before importing the template, install the Beex trigger and node using the package name n8n-nodes-beex.
2. Configure HubSpot Credentials: Set up your HubSpot credentials with an Access Token (typically from a private app) and Read/Write permissions for Contacts objects.
3. Configure Beex Credentials: For Beex users with platform access (for trial requests, contact frank@beexcc.com): navigate to Platform Settings → API Key & Callback, copy your API key, and paste it into the Beex Trigger node in n8n.
4. Set Up Webhook URL: Copy the Webhook URL (Test/Production) from the Beex Trigger node and paste it into the Callback Integration section in Beex. Save your changes.

Requirements

- **HubSpot:** An account with a Private App Token and Read/Write permissions for **Contacts** objects.
- **Beex:** An account with lead generation permissions and a Bearer Token configured in the Trigger node.
- **Event Configuration:** In the Beex platform's **API Key & Callback** section, enable the following events: "Update general and custom contact data" and "Networking".

Customization Options
- **Contact Filtering:** Add filters to control which Beex leads should sync to HubSpot.
- **Identifier Configuration:** By default, only leads with a valid email address are processed to ensure accurate matching in HubSpot CRM. You can modify this logic to apply additional restrictions.
- **Field Mapping:** The "Set Fields Update" node is the primary customization point. Here you can map Beex fields to HubSpot properties for both creation and update operations (see step 4 in How It Works).
- **Field Compatibility:** Ensure that Beex custom fields are compatible with HubSpot's default or custom properties; otherwise, API calls will fail due to schema mismatches.
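The flatten, validate, and route steps described above can be sketched as a single n8n Code-node-style function. This is a hedged sketch, not the template's actual code: the Beex payload shape (`event`, `data.contact`) and the mapped property names are illustrative assumptions you would adapt to your trigger output and HubSpot schema.

```javascript
// Hedged sketch of the transformation pipeline: flatten the nested Beex
// payload, reject leads without an email, and route to POST (create) or
// PUT (update) with the same JSON body.
// Assumption: the trigger emits { event, data: { contact: {...} } }.
function prepareHubspotRequest(beexEvent) {
  const contact = (beexEvent.data && beexEvent.data.contact) || {};

  // Flattened properties; names must exactly match HubSpot property names.
  const properties = {
    email: contact.email || null,
    firstname: contact.name || null,
    phone: contact.phone || null,
  };

  // Email is the unique identifier in HubSpot; skip leads without one.
  if (!properties.email) return null;

  // Route by event type; both branches share the same body structure.
  const method = beexEvent.event === 'contact_create' ? 'POST' : 'PUT';
  return { method, body: { properties } };
}
```

In the workflow this logic is split across the Set, IF, and HTTP Request nodes; the sketch just shows the data flow end to end.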
by panyanyany
Overview
This workflow uses the Defapi API with Google's Gemini AI to transform digital photos into authentic Polaroid-style vintage photographs. Upload your photos, provide a creative prompt, and get AI-generated vintage effects with that distinctive instant-camera charm.

- Input: digital photos + creative prompt + API key
- Output: Polaroid-style vintage photographs

The system provides a simple form interface where users submit their images, prompt, and API key. It automatically processes requests through the Defapi API, monitors generation status, and delivers the final vintage photo output. Ideal for photographers, content creators, and social media enthusiasts looking to add vintage charm to their digital photos.

Prerequisites
- A Defapi account and API key: sign up at Defapi.org
- An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities
- Digital photos for transformation (well-lit photos work best)
- Basic knowledge of AI prompts for vintage photo generation

Example prompt: "Take a picture with a Polaroid camera. The photo should exhibit rich saturation and vintage color cast, with soft tones, low contrast, and vignetting. The texture features distinct film grain. Do not change the faces. Replace the background behind the two people with a white curtain. Make them close to each other with clear faces and normal skin color."

Setup Instructions
1. **Obtain API Key:** Register at Defapi.org and generate your API key. Store it securely.
2. **Configure the Form:** Set up the "Upload 2 Images" form trigger with Image 01 & Image 02 (file uploads), API Key (text field), and Prompt (text field).
3. **Test the Workflow:** Click "Execute Workflow" in n8n, then access the form URL, upload two photos, enter your prompt, and provide your API key. The workflow processes the images, sends the request to the Defapi API, waits 10 seconds, then polls until generation is complete.
4. **Handle Outputs:** The final node displays the generated image URL for download or sharing.
Workflow Structure
The workflow consists of the following nodes:
1. **Upload 2 Images** (Form Trigger) - collects user input: two image files, API key, and prompt
2. **Convert to JSON** (Code Node) - converts uploaded images to base64 and formats the data
3. **Send Image Generation Request to Defapi.org API** (HTTP Request) - submits the generation request
4. **Wait for Image Processing Completion** (Wait Node) - waits 10 seconds before checking status
5. **Obtain the generated status** (HTTP Request) - polls the API for completion status
6. **Check if Image Generation is Complete** (IF Node) - checks whether the status equals 'success'
7. **Format and Display Image Results** (Set Node) - formats the final image URL output

Technical Details
- **API Endpoint:** https://api.defapi.org/api/image/gen (POST request)
- **Model Used:** google/gemini (Gemini AI)
- **Status Check Endpoint:** https://api.defapi.org/api/task/query (GET request)
- **Wait Time:** 10 seconds between status checks
- **Image Processing:** uploaded images are converted to base64 format for API submission
- **Authentication:** Bearer token authentication using the provided API key
- **Specialized For:** Polaroid-style vintage photography and instant-camera effects

Customization Tips
- **Enhance Prompts:** Include specifics like vintage color cast, film-grain texture, vignetting, lighting conditions, and atmosphere to improve AI photo quality. Specify desired saturation levels and contrast adjustments.
- **Photo Quality:** Use well-lit, clearly exposed photos for best results. The AI can simulate flash effects and vintage lighting, but quality input produces better output. Note that generated photos may sometimes be unclear or have incorrect skin tones; try multiple generations to achieve optimal results.
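The "Convert to JSON" step above can be sketched as a small Code-node-style function: encode each uploaded binary image as base64 and assemble the request body. This is a hedged sketch under assumptions: the field names (`model`, `prompt`, `images`) and the data-URL encoding are illustrative, not the documented Defapi request schema, which you should confirm against Defapi.org.

```javascript
// Hedged sketch of the Convert to JSON Code node.
// Assumption: Defapi accepts base64 data URLs in an `images` array.
function buildDefapiPayload(imageBuffers, prompt) {
  // Encode each uploaded binary image as a base64 data URL.
  const images = imageBuffers.map(
    (buf) => `data:image/jpeg;base64,${buf.toString('base64')}`
  );
  return {
    model: 'google/gemini', // model named in Technical Details
    prompt,
    images,
  };
}
```

The resulting object is what the HTTP Request node would POST (with the API key as a Bearer token) to the generation endpoint.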
by Satoshi
Overview
A cornerstone of your Order Management System, this workflow ensures seamless inventory control through fully automated stock checks, directly reducing operational costs. It provides real-time alerts to the responsible personnel, enabling proactive issue detection and resolution to eliminate the financial damage associated with unexpected stock-outs.

How it works
1. **Order Webhook:** Receives orders from external sources (e.g., website, form, or app) via API.
2. **Check Order Request:** Checks the validity of the order (e.g., complete product details, valid customer details).
3. **Check Inventory:** Retrieves inventory information and compares it with the order request.
4. **Notifications:** Generates a Slack notification for the manager indicating a successful order or an out-of-stock situation.
5. **Logging:** Logs the process details to a Google Sheet for tracking.

Set up steps
Webhook
Call the Webhook URL with a JSON request in the following format:

```json
{
  "id": "ORDER1001",
  "customer": {
    "name": "Customer",
    "email": "customer@example.com"
  },
  "items": [
    { "sku": "SKU001", "quantity": 2, "name": "Product A", "price": 5000 },
    { "sku": "SKU002", "quantity": 2, "name": "Product C", "price": 10000 }
  ],
  "total": 30000
}
```

Define the greater-than or less-than conditions on the inventory level to enter the corresponding branches.

Google Sheet
- Clone the file (WMS Data Demo) to your Google Drive.
- Replace your credentials and connect. Access permission must be granted to n8n.

Slack
- Replace your credentials and connect.
- A channel named "warehouse" needs to be prepared to receive notifications (if using a different name, you must update the Slack node).
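The Check Inventory branch described above can be sketched as a small comparison function: for each ordered item, compare the requested quantity against current stock and report which SKUs fall short. This is a minimal sketch, assuming inventory has already been loaded into a map keyed by SKU; how the actual workflow looks up Google Sheet rows may differ.

```javascript
// Hedged sketch of the Check Inventory branch logic.
// Assumption: `stockBySku` maps SKU -> available quantity.
function checkInventory(order, stockBySku) {
  // An item is out of stock if available quantity is below the requested one.
  const outOfStock = order.items.filter(
    (item) => (stockBySku[item.sku] || 0) < item.quantity
  );
  return {
    orderId: order.id,
    ok: outOfStock.length === 0,       // true -> "successful order" branch
    outOfStock: outOfStock.map((i) => i.sku), // SKUs for the Slack alert
  };
}
```

The `ok` flag corresponds to the greater-than/less-than branch condition, and the `outOfStock` list is the detail a Slack notification or Google Sheet log entry would carry.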