by Frederik Duchi
This n8n template demonstrates how to automatically generate personalized calendar views in Baserow, based on a chosen date field and a filter. A personalized view containing only the information relevant to you is easy to integrate with external calendar tools like Outlook or Google Calendar.

Use cases are many:
- Task management (deadlines per staff member)
- Customer management (appointments per customer)
- Inventory management (delivery dates per supplier)

**Good to know**
- You only need a Date field (e.g., a task deadline, due date, appointment date) and a Link to table field (e.g., a customer, employee, product) to make this work.
- The generated calendar views can be shared as .ics files and imported into any external calendar application.
- Authentication is done through a JWT token constructed from your Baserow username and password.

**How it works**
- **Set Baserow credentials**: Lets you enter your Baserow credentials (username + password) and the API host path. The host defaults to https://api.baserow.io, but you can change this if you are self-hosting. This information is required to generate a JWT token that authenticates all subsequent HTTP Request nodes used to create and configure the view.
- **Create a token**: Generates a JWT token based on the information provided in the previous node.
- **Set table and field ids**: Stores the generated JWT token and lets you enter the ids of the tables and fields required to run the automation.
- **Get all records from filter table**: Gets all the records from the table you want to filter on. This is the table that has a Link to table field referencing the table with the Date field. Each record from this table will get its own view. Some examples: Customers, Employees, and Products.
- **Create new calendar view**: Calls the API endpoint /api/database/views/table/{table_id} to create a new view. The body of this request configures the new view by setting, among other things, a name and the date field. Check the Baserow API documentation for further details.
- **Create filter**: Calls the API endpoint /api/database/views/{view_id}/filters/ to set a filter on the view so that it only shows the records that are relevant. This filter is based on the Link to table field set in earlier steps. Check the Baserow API documentation for further details.
- **Set background color**: Calls the API endpoint /api/database/views/{view_id}/decorations/ to set a color on the background or left side of each item. By default, the color is based on a single select field, but it is also possible to use a condition. Check the Baserow API documentation for further details.
- **Share the view**: Calls the API endpoint /api/database/views/{view_id} to update the current view. It sets the ical_public property to true so that an ics link is created. Check the Baserow API documentation for further details.
- **Update the URLs**: Updates all the records in the table you want to filter on, filling in the URL of the newly generated view and the URL of the ics file. This can be useful if you want to build an application on top of your database.

**How to use**
- The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
- The included Baserow SOP template works perfectly as a base schema to try out this workflow.

**Requirements**
- Baserow account (cloud or self-hosted)
- A Baserow database with a table that has a Date field and a Link to Table field

**Customising this workflow**
- Change the date field used to generate the calendars (e.g., deadline → appointment date).
- Adjust the filters to match your context (staff, customer, product, etc.).
- Configure which fields are shown using the /api/database/views/{view_id}/field-options/ endpoint. Check the Baserow API documentation for further details.
- Add or remove optional steps such as coloring by status or sharing the ics feed.
- Extend the workflow to notify staff when a new view has been created for them.
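The view-creation step can be sketched as a small payload builder, as it might appear in an n8n Code node feeding the HTTP Request node. The field id, view-name pattern, and extra body properties below are illustrative assumptions, not values taken from the template:

```javascript
// Sketch: build the request body for Baserow's "create view" endpoint
// (POST /api/database/views/table/{table_id}). One view is created per
// record of the filter table, so the record name goes into the view name.
function buildCalendarViewBody(recordName, dateFieldId) {
  return {
    type: 'calendar',                  // Baserow calendar view type
    name: `Calendar - ${recordName}`,  // one personalized view per record
    date_field: dateFieldId,           // the Date field driving the calendar
  };
}

const body = buildCalendarViewBody('Alice', 1234);
console.log(body.name); // "Calendar - Alice"
```

In the workflow itself the JWT token from the earlier node would be sent alongside this body in an `Authorization: JWT <token>` header.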
by Marker.io
Marker.io to ServiceNow Integration

Automatically create ServiceNow incidents with full technical context when bugs are reported through Marker.io.

🎯 **What this template does**
This workflow creates a seamless bridge between Marker.io and ServiceNow, your IT service management platform. Every issue submitted through Marker.io's widget automatically becomes a trackable incident in ServiceNow, complete with technical details and visual context. This ensures your IT team can track, prioritize, and resolve bugs efficiently within their existing ITSM workflow.

When a bug is reported, the workflow:
- Captures the complete Marker.io webhook payload
- Formats all technical details and metadata
- Creates a new incident in ServiceNow with the reporter information
- Includes comprehensive technical context and Marker.io links
- Preserves screenshots, browser info, and custom data

✨ **Benefits**
- **Automated ticket creation** - No manual data entry required
- **Complete context** - All bug details transfer automatically
- **Faster triage** - IT teams see issues immediately in ServiceNow
- **Better tracking** - Leverage ServiceNow's incident management capabilities
- **Rich debugging info** - Browser, OS, and screenshot details preserved

💡 **Use Cases**
- **IT Service Desks**: Streamline bug reporting from end users
- **Development Teams**: Track production issues with full technical context
- **QA Teams**: Convert test findings directly into trackable incidents
- **Support Teams**: Escalate customer-reported bugs to IT with complete details

🔧 **How it works**
1. The n8n Webhook receives Marker.io bug report data
2. A JavaScript node formats and extracts relevant information
3. The ServiceNow node creates an incident with formatted details
4. The incident includes title, description, reporter info, and technical metadata
5. Links are preserved to both public and private Marker.io views

The result is a fully documented ServiceNow incident that your IT team can immediately action, with all the context needed to reproduce and resolve the issue.

📋 **Prerequisites**
- **Marker.io account** with webhook capabilities
- **ServiceNow instance** with API access enabled
- **ServiceNow credentials** (username/password or OAuth)
- Appropriate **ServiceNow permissions** to create incidents

🚀 **Setup Instructions**
1. Import this workflow into your n8n instance
2. Configure the Webhook:
   - Copy the production webhook URL after saving
   - Add it to Marker.io: Workspace Settings → Webhooks → Create webhook
   - Select "Issue Created" as the trigger event
3. Set up ServiceNow credentials:
   - In n8n, create new ServiceNow credentials
   - Enter your ServiceNow instance URL
   - Add the username and password for a service account
   - Test the connection
4. Customize field mappings (optional):
   - Modify the JavaScript code to map additional fields
   - Adjust priority mappings to match your ServiceNow setup
   - Add custom field mappings as needed
5. Test the integration:
   - Create a test issue in Marker.io
   - Verify the incident appears in ServiceNow
   - Check that all data transfers correctly

📊 **Data Captured**
The ServiceNow incident includes:
- **Short Description**: Issue title from Marker.io
- **Description** containing:
  - 🐛 Issue title and ID
  - 📊 Priority level and issue type
  - 📅 Due date (if set)
  - 📝 Full issue description
  - 🖥️ Browser version and details
  - 💻 Operating system information
  - 🌐 Website URL where the issue occurred
  - 🔗 Direct links to the Marker.io issue (public and private)
  - 📦 Any custom data fields
  - 📷 Screenshot URL with proper formatting

🔄 **Workflow Components**
- **Webhook Node**: Receives Marker.io POST requests
- **Code Node**: Processes and formats the data using JavaScript
- **ServiceNow Node**: Creates the incident using the ServiceNow API

→ Read more about Marker.io webhook events

🚨 **Troubleshooting**
Webhook not triggering:
- Verify the webhook URL is correctly copied from n8n to Marker.io
- Check that the "Issue Created" event is selected in Marker.io webhook settings
- Ensure the webhook is set to "Active" status in Marker.io
- Test with Marker.io's webhook tester feature
- Check that the n8n workflow is active and not in testing mode

ServiceNow incident not created:
- Verify ServiceNow credentials are correct and have not expired
- Check that the service account has permissions to create incidents
- Ensure the ServiceNow instance URL is correct (include https://)
- Test the ServiceNow connection directly in n8n credentials settings
- Check that ServiceNow API rate limits haven't been exceeded

Missing or incorrect data:
- Screenshot URL broken: the workflow already handles URL formatting, but verify Marker.io is generating screenshots
- Custom data missing: ensure custom fields exist in Marker.io before sending
- Due date formatting issues: check your ServiceNow date format requirements

JavaScript errors in the Format node:
- Check that the webhook payload structure hasn't changed in Marker.io updates
- Verify all field paths match the current Marker.io webhook schema
- Use n8n's data pinning to debug with actual webhook data
- Check for undefined values when optional fields are missing

Connection issues:
- ServiceNow timeout: increase the timeout in node settings if needed
- SSL/certificate errors: check the ServiceNow instance SSL configuration
- Network restrictions: ensure n8n can reach your ServiceNow instance
- Authentication failures: regenerate ServiceNow credentials if needed

Testing tips:
- Use n8n's "Execute Workflow" with pinned test data
- Enable webhook test mode in Marker.io for safe testing
- Check ServiceNow incident logs for detailed error messages
- Monitor n8n execution logs for specific failure points
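The Code node that formats the webhook data can be sketched as below. The payload shape (`data.title`, `data.priority`, and so on) is an assumption for illustration; check the actual Marker.io webhook data with n8n's data pinning before relying on these paths:

```javascript
// Sketch of the Code node that maps a Marker.io webhook payload to
// ServiceNow incident fields. Field paths are illustrative assumptions.
function formatIncident(payload) {
  const issue = payload.data || {};
  const description = [
    `🐛 ${issue.title || 'Untitled issue'} (ID: ${issue.id || 'n/a'})`,
    `📊 Priority: ${issue.priority || 'unset'} | Type: ${issue.type || 'bug'}`,
    `🖥️ Browser: ${issue.browser?.name || 'unknown'}`,
    `🌐 URL: ${issue.pageUrl || 'unknown'}`,
  ].join('\n');
  return {
    short_description: issue.title || 'Marker.io issue',
    description,
  };
}
```

Note the `|| 'unknown'` fallbacks: they are the kind of guard against undefined optional fields that the troubleshooting section above recommends.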
by Ibrahim Emre POLAT
Website & API Health Monitoring System with HTTP Status Validation

**How it works**
- Performs HTTP health checks on websites and APIs with automatic health status validation
- Checks HTTP status codes and analyzes JSON responses for common health indicators
- Returns detailed status information including response times and health status
- Implements conditional logic to handle different response scenarios
- Perfect for monitoring dashboards, alerts, and automated health checks

**Set up steps**
1. Deploy the workflow and activate it
2. Get the webhook URL from the trigger node
3. Configure your monitoring system to call the webhook endpoint
4. Send POST requests with target URLs for health monitoring
5. Receive JSON responses with health status, HTTP codes, and timestamps

**Usage**
- Send POST requests to the webhook URL with a target URL parameter
- Optionally configure timeout and status expectations in the request body
- Use with external monitoring tools like Nagios, Zabbix, or custom dashboards
- Set up scheduled monitoring calls for continuous health validation
- Example request: send a POST with {"url": "https://your-site.com", "timeoutMs": 5000}
- Success response returns: {"ok": true, "statusCode": 200, "healthStatus": "ok"}
- Failure response returns: {"ok": false, "error": "Health check failed", "statusCode": 503}

**Benefits**
- Proactive monitoring to identify issues before they impact users
- Detailed diagnostics with comprehensive health data for troubleshooting
- Integration ready: works with existing monitoring and alerting systems
- Customizable timeout settings, expected status codes, and health indicators
- Scalable: monitor multiple services with a single workflow endpoint

**Use Cases**
- E-commerce platforms: monitor payment APIs, inventory systems, user authentication
- Microservices: health validation for distributed service architectures
- API gateways: endpoint monitoring with response time validation
- Database connections: track connectivity and performance metrics
- Third-party integrations: monitor external API dependencies and SLA compliance

**Target Audience**
- DevOps Engineers implementing production monitoring
- System Administrators managing server uptime
- Site Reliability Engineers building monitoring systems
- Development Teams tracking API health in staging/production
- IT Support Teams for proactive issue detection
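The conditional logic that combines the HTTP status code with JSON health indicators can be sketched as a small evaluation function. The indicator field names checked here (`status`, `health`) are assumptions about common conventions, not a standard:

```javascript
// Sketch of the health-evaluation step: combine the HTTP status code
// with common JSON health indicators into a single verdict.
function evaluateHealth(statusCode, body) {
  const httpOk = statusCode >= 200 && statusCode < 300;
  // Look for a conventional health field; absent body counts as healthy.
  const indicator = body ? (body.status || body.health) : undefined;
  const bodyOk = indicator === undefined ||
    ['ok', 'up', 'healthy', 'pass'].includes(String(indicator).toLowerCase());
  return {
    ok: httpOk && bodyOk,
    statusCode,
    healthStatus: httpOk && bodyOk ? 'ok' : 'down',
  };
}

console.log(evaluateHealth(200, { status: 'ok' }).ok); // true
console.log(evaluateHealth(503, null).ok);             // false
```

A 200 response whose body reports `"status": "down"` is still classified as unhealthy, which matches the template's claim of analyzing JSON responses rather than trusting the status code alone.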
by ilovepdf
Watch a Google Drive folder and use the iLovePDF Compress tool to save the result in another Google Drive folder

This n8n template shows how to upload a file to your desired Google Drive folder, compress it with the iLovePDF tool, and move the compressed file to another folder.

**Good to know**
- This is just an example to show how the flow should start so it works without issues. After the "combine" step, you can change it according to your needs, but always keep the four main steps of iLoveAPI's request workflow: start, upload, process, and download (e.g., a step that emails the compressed file instead of moving it to another folder).
- Use cases are many: with this template you can monitor a 'to-process' folder for large documents, automatically compress them for better storage efficiency, and move them to an archive folder, all without manual intervention. Then explore adapting it to the functionality that suits you best!

**How it works**
1. Google Drive Trigger: the workflow starts when a new file is added to a specific Google Drive folder (the source folder).
2. Authentication: the Public Key is sent to the iLoveAPI authentication server to get a time-sensitive Bearer Token.
3. Start Task: a new compress task is initiated with the iLoveAPI server, returning a Task ID and Server Address.
4. Download/Upload: the file is downloaded from Google Drive and then immediately uploaded to the dedicated iLoveAPI server using the Task ID.
5. Process: the compression is executed by sending the Task ID, the server_filename, and the original file name to the iLoveAPI /process endpoint.
6. Download Compressed File: the compressed file's binary data is downloaded from the iLoveAPI /download endpoint.
7. Save Compressed File: the compressed PDF is uploaded to the designated Google Drive folder (the destination folder).
8. Move Original File: the original file in the source folder is moved to a separate location (e.g., an 'Archived' folder) to prevent the workflow from processing it again.

**How to use**
- **Credentials**: Set up your Google Drive and iLoveAPI credentials in the n8n workflow.
- **iLoveAPI Public Key**: Paste your iLoveAPI public key into the body of the *Send your iLoveAPI public key to their server* node for authentication, and then into the body of the *Get task from iLoveAPI server* node.
- **Source/Destination Folders**: In the *Upload your file to Google Drive* (Trigger) and *Save compressed file in your Google Drive* (Action) nodes, select your desired source and destination folders, respectively.

**Requirements**
- **Google Drive** account/credentials (for file monitoring and storage); see the docs provided in the node if needed.
- **iLoveAPI** account/API key (for the compression service).
- An n8n instance (cloud or self-hosted).

**Need Help?** See the iLoveAPI documentation.
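Step 5 of the flow above sends the Task ID and file names to the /process endpoint. A minimal sketch of that request body, as it might be assembled in a Code node, looks like this (the exact body accepted by iLoveAPI should be verified against their documentation):

```javascript
// Sketch: build the body for the iLoveAPI /process step, after the
// start step returned a task id and the upload step returned a
// server_filename. Variable names are illustrative.
function buildProcessBody(taskId, serverFilename, originalName) {
  return {
    task: taskId,            // from the "start" response
    tool: 'compress',        // the iLovePDF Compress tool
    files: [
      { server_filename: serverFilename, filename: originalName },
    ],
  };
}
```

The same four-step shape (start, upload, process, download) applies to other iLoveAPI tools, which is why the template asks you to keep those steps intact when customizing.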
by Jitesh Dugar
🎬 WhatsApp Cinematic Video Automation with OpenAI Sora

Elevate your digital presence with high-fidelity cinematic video automation. This workflow orchestrates the complex, asynchronous rendering process of OpenAI Sora, transforming static product images or creative concepts into hosted MP4 assets ready for immediate deployment to your storefront or social channels.

🎯 **What This Workflow Does**
This template manages two distinct cinematic video pipelines through a specialized polling and hosting architecture:

🛍️ Pipeline 1: Cinematic E-commerce Walkthroughs
Turn a single product photo into a premium commercial asset. This path generates 10–20 second videos featuring dramatic reveals, 360° rotations, and professional studio lighting effects. The resulting video URL can be embedded directly into Shopify, Meesho, or other e-commerce platforms.

📱 Pipeline 2: Dynamic Social Media Remixes
Optimized for TikTok, Reels, and Stories, this path "remixes" source images into specific artistic styles (like cyberpunk or anime) with vertical aspect ratios (9:16). It applies cinematic camera movements and dramatic transitions to create viral-ready short-form content.

✨ **Key Features**
- **Asynchronous Loop Management**: Built-in Wait → Poll → IF logic checks Sora's status every 20 seconds, ensuring the workflow only proceeds once the heavy rendering is complete.
- **Automated Media Hosting**: Uses the uploadtourl node to automatically transfer completed MP4 binaries from OpenAI's temporary storage to your permanent CDN via presigned PUT URLs.
- **Adaptive Prompt Engineering**: Code nodes dynamically inject user parameters (product name, duration, style) into complex Sora prompts to guarantee a premium "commercial aesthetic".
- **Webhook-Driven Scalability**: Centralizes all video generation requests into a single endpoint that routes tasks based on the jobType payload.

💼 **Perfect For**
- **E-commerce Brands**: Creating thousands of unique product walkthroughs without a video crew.
- **Social Media Content Creators**: Rapidly testing different visual aesthetics for short-form video ads.
- **Marketing Agencies**: Scaling personalized video ad campaigns across different global markets and platforms.
- **App Developers**: Automating the creation of dynamic background videos for mobile or web interfaces.

🔧 **What You'll Need**
- **OpenAI Sora API Access**: Requires a Tier 4/5 or Enterprise OpenAI platform account with Sora permissions.
- **CDN/Cloud Storage**: A hosting provider (like S3 or GCS) that supports presigned PUT URLs for uploading files via uploadtourl.

⚙️ **Configuration Steps**
1. API Setup: Create an HTTP Header Auth credential named OpenAI Header Auth with your Bearer token.
2. Domain Mapping: Replace YOUR_CDN_DOMAIN in the response nodes with your actual public asset URL structure.
3. Wait Tuning: Adjust the 20-second Wait node if your generation durations frequently exceed Sora's standard render times.

Bring your assets to life! Import this template and connect your OpenAI API key to start generating and hosting cinematic videos automatically!
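The "Adaptive Prompt Engineering" Code node can be sketched as a routing function keyed on the jobType payload. The prompt wording below is an illustrative assumption, not the exact prompt shipped with the workflow:

```javascript
// Sketch of the prompt-building Code node: route on jobType and inject
// user parameters (product name, duration, style) into a Sora prompt.
function buildSoraPrompt({ jobType, productName, duration, style }) {
  if (jobType === 'ecommerce') {
    // Pipeline 1: cinematic product walkthrough
    return `Cinematic ${duration}-second product walkthrough of ${productName}: ` +
      `dramatic reveal, 360° rotation, professional studio lighting.`;
  }
  // Pipeline 2: vertical 9:16 social remix in a chosen aesthetic
  return `Vertical 9:16 ${duration}-second remix in ${style} style, ` +
    `with cinematic camera movements and dramatic transitions.`;
}
```

Routing both pipelines through one webhook and branching on `jobType` is what lets the template centralize all generation requests into a single endpoint.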
by panyanyany
**Overview**
This workflow uses the Defapi API with Google's Gemini AI to transform digital photos into authentic Polaroid-style vintage photographs. Upload your photos, provide a creative prompt, and get AI-generated vintage effects with that distinctive instant camera charm.

Input: Digital photos + creative prompt + API key
Output: Polaroid-style vintage photographs

The system provides a simple form interface where users submit their images, prompt, and API key. It automatically processes requests through the Defapi API, monitors generation status, and delivers the final vintage photo output. Ideal for photographers, content creators, and social media enthusiasts looking to add vintage charm to their digital photos.

**Prerequisites**
- A Defapi account and API key: sign up at Defapi.org
- An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities
- Digital photos for transformation (well-lit photos work best)
- Basic knowledge of AI prompts for vintage photo generation

Example prompt: Take a picture with a Polaroid camera. The photo should exhibit rich saturation and vintage color cast, with soft tones, low contrast, and vignetting. The texture features distinct film grain. Do not change the faces. Replace the background behind the two people with a white curtain. Make them close to each other with clear faces and normal skin color.

**Setup Instructions**
1. Obtain API Key: register at Defapi.org and generate your API key. Store it securely.
2. Configure the Form: set up the "Upload 2 Images" form trigger with Image 01 & Image 02 (file uploads), API Key (text field), and Prompt (text field).
3. Test the Workflow: click "Execute Workflow" in n8n, access the form URL, upload two photos, enter your prompt, and provide your API key. The workflow processes the images, sends the request to the Defapi API, waits 10 seconds, then polls until generation is complete.
4. Handle Outputs: the final node displays the generated image URL for download or sharing.

**Workflow Structure**
The workflow consists of the following nodes:
- Upload 2 Images (Form Trigger) - Collects user input: two image files, API key, and prompt
- Convert to JSON (Code Node) - Converts uploaded images to base64 and formats data
- Send Image Generation Request to Defapi.org API (HTTP Request) - Submits the generation request
- Wait for Image Processing Completion (Wait Node) - Waits 10 seconds before checking status
- Obtain the generated status (HTTP Request) - Polls the API for completion status
- Check if Image Generation is Complete (IF Node) - Checks if status equals 'success'
- Format and Display Image Results (Set Node) - Formats the final image URL output

**Technical Details**
- **API Endpoint**: https://api.defapi.org/api/image/gen (POST request)
- **Model Used**: google/gemini (Gemini AI)
- **Status Check Endpoint**: https://api.defapi.org/api/task/query (GET request)
- **Wait Time**: 10 seconds between status checks
- **Image Processing**: uploaded images are converted to base64 format for API submission
- **Authentication**: Bearer token authentication using the provided API key
- **Specialized For**: Polaroid-style vintage photography and instant camera effects

**Customization Tips**
- **Enhance Prompts**: include specifics like vintage color cast, film grain texture, vignetting, lighting conditions, and atmosphere to improve AI photo quality. Specify desired saturation levels and contrast adjustments.
- **Photo Quality**: use well-lit, clearly exposed photos for best results. The AI can simulate flash effects and vintage lighting, but quality input produces better output. Note that generated photos may sometimes be unclear or have incorrect skin tones; try multiple generations to achieve optimal results.
by panyanyany
Transform Your Selfies into 3D Figurines with Nano Banana AI

**Overview**
This workflow utilizes the Defapi API with Google's Nano Banana AI model to transform your selfies into stunning 3D figurines, action figures, and collectible merchandise designs. Simply upload a selfie photo, provide a creative prompt describing your desired 3D figurine or action figure design, and watch as the AI generates professional-quality product visualizations.

Input: Your selfie photo + creative prompt + API key
Output: AI-generated 3D figurine and action figure designs perfect for collectibles, merchandise, and product visualization

Users interact through a simple form, providing a text prompt describing the desired creative scene, an image, and their API key. The system automatically submits the request to the Defapi API, monitors the generation status in real time, and retrieves the final creative image output. This solution is ideal for marketers, product designers, e-commerce businesses, and content creators who want to quickly generate compelling product advertisements and creative visuals with minimal setup, particularly 3D figurines and collectible merchandise designs.

**Prerequisites**
- A Defapi account and API key: sign up at Defapi.org to obtain your API key.
- An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities.
- Basic knowledge of AI prompts for product creative generation, especially for 3D figurines and collectible designs.

Example prompt: Create a 1/7 scale commercialized 3D figurine of the characters in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a round transparent acrylic base, with no text on the base. The content on the computer screen is the ZBrush modeling process of this figurine. Next to the computer screen is a packaging box with a rounded corner design and a transparent front window; the figure inside is clearly visible.

Important note: avoid using dark photos as input, as the generated 3D figurine will also appear dark.

**Setup Instructions**
1. Obtain API Key: register at Defapi.org and generate your API key. Store it securely and do not share it publicly.
2. Configure the Form: in the "Upload Image" form trigger node, ensure the following fields are set up: Image (file upload), API Key (text field), and Prompt (text field).
3. Test the Workflow: click "Execute Workflow" in n8n, access the generated form URL, upload your image, enter your prompt, and provide your API key. The workflow processes the image through the "Convert to JSON" node, then sends the request to the Defapi API. The system waits 10 seconds and then polls the API status until the image generation is complete.
4. Handle Outputs: the final "Format and Display Image Results" node formats and displays the generated creative image URL for download or embedding.

**Workflow Structure**
The workflow consists of the following nodes:
- Upload Image (Form Trigger) - Collects user input: image file, API key, and prompt
- Convert to JSON (Code Node) - Converts the uploaded image to base64 and formats data
- Send Image Generation Request to Defapi.org API (HTTP Request) - Submits the generation request
- Wait for Image Processing Completion (Wait Node) - Waits 10 seconds before checking status
- Obtain the generated status (HTTP Request) - Polls the API for completion status
- Check if Image Generation is Complete (IF Node) - Checks if status equals 'success'
- Format and Display Image Results (Set Node) - Formats the final image URL output

**Technical Details**
- **API Endpoint**: https://api.defapi.org/api/image/gen (POST request)
- **Model Used**: google/nano-banana (Nano Banana AI)
- **Status Check Endpoint**: https://api.defapi.org/api/task/query (GET request)
- **Wait Time**: 10 seconds between status checks
- **Image Processing**: the uploaded image is converted to base64 format for API submission
- **Authentication**: Bearer token authentication using the provided API key
- **Specialized For**: 3D figurines, collectible merchandise, and product visualization

**Customization Tips**
- **Enhance Prompts**: include specifics like scene setting, lighting, style (e.g., realistic, artistic, cinematic), product placement, and visual elements to improve AI creative image quality. For 3D figurines, specify scale, materials, and display context.
- **Form Fields**: the form accepts an image file, API key (text), and prompt (text) as required fields.
- **Error Handling**: the workflow includes conditional logic to check for successful completion before displaying results.
- **Best Practices for Nano Banana AI**: use detailed descriptions for figurine designs, specify lighting conditions, and include environmental context for realistic 3D figurine generation.
- **Photo Quality Tips**: use well-lit photos for best results; dark input images will make the generated 3D figurine appear dark too.
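The IF-node check that gates the polling loop can be sketched as a one-line predicate. The response shape (a `status` field, possibly nested under `data`) is an assumption based on the description of the /api/task/query endpoint:

```javascript
// Sketch of the "Check if Image Generation is Complete" IF node:
// keep polling until the task-query response reports 'success'.
// The response shape is an illustrative assumption.
function isGenerationComplete(statusResponse) {
  const status = statusResponse?.data?.status || statusResponse?.status;
  return status === 'success';
}

console.log(isGenerationComplete({ status: 'pending' })); // false
console.log(isGenerationComplete({ status: 'success' })); // true
```

When the predicate is false, the workflow loops back to the 10-second Wait node and queries again; when true, it proceeds to the Set node that formats the image URL.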
by 1Shot API
Swap Tokens with Li.Fi

The growing popularity of agentic payments has led to the development of protocols like x402, where agents and humans can pay for internet resources over standard HTTP using stablecoins. This workflow lets you run your own swap relayer: callers provide an x402-compatible payment header and a desired destination network to instantly receive gas tokens. The setup is trust-minimized, so users have the following guarantees:
- They will be the receiver of the swap.
- Only the amount of tokens they authorized will be swapped.

**Setup**
1. To run this relayer workflow, you will need an account on 1Shot API.
2. Import the 1Shot Gas Station contract into your business for any chain you wish to support with your relayer.
3. Follow the directions in the workflow sticky notes to update the Payment Configs for the tokens you wish to support swaps for.
4. Lastly, distribute your webhook API endpoint to your users.
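The two trust-minimization guarantees can be sketched as a check the relayer performs before executing a swap. This is a hypothetical sketch only: the field names below are illustrative and do not represent the actual x402 wire format or the 1Shot contract interface:

```javascript
// Hypothetical sketch of the relayer's trust-minimization checks:
// the decoded payment authorization must name the caller as the swap
// receiver and must bound the amount being swapped.
function checkAuthorization(auth, callerAddress, swapAmount) {
  const receiverOk = auth.receiver === callerAddress;          // guarantee 1
  const amountOk = BigInt(swapAmount) <= BigInt(auth.maxAmount); // guarantee 2
  return receiverOk && amountOk;
}

console.log(checkAuthorization({ receiver: '0xabc', maxAmount: '1000' }, '0xabc', '500')); // true
```

In the real workflow these guarantees are enforced on-chain by the imported Gas Station contract rather than by off-chain JavaScript; the sketch only illustrates the invariant.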
by Jitesh Dugar
🧼 Email Subscription Cleaner

A fully automated workflow that cleans, validates, and restructures your subscriber list using Google Sheets and VerifiEmail. Perfect for marketers, SaaS teams, or anyone maintaining an email database.

🚀 **What This Workflow Does**
In one automated run, it:
- Accepts a cleaning request via Webhook
- Extracts list settings, preferences, and options
- Fetches all subscribers from Google Sheets
- Normalizes emails and formats subscriber fields
- Performs real-time verification using VerifiEmail
- Classifies each subscriber as: remove (invalid / disposable / role), keep (valid & safe), or tag (special cases)
- Deletes bad emails directly from the source sheet
- Stores all valid emails in a clean, curated CleanSubscribers sheet
- Returns a structured JSON summary to the caller

🔍 **Why This Template Is Useful**
- Improves deliverability
- Removes spam traps, bots, and disposable domains
- Cleans and reorganizes messy lists
- Reduces bounce rates
- Builds a healthier mailing list for campaigns
- No CSV download/upload required: runs directly on Google Sheets

🧠 **How It Works (In Simple Steps)**
1. Webhook receives the batch-clean request
2. Extract Inputs parses settings (listId, priority, options)
3. Fetch Subscribers reads rows from Google Sheets
4. Normalize each subscriber's fields
5. Validate Email quality (MX check, disposable, provider data)
6. Merge subscriber info + validation results
7. Classify each subscriber into keep/remove/tag
8. Clean Up: remove → deletes the row; keep → appends to the clean list
9. Respond with a clean JSON summary

Fast, simple, reliable. Perfect for weekly or on-demand cleanup.

🔧 **Setup Required**
Connect 2 credentials:
- **Google Sheets** (read / delete / append)
- **VerifiEmail** (API key)

Update:
- Sheet name (SubscriberList)
- Clean list sheet (CleanSubscribers)
- Optional tag rules in "Classify Email"

No other configuration needed.

🏁 **Perfect For**
Newsletters, marketing teams, event lists, SaaS mailing lists, CRM cleanup, lead verification, and removing dead/invalid emails automatically.

🏷️ **Tags**
email, cleanup, validation, google-sheets, verifiemail, marketing, automation, list-cleaner, webhook
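The Classify step described above can be sketched as a small decision function. The threshold and field names (`valid`, `disposable`, `role`, and the tag rule) are assumptions based on the description, not the actual VerifiEmail response schema:

```javascript
// Sketch of the "Classify Email" step: map a verification result to
// one of remove / keep / tag. Field names are illustrative assumptions.
function classifySubscriber(v) {
  // Invalid, disposable, or role addresses (info@, admin@, ...) are removed.
  if (!v.valid || v.disposable || v.role) return 'remove';
  // Example of an optional tag rule for a special case.
  if (v.catchAll) return 'tag';
  return 'keep';
}

console.log(classifySubscriber({ valid: true }));                   // "keep"
console.log(classifySubscriber({ valid: true, disposable: true })); // "remove"
```

Downstream, 'remove' items drive the row-deletion branch, 'keep' items are appended to CleanSubscribers, and 'tag' items can be routed to whatever special handling you configure.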
by System Admin
Extracts selected headlines, editor's picks, spotlight sections, and more. URL: https://www.ft.com/
by Lucas Walter
AI Video Generator for eCommerce Product Catalogs

Transform static product images from any online store into engaging animated videos using Google's Veo 3.1 AI. Simply submit a catalog page URL and automatically generate professional product showcase videos where models pose and move to display clothing and fashion items from multiple angles. Perfect for elevating product pages with dynamic content that increases conversion rates.

**How it works**
- **Submit any eCommerce catalog page URL** through a simple web form (works with Shopify, WooCommerce, and most online stores)
- **Automatically scrapes product listings** using Firecrawl to extract product titles and high-quality images
- **Batch processes product images** with intelligent iteration through your catalog inventory
- **Generates 8-second animated videos** using Google Veo 3.1, in which models wearing the clothing strike multiple poses to showcase fit and style
- **Polls for completion status** and automatically downloads finished videos when ready
- **Organizes assets in Google Drive** with source images and output videos in a structured folder system

The workflow creates professional product videos that show garments from different angles with natural model movements, giving shoppers a much better sense of how items look and fit compared to static photos alone.

**Set up steps**
1. Connect API credentials: a Firecrawl account for web scraping, the Google Gemini/Veo API for video generation, and Google Drive for asset storage
2. Create a Google Drive output folder where source images and generated videos will be automatically saved
3. Configure the folder ID in the workflow to point to your designated Drive location
4. Adjust the product limit (optional) to control how many catalog items to process per run
5. Deploy the form webhook to get your submission URL for catalog page processing

Time investment: ~15-20 minutes for API setup and configuration; after that, just submit catalog URLs to automatically generate video content for your entire product line.

**Requirements**: Firecrawl account for web scraping, Google Cloud account with Veo 3.1 API access (currently in preview), Google Drive account. Works best with fashion and apparel catalogs.

Note: Video generation takes approximately 10 seconds per product as Veo processes each request. The workflow includes automatic polling to handle the async video generation process.
by Piotr Sobolewski
**How it works**
This workflow is your personal digital assistant for tracking specific information on websites that lack APIs or RSS feeds. It's perfect for keeping an eye on:
- Niche job postings on specialized forums or company career pages.
- Product availability or price changes on smaller e-commerce sites.
- Any specific text or data appearing on a public webpage.

It automatically:
- Visits a specified webpage on a schedule (e.g., hourly, daily).
- Intelligently extracts specific data points (like job titles, links, product names, or stock status) from the page's HTML using advanced selectors.
- Notifies you via Telegram when new relevant information is found or a change occurs.

Stop manually refreshing pages and let automation bring the critical updates directly to you!

**Set up steps**
Setting up this workflow is more involved than basic automations due to the web scraping aspect, typically taking around 20-40 minutes. You'll need to:
1. Identify the exact URL of the webpage you want to monitor.
2. Learn how to find CSS selectors or XPath for the specific data elements you want to extract from the webpage (a browser's developer tools are essential here).
3. Authenticate your Telegram account to receive notifications.
4. Optionally, set up an AI service (like OpenAI) if you want to summarize extracted content.

All detailed setup instructions and specific configuration guidance, including how to find CSS selectors, are provided within the workflow itself using sticky notes.