by John Pranay Kumar Reddy
**Short Summary**
Proactively alerts on service endpoint changes and pod/container issues (Pending, Not Ready, restart spikes) using Prometheus metrics, formatted and sent to Slack.

**Category:** DevOps / Monitoring & Observability
**Tags:** kubernetes, prometheus, slack, alerting, sre, ops, kube-state-metrics

**Prerequisites**
- Prometheus scraping kube-state-metrics v2.x.
- Slack App or Incoming Webhook (channel access).
- n8n instance with outbound access to Prometheus & Slack.

**Required Credentials in n8n**
- Slack: Bot OAuth (chat:write) or Incoming Webhook URL.
- (Optional) Prometheus Basic Auth (if your Prometheus needs it).

**What This Template Does**
- Detects pods stuck in Pending (scheduling problems such as taints/affinity/capacity).
- Detects containers Not Ready (readiness probe failures).
- Detects container restart spikes over a sliding window (default 5 minutes).
- Detects service discovery changes (endpoint count diffs, current vs. previous snapshot).
- Sends clean, emoji-enhanced Slack alerts with pod/namespace/service context.
- Outputs a 5-minute summary block to reduce noise.

**Slack Message Style (examples)**
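The Slack-formatting step for the Pending-pod check can be sketched as a small function. This is a minimal sketch, assuming Prometheus instant-query results for `kube_pod_status_phase{phase="Pending"}` have already been fetched; the metric and label names follow kube-state-metrics v2.x conventions, but the exact shape of your HTTP Request node output may differ.

```javascript
// Sketch of the Slack-formatting step for pods stuck in Pending.
// Input shape assumes Prometheus's standard instant-query result array:
// [{ metric: {namespace, pod, ...}, value: [timestamp, "count"] }, ...]
function formatPendingAlert(promResult) {
  const lines = promResult
    .filter((r) => Number(r.value[1]) > 0) // only pods actually Pending
    .map((r) => `• \`${r.metric.namespace}/${r.metric.pod}\` is *Pending*`);
  if (lines.length === 0) return null; // nothing to alert on
  return {
    text: `🚨 *${lines.length} pod(s) stuck in Pending*\n` + lines.join("\n"),
  };
}

// Example with a mocked Prometheus response:
const sample = [
  { metric: { namespace: "prod", pod: "api-7f9c" }, value: [1700000000, "1"] },
  { metric: { namespace: "prod", pod: "web-2b1a" }, value: [1700000000, "0"] },
];
const msg = formatPendingAlert(sample);
```

Returning `null` when nothing is Pending lets a downstream IF node skip the Slack call entirely, which keeps the channel quiet.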
by Lorena
This workflow converts a local CSV file to a JSON file.
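The core transform can be sketched in a few lines. In a real n8n flow the Read Binary File and Extract From File nodes do the heavy lifting; this is a minimal sketch assuming a simple comma-separated file with a header row and no quoted fields.

```javascript
// Minimal CSV-to-JSON conversion: first line is the header, each following
// line becomes one object keyed by the header names.
function csvToJson(csvText) {
  const [headerLine, ...rows] = csvText.trim().split(/\r?\n/);
  const headers = headerLine.split(",");
  return rows.map((row) => {
    const cells = row.split(",");
    return Object.fromEntries(headers.map((h, i) => [h, cells[i] || ""]));
  });
}

const json = csvToJson("name,age\nAda,36\nGrace,45");
```

For CSVs with quoted fields or embedded commas, prefer a real parser node over string splitting.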
by vinci-king-01
**Amazon Keyboard Product Scraper with AI and Google Sheets Integration**

**Target Audience**
- E-commerce analysts and researchers
- Product managers tracking competitor keyboards
- Data analysts monitoring Amazon keyboard market trends
- Business owners conducting market research
- Developers building product comparison tools

**Problem Statement**
Manual monitoring of Amazon keyboard products is time-consuming and error-prone. This template solves the challenge of automatically collecting, structuring, and storing keyboard product data for analysis, enabling data-driven decision making in the competitive keyboard market.

**How it Works**
This workflow automatically scrapes Amazon keyboard products using AI-powered web scraping and stores them in Google Sheets for comprehensive analysis and tracking.

Key Components:
1. **Scheduled Trigger** - Runs the workflow at specified intervals to keep data fresh and up to date
2. **AI-Powered Scraping** - Uses ScrapeGraphAI to intelligently extract product information from Amazon search results with natural language processing
3. **Data Processing** - Transforms and structures the scraped data for optimal spreadsheet compatibility
4. **Google Sheets Integration** - Automatically saves product data to your spreadsheet with proper column mapping

**Google Sheets Column Specifications**
The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| title | String | Product name and model | "Logitech MX Keys Advanced Wireless Illuminated Keyboard" |
| url | URL | Direct link to Amazon product page | "https://www.amazon.com/dp/B07S92QBCX" |
| category | String | Product category classification | "Electronics" |

**Setup Instructions**
Estimated setup time: 10-15 minutes

Prerequisites:
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access

Step-by-Step Configuration:
1. **Install Community Nodes** - Install the ScrapeGraphAI community node: `npm install n8n-nodes-scrapegraphai`
2. **Configure ScrapeGraphAI Credentials** - Navigate to Credentials in your n8n instance, add new ScrapeGraphAI API credentials, enter your API key from the ScrapeGraphAI dashboard, and test the connection to ensure it's working.
3. **Set up Google Sheets Connection** - Add Google Sheets OAuth2 credentials, grant the necessary permissions for spreadsheet access, select or create a target spreadsheet for data storage, and configure the sheet name (default: "Sheet1").
4. **Customize Amazon Search Parameters** - Update the websiteUrl parameter in the ScrapeGraphAI node; modify search terms, filters, or categories as needed; adjust the user prompt to extract additional fields if required.
5. **Configure Schedule Trigger** - Set your preferred execution frequency (daily, weekly, etc.), choose appropriate time zones for your business hours, and consider Amazon's rate limits when setting the frequency.
6. **Test and Validate** - Run the workflow manually to verify all connections, check Google Sheets for proper data formatting, and validate that all required fields are being captured.

**Workflow Customization Options**

Modify Search Criteria:
- Change the Amazon URL to target specific keyboard categories
- Add price filters, brand filters, or rating requirements
- Update search terms for different product types

Extend Data Collection:
- Modify the user prompt to extract additional fields (price, rating, reviews)
- Add data processing nodes for advanced analytics
- Integrate with other data sources for comprehensive market analysis

Output Customization:
- Change the Google Sheets operation from "append" to "upsert" for deduplication
- Add data validation and cleaning steps
- Implement error handling and retry logic

**Use Cases**
- **Competitive Analysis**: Track competitor keyboard pricing and features
- **Market Research**: Monitor trending keyboard products and categories
- **Inventory Management**: Keep track of available keyboard options
- **Price Monitoring**: Track price changes over time
- **Product Development**: Research market gaps and opportunities

**Important Notes**
- Respect Amazon's terms of service and rate limits
- Consider implementing delays between requests for large datasets
- Regularly review and update your scraping parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly

**Troubleshooting**

Common Issues:
- ScrapeGraphAI connection errors: Verify API key and account status
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Data formatting issues: Review the Code node's JavaScript logic
- Rate limiting: Adjust schedule frequency and implement delays

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
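The Data Processing step can be sketched as a Code-node function that flattens the scraper output into one row object per product, matching the title/url/category columns. The input shape (a `products` array) is an assumption; ScrapeGraphAI's actual response depends on the user prompt you configure.

```javascript
// Sketch of the Data Processing step: one sheet row per scraped product,
// with trimmed titles and a fallback category for missing values.
function toSheetRows(scrapeResult) {
  return (scrapeResult.products || []).map((p) => ({
    title: (p.title || "").trim(),
    url: p.url || "",
    category: p.category || "Uncategorized",
  }));
}

const rows = toSheetRows({
  products: [
    { title: " Logitech MX Keys ", url: "https://www.amazon.com/dp/B07S92QBCX", category: "Electronics" },
    { title: "NoName Keyboard", url: "https://www.amazon.com/dp/XXXX" },
  ],
});
```

Keeping keys identical to the sheet column names lets the Google Sheets node auto-map fields without manual column configuration.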
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**News Article Scraping and Analysis with AI and Google Sheets Integration**

**Target Audience**
- News aggregators and content curators
- Media monitoring professionals
- Market researchers tracking industry news
- PR professionals monitoring brand mentions
- Journalists and content creators
- Business analysts tracking competitor news
- Academic researchers collecting news data

**Problem Statement**
Manual news monitoring is time-consuming and often misses important articles. This template solves the challenge of automatically collecting, structuring, and storing news articles from any website for comprehensive analysis and tracking.

**How it Works**
This workflow automatically scrapes news articles from websites using AI-powered extraction and stores them in Google Sheets for analysis and tracking.

Key Components:
1. **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh content
2. **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract article titles, URLs, and categories from any news website
3. **Data Processing**: Formats extracted data for optimal spreadsheet compatibility
4. **Automated Storage**: Saves all articles to Google Sheets with metadata for easy filtering and analysis

**Google Sheets Column Specifications**
The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| title | String | Article headline and title | "'My friend died right in front of me' - Student describes moment air force jet crashed into school" |
| url | URL | Direct link to the article | "https://www.bbc.com/news/articles/cglzw8y5wy5o" |
| category | String | Article category or section | "Asia" |

**Setup Instructions**
Estimated setup time: 10-15 minutes

Prerequisites:
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access

Step-by-Step Configuration:
1. **Install Community Nodes** - Install the ScrapeGraphAI community node: `npm install n8n-nodes-scrapegraphai`
2. **Configure ScrapeGraphAI Credentials** - Navigate to Credentials in your n8n instance, add new ScrapeGraphAI API credentials, enter your API key from the ScrapeGraphAI dashboard, and test the connection to ensure it's working.
3. **Set up Google Sheets Connection** - Add Google Sheets OAuth2 credentials, grant the necessary permissions for spreadsheet access, select or create a target spreadsheet for data storage, and configure the sheet name (default: "Sheet1").
4. **Customize News Source Parameters** - Update the websiteUrl parameter in the ScrapeGraphAI node, modify the target news website URL as needed, adjust the user prompt to extract additional fields if required, and test with a small website first before scaling to larger news sites.
5. **Configure Schedule Trigger** - Set your preferred execution frequency (daily, hourly, etc.), choose appropriate time zones for your business hours, and consider the news website's update frequency when setting intervals.
6. **Test and Validate** - Run the workflow manually to verify all connections, check Google Sheets for proper data formatting, and validate that all required fields are being captured.

**Workflow Customization Options**

Modify News Sources:
- Change the website URL to target different news sources
- Add multiple news websites for comprehensive coverage
- Implement filters for specific topics or categories

Extend Data Collection:
- Modify the user prompt to extract additional fields (author, date, summary)
- Add sentiment analysis for article content
- Integrate with other data sources for comprehensive analysis

Output Customization:
- Change the Google Sheets operation from "append" to "upsert" for deduplication
- Add data validation and cleaning steps
- Implement error handling and retry logic

**Use Cases**
- **Media Monitoring**: Track mentions of your brand, competitors, or industry keywords
- **Content Curation**: Automatically collect articles for newsletters or content aggregation
- **Market Research**: Monitor industry trends and competitor activities
- **News Aggregation**: Build custom news feeds for specific topics or sources
- **Academic Research**: Collect news data for research projects and analysis
- **Crisis Management**: Monitor breaking news and emerging stories

**Important Notes**
- Respect the target website's terms of service and robots.txt
- Consider implementing delays between requests for large datasets
- Regularly review and update your scraping parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly

**Troubleshooting**

Common Issues:
- ScrapeGraphAI connection errors: Verify API key and account status
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Data formatting issues: Review the Code node's JavaScript logic
- Rate limiting: Adjust schedule frequency and implement delays

Pro Tips:
- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a small website first before scaling to larger news sites
- Consider adding filters in the Code node to exclude certain article types or categories
- Monitor execution logs for any issues and adjust parameters accordingly

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
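The pro tip about filtering in the Code node can be sketched as a category blocklist applied before the Google Sheets write. The article shape mirrors the title/url/category columns; the blocked categories are placeholders to adjust for your sources.

```javascript
// Sketch of a Code-node filter: drop articles whose category is blocked
// before they reach Google Sheets.
const EXCLUDED_CATEGORIES = new Set(["Sport", "Celebrity"]); // adjust to taste

function filterArticles(articles) {
  return articles.filter((a) => !EXCLUDED_CATEGORIES.has(a.category));
}

const kept = filterArticles([
  { title: "Markets rally", url: "https://example.com/a", category: "Business" },
  { title: "Cup final recap", url: "https://example.com/b", category: "Sport" },
]);
```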
by Faisal Khan
**What this template does (in one sentence)**
Ranks "best next actions" for an incoming ticket/incident using the VectorPrime Decision Kernel (deterministic + auditable), then returns a clean JSON result you can route to Slack, a backlog tool, or email.

Pricing note: The template is free. The VectorPrime API requires a user-provided API key (free usage up to a limit; paid plans beyond that).

**When you should use this**
Use this workflow when you already have a ticket/incident AND a short list of candidate actions, and you want a consistent, explainable "what should we do first?" ranking. Examples:
- IT Ops incidents (restart vs. rollback vs. escalate)
- Support tickets (refund vs. troubleshoot vs. escalate)
- On-call triage (wake someone up vs. wait)

**What you get (outputs)**
A structured JSON response including:
- ok (true/false)
- the original request payload
- VectorPrime ranking + probabilities (when ok=true)
- an audit timestamp

**How it works**
1. Webhook receives the JSON payload.
2. Normalize step validates it and builds vp_payload.
3. VectorPrime Rank (HTTP Request) calls the VectorPrime /v1/kernel/rank endpoint.
4. Parse step validates the response.
5. If OK → builds optional payloads + writes an audit log → responds with success.
6. If not OK → responds with a safe error JSON.

**Setup (IMPORTANT)**

Step 1 — Get your webhook URL (required)
After importing this template, your webhook URL will be different for each workspace:
- Open Webhook1
- Copy the Production URL for real usage (the Test URL is only for editor debugging)

Step 2 — Add your VectorPrime API key (required)
In n8n → Credentials → create/select Header Auth:
- Header Name: Authorization
- Header Value: Bearer YOUR_VECTORPRIME_API_KEY

Then open VectorPrime Rank (HTTP) and select that credential. Secrets are NOT stored in this template; each user supplies their own key via Credentials.

**Input payload this workflow expects**
This workflow expects:
- decision_id (string)
- prompt (string)
- options (array of objects)

options must be a real JSON array, like:

[ { "id": "a", "label": "Option A" }, { "id": "b", "label": "Option B" } ]

If you send fewer than 2 options, the workflow responds with:
- ok: false
- error: not_enough_options

Example JSON payload (copy/paste):

{ "decision_id": "REQ-1007", "prompt": "Customer cannot login", "options": [ { "id": "restart", "label": "Restart auth service" }, { "id": "rollback", "label": "Rollback last deploy" } ] }
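The Normalize step can be sketched as a single function: validate the incoming payload and build vp_payload, returning the documented not_enough_options error when fewer than two options are supplied. Field names follow the payload spec above; the exact node implementation may differ.

```javascript
// Sketch of the Normalize step: payload validation plus vp_payload assembly.
function normalize(body) {
  const options = Array.isArray(body.options) ? body.options : [];
  if (options.length < 2) {
    return { ok: false, error: "not_enough_options" };
  }
  return {
    ok: true,
    vp_payload: {
      decision_id: String(body.decision_id || ""),
      prompt: String(body.prompt || ""),
      options,
    },
  };
}

const bad = normalize({ decision_id: "REQ-1", prompt: "x", options: [{ id: "a" }] });
const good = normalize({
  decision_id: "REQ-1007",
  prompt: "Customer cannot login",
  options: [
    { id: "restart", label: "Restart auth service" },
    { id: "rollback", label: "Rollback last deploy" },
  ],
});
```

Returning the same `{ ok, error }` shape on both branches keeps the downstream IF node and the error responder trivially simple.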
by Hร n Thiรชn Hแบฃi
This workflow helps SEO professionals and website owners automate the tedious process of monitoring and indexing URLs. It fetches your XML sitemap, filters for recent content, checks the current indexing status via Google Search Console (GSC), and automatically submits non-indexed URLs to the Google Indexing API. By keeping a local submission state (Static Data), it uses your API quota efficiently and prevents redundant submissions within a 7-day window.

**Key Features**
- Smart Sitemap Support: Handles both standard sitemaps and sitemap indexes (nested sitemaps).
- Status Check: Uses GSC URL Inspection to verify that a page is truly missing from Google's index before taking action.
- Quota Optimization: Filters content by lastmod date and tracks submission history to stay within Google's API limits.
- Rate Limiting: Includes built-in batching and delays to comply with Google's API throughput requirements.

**Prerequisites**
- Google Search Console API: Enabled in your Google Cloud Console.
- Google Indexing API: Enabled for instant indexing.
- Service Account: A Service Account JSON key with "Owner" permissions on your GSC property.

**Setup steps**

1. Configure Google Cloud Console
- Create Project: Go to the Google Cloud Console and create a new project.
- Enable APIs: Search for and enable both the Google Search Console API and the Web Search Indexing API.
- Create Service Account: Navigate to APIs & Services > Credentials > Create Credentials > Service Account. Fill in the details and click Done.
- Generate Key: Select your service account > Edit service account > Keys > Add Key > Create new key > JSON. Save the downloaded file securely.

2. Set Up Credentials in n8n
- New Credential: In n8n, go to Create credentials > Google Service Account API.
- Input Data: Paste the Service Account Email and Private Key from the downloaded JSON file (in the JSON file, the Service Account Email is `client_email` and the Private Key is `private_key`).
- HTTP Request Setup: Enable "Set up for use in HTTP Request node".
- Scopes: Enter exactly https://www.googleapis.com/auth/indexing https://www.googleapis.com/auth/webmasters.readonly into the Scopes field.
- GSC Permission: Add the Service Account email to your Google Search Console property as an Owner via Settings > Users and permissions > Add user.

3. Workflow Configuration
- Configuration Node: Open the Configuration node and enter your Sitemap URL and GSC Property URL. If your property type is URL prefix, the URL must end with a forward slash (/). Example: https://hanthienhai.com/
- Link Credentials: Update the credentials in both the "GSC: Inspect URL Status" and "GSC: Request Indexing" nodes with the service account created in Step 2.

4. Schedule & Activate
- Set Schedule: Adjust the Schedule Trigger node to your preferred execution frequency.
- Activate: Toggle the workflow to Active to start the automation.

**Questions or Need Help?**
For setup assistance, customization, or workflow support, feel free to contact me at admin@hanthienhai.com
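The 7-day deduplication against the local submission state can be sketched as follows. In a real n8n Code node the store would come from `$getWorkflowStaticData('global')`; here a plain object stands in for it, and the `submitted` key name is an assumption, not the template's exact field.

```javascript
// Sketch of the 7-day submission window: skip URLs submitted recently,
// record new submissions so the next run skips them too.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function urlsToSubmit(candidates, staticData, now = Date.now()) {
  staticData.submitted = staticData.submitted || {}; // url -> last submission time (ms)
  return candidates.filter((url) => {
    const last = staticData.submitted[url];
    if (last && now - last < SEVEN_DAYS_MS) return false; // still within the window
    staticData.submitted[url] = now; // record the new submission
    return true;
  });
}

const store = { submitted: { "https://example.com/old": Date.now() - 1000 } };
const toSubmit = urlsToSubmit(
  ["https://example.com/old", "https://example.com/new"],
  store
);
```

Because Static Data persists between executions of an active workflow, this check survives restarts without any external database.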
by EoCi - Mr.Eo
**What This Does**
Automatically finds PDF files in Google Drive, extracts their text content, and formats the output into a clean JSON object.

**How It Works**
1. Manual Trigger starts the process.
2. Find File: the Google Drive node finds the PDF file(s) in a specified folder and downloads them.
3. Extract Raw Text: the Extract From File node pulls the text content from the retrieved file(s).
4. Output Clean Data: the Code node refines the extracted content and runs custom code for cleaning and final formatting.

**Setup Guidelines**

Setup Requirements:
- **Google Drive Account**: A Google Drive with an empty folder, or a folder containing the PDF file(s) you want to process.
- **API Keys**: Gemini, Google Drive.

Setup steps (setup time: under 5 minutes):
1. Add Credentials in n8n: Ensure your Google Drive OAuth2 and Google Gemini (PaLM) API credentials are created and connected. Go to Credentials > New to add them if you haven't created them yet.
2. Configure the Search Node (Get PDF Files/File): Open the node and select your Google Drive credential. In the "Resource" field, choose File/Folder. In the "Search Method" field, select "Search File/Folder Name", and in "Search Query" type `*.pdf`. Add two filters: in the "Folder" filter, click the dropdown, choose "From List", and connect to the folder you created in your Google Drive; in the "What to Search" filter, select "file". Optionally, under "Options", click "Add option" and choose "ID" and "Name".
3. Define Extraction Rules (Extract Files/File's Data): Open the node and, in the dropdown below the "Operation" section, choose "Extract From PDF". In the "Input Binary Field" section, keep the default value, `data`.
4. Clean & Format Data (optional): Adjust the "Get PDF Data Only" node to keep only the fields you need and give them friendly names. Modify the "Data Parser & Cleaner" node if you need a custom transformation.
5. Activate and Run: Save and activate the workflow. Click "Execute Workflow" to run it manually and check the output.

That's it! Once configured, this workflow becomes your personal data assistant. Run it anytime you need to extract information quickly and accurately, saving you hours of manual work and ensuring your data is always ready to use.
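The cleaning step in the "Data Parser & Cleaner" node can be sketched like this. The input field name (`text`) matches the default output of the Extract From File node, but the surrounding item shape is illustrative.

```javascript
// Sketch of the PDF text cleanup: normalize whitespace and keep only
// friendly-named fields for the final JSON output.
function cleanPdfItem(item) {
  const raw = item.text || "";
  return {
    fileName: item.fileName || "unknown.pdf",
    text: raw
      .replace(/\r\n/g, "\n")     // normalize line endings
      .replace(/[ \t]+/g, " ")    // collapse runs of spaces/tabs
      .replace(/\n{3,}/g, "\n\n") // squeeze excessive blank lines
      .trim(),
    characters: raw.trim().length,
  };
}

const cleaned = cleanPdfItem({
  fileName: "invoice.pdf",
  text: "Invoice   #42\r\n\r\n\r\nTotal:\t 99.00  ",
});
```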
by Wessel Bulte
**TenderNed Public Procurement**

**What This Workflow Does**
This workflow automates the collection of public procurement data from TenderNed (the official Dutch tender platform). It:
- Fetches the latest tender publications from the TenderNed API
- Retrieves detailed information in both XML and JSON formats for each tender
- Parses and extracts key information like organization names, titles, descriptions, and reference numbers
- Filters results based on your custom criteria
- Stores the data in a database for easy querying and analysis

**Setup Instructions**
This template comes with sticky notes providing step-by-step instructions in Dutch and various query options you can customize.

Prerequisites:
- TenderNed API access - register at TenderNed for API credentials

Configuration steps:
1. Set up TenderNed credentials: Add HTTP Basic Auth credentials with your TenderNed API username and password, then apply these credentials to the three HTTP Request nodes: "Tenderned Publicaties", "Haal XML Details", and "Haal JSON Details".
2. Customize filters: Modify the "Filter op ..." node to match your specific requirements. Examples: specific organizations, contract values, regions, etc.

**How It Works**
- Step 1: Trigger - The workflow can be triggered either manually for testing or automatically on a daily schedule.
- Step 2: Fetch Publications - Makes an API call to TenderNed to retrieve a list of recent publications (up to 100 per request).
- Step 3: Process & Split - Extracts the tender array from the response and splits it into individual items for processing.
- Step 4: Fetch Details - For each tender, the workflow makes two parallel API calls: the **XML endpoint** retrieves the complete tender documentation in XML format, and the **JSON endpoint** fetches metadata including reference numbers and keywords.
- Step 5: Parse & Merge - Parses the XML data and merges it with the JSON metadata and batch information into a single data structure.
- Step 6: Extract Fields - Maps the raw API data to clean, structured fields, including publication ID and date, organization name, tender title and description, and reference numbers (kenmerk, TED number).
- Step 7: Filter - Applies your custom filter criteria to focus on relevant tenders only.
- Step 8: Store - Inserts the processed data into your database for storage and future analysis.

**Customization Tips**

Modify API Parameters - In the "Tenderned Publicaties" node, you can adjust:
- offset: starting position for pagination
- size: number of results per request (max 100)
- additional query parameters for date ranges, status filters, etc.

Add More Fields - Extend the "Splits Alle Velden" node to extract additional fields from the XML/JSON data, such as:
- Contract value estimates
- Deadline dates
- CPV codes (procurement classification)
- Contact information

Integrate Notifications - Add a Slack, Email, or Discord node after the filter to get notified about new matching tenders.

Incremental Updates - Modify the workflow to only fetch new tenders by:
- Storing the last execution timestamp
- Adding date filters to the API query
- Only processing publications newer than the last run

**Troubleshooting**
No data returned?
- Verify that your TenderNed API credentials are correct
- Check that your filter is set up properly

Need help setting this up, or interested in a complete tender analysis solution? Get in touch on LinkedIn - Wessel Bulte
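The "Incremental Updates" tip can be sketched as a Code-node function that keeps the last run's timestamp and passes through only newer publications. In n8n this state would live in `$getWorkflowStaticData('global')`; a plain object stands in for it here, and `publicatieDatum` is an assumed field name, not guaranteed to match the TenderNed API response.

```javascript
// Sketch of incremental fetching: filter publications newer than the last
// run, then remember the current run's timestamp for next time.
function newPublications(publications, staticData, now = new Date()) {
  const lastRun = staticData.lastRun ? new Date(staticData.lastRun) : new Date(0);
  const fresh = publications.filter(
    (p) => new Date(p.publicatieDatum) > lastRun
  );
  staticData.lastRun = now.toISOString(); // remember this run for next time
  return fresh;
}

const state = { lastRun: "2024-01-10T00:00:00Z" };
const fresh = newPublications(
  [
    { id: "A", publicatieDatum: "2024-01-09T12:00:00Z" },
    { id: "B", publicatieDatum: "2024-01-11T08:00:00Z" },
  ],
  state
);
```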
by Beex
**Summary**
Automatically sync your Beex leads to HubSpot by handling both creation and update events in real time.

**How It Works**
1. Trigger Activation: The workflow is triggered when a lead is created or updated in Beex.
2. Data Transformation: The nested data structure from the Beex Trigger is flattened into a simple JSON format for easier processing.
3. Email Validation: The workflow verifies that the lead contains a valid (non-null) email address, as this field serves as the unique identifier in HubSpot.
4. Field Mapping: Configure the fields (via drag and drop) that will be used to create or update a contact in HubSpot. Important: field names must exactly match the contact property names defined in HubSpot.
5. Event Routing: The workflow routes the action based on the event type received: contact_create or contact_update.
6. Branch Selection: If the event is contact_create, the workflow follows the upper branch; otherwise, it continues through the lower branch.
7. API Request Execution: The corresponding HTTP request is executed - POST to create a new contact or PUT to update an existing one - both using the same JSON body structure.

**Setup Instructions**
1. Install Beex Nodes: Before importing the template, install the Beex trigger and node using the following package name: n8n-nodes-beex
2. Configure HubSpot Credentials: Set up your HubSpot credentials with an Access Token (typically from a private app) and Read/Write permissions for Contacts objects.
3. Configure Beex Credentials: For Beex users with platform access (for trial requests, contact frank@beexcc.com), navigate to Platform Settings → API Key & Callback, copy your API key, and paste it into the Beex Trigger node in n8n.
4. Set Up Webhook URL: Copy the Webhook URL (Test/Production) from the Beex Trigger node and paste it into the Callback Integration section in Beex. Save your changes.

**Requirements**
- **HubSpot**: An account with a Private App Token and Read/Write permissions for **Contacts** objects.
- **Beex**: An account with lead generation permissions and a Bearer Token configured in the Trigger node.
- **Event Configuration**: In the Beex platform's **API Key & Callback** section, enable the following events: "Update general and custom contact data" and "Networking".

**Customization Options**
- **Contact Filtering**: Add filters to control which Beex leads should sync to HubSpot.
- **Identifier Configuration**: By default, only leads with a valid email address are processed to ensure accurate matching in HubSpot CRM. You can modify this logic to apply additional restrictions.
- **Field Mapping**: The "Set Fields Update" node is the primary customization point. Here you can map Beex fields to HubSpot properties for both creation and update operations (see step 4 in How It Works).
- **Field Compatibility**: Ensure that Beex custom fields are compatible with HubSpot's default or custom properties; otherwise, API calls will fail due to schema mismatches.
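The email validation and event routing described above can be sketched together: skip leads without an email, then pick POST or PUT based on the Beex event type. The event and body shapes here are illustrative, not the exact Beex trigger payload or HubSpot API schema.

```javascript
// Sketch of the routing logic: upper branch (POST/create) for contact_create,
// lower branch (PUT/update) for everything else, same body structure in both.
function routeEvent(event) {
  if (!event.email) {
    return { skip: true, reason: "missing email (HubSpot unique identifier)" };
  }
  const isCreate = event.type === "contact_create";
  return {
    skip: false,
    method: isCreate ? "POST" : "PUT",
    branch: isCreate ? "upper" : "lower",
    body: { properties: { email: event.email, firstname: event.firstName || "" } },
  };
}

const created = routeEvent({ type: "contact_create", email: "a@b.com", firstName: "Ana" });
const updated = routeEvent({ type: "contact_update", email: "a@b.com" });
const skipped = routeEvent({ type: "contact_create" });
```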
by David Olusola
**Crypto + FX Micro-API (Webhook JSON)**

**Overview**
Spin up a tiny, serverless-style API from n8n that returns BTC/ETH prices and 24h changes, plus USD→EUR and USD→NGN rates, from public, no-key data sources. Ideal for dashboards, low-code apps, or internal tools that just need a simple JSON feed.

**How it works**
1. Webhook (GET /crypto-fx) - entrypoint for your client/app.
2. HTTP: ExchangeRate-API - USD-base FX rates (no API key).
3. HTTP: CoinGecko - BTC/ETH prices + 24h % change (no API key).
4. Merge - combines the payloads.
5. Code (v2) - shapes a clean JSON: btc.price, btc.change_24h, eth.price, eth.change_24h, usd_eur, usd_ngn, and ts (ISO timestamp).
6. Respond to Webhook - returns the JSON with HTTP 200.

**Setup Guide**

1. Webhook path & URL
- In the Webhook node, confirm HTTP Method = GET and Path = crypto-fx.
- Use the Test URL while building; switch to the Production URL for live usage.

2. Test the endpoint
- Curl: curl -s https://<your-n8n-host>/webhook/crypto-fx
- Browser / fetch(): fetch('https://<your-n8n-host>/webhook/crypto-fx').then(r => r.json()).then(data => console.log(data))

3. Response mapping (already wired)
- Respond to Webhook - Response Body is set to {{$json}}.
- The Code node outputs the exact JSON structure shown above, so no extra mapping is required.

**Security (recommended)**
- Add a webhook secret (query/header check in the Code node) or an IP allowlist via your reverse proxy.
- If embedding in public sites, proxy through your backend and apply rate-limit/cache headers there.

**Usage ideas**
- Frontend dashboards (Chart.js, ECharts).
- Home Assistant / Node-RED info panels.
- Google Apps Script to store the JSON into Sheets on a timer.

**Customization**
- More coins: extend the CoinGecko ids= parameter (e.g., bitcoin,ethereum,solana).
- More FX: read additional codes from fx.rates and add them to the payload.
- Timestamps: convert ts to your preferred timezone on the client side.
- CORS: if calling from browsers, add CORS headers in Respond to Webhook (options → headers).

**Troubleshooting**
- Empty/partial JSON: run the two HTTP nodes once to verify responses.
- 429s / rate limiting: add a short Wait node or cache outputs.
- Wrong URL: ensure you're using the Production URL outside the n8n editor.
- Security errors: if you add a secret, return 401 when it's missing/invalid.
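The Code (v2) step can be sketched as a merge of the two HTTP payloads into the documented response shape. The input shapes below follow CoinGecko's `/simple/price` and ExchangeRate-API's `/latest` responses as I understand them; verify them against a live run of the two HTTP nodes before relying on the field paths.

```javascript
// Sketch of the Code node: shape btc/eth prices + FX rates into the
// documented JSON (btc.price, btc.change_24h, usd_eur, usd_ngn, ts).
function shapeResponse(coingecko, fx) {
  return {
    btc: {
      price: coingecko.bitcoin.usd,
      change_24h: coingecko.bitcoin.usd_24h_change,
    },
    eth: {
      price: coingecko.ethereum.usd,
      change_24h: coingecko.ethereum.usd_24h_change,
    },
    usd_eur: fx.rates.EUR,
    usd_ngn: fx.rates.NGN,
    ts: new Date().toISOString(),
  };
}

// Mocked upstream payloads for illustration:
const payload = shapeResponse(
  {
    bitcoin: { usd: 60000, usd_24h_change: -1.2 },
    ethereum: { usd: 2400, usd_24h_change: 0.8 },
  },
  { rates: { EUR: 0.92, NGN: 1600 } }
);
```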
by V3 Code Studio
Odoo Customers API - Export to JSON or Excel provides a simple way to fetch customer records from your Odoo database and get them back either as a structured JSON response or as a downloadable Excel (.xlsx) file.

**What it does**
- Listens for HTTP GET requests on the endpoint /api/v1/get-customers.
- Checks for the required name parameter and builds a search filter automatically.
- Queries the res.partner model to return only customer contacts (is_company = false).
- Delivers results in JSON by default, or as an Excel (.xlsx) export when response_format=excel is included.

**Parameters**
- name - Required. Used for partial matching on customer names (via Odoo's "Like" filter).
- response_format - Optional. Accepts json (default) or excel.

**Examples**
- Excel: GET /api/v1/get-customers?name=Demo&response_format=excel
- JSON: GET /api/v1/get-customers?name=Demo&response_format=json

**Default fields**
display_name, name, email, phone, mobile, parent_id, company_id, country_code, country_id

**Setup**
1. Open the Odoo node and connect your Odoo API credentials.
2. Adjust the fieldsList in the node if you want to include more data fields (e.g., address, city, or VAT).
3. Trigger the flow from its webhook URL or run it manually inside n8n to test the output.

**Notes**
Built and tested for n8n v1.108.2+
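The parameter check and automatic filter building can be sketched as follows. The domain tuples mirror Odoo's ORM filter syntax (`[field, operator, value]`); the error shape is illustrative rather than the template's exact response.

```javascript
// Sketch of the request validation + search-domain builder for
// /api/v1/get-customers: require `name`, default response_format to json,
// and restrict res.partner results to customer contacts.
function buildRequest(query) {
  if (!query.name) {
    return { error: { status: 400, message: "Missing required parameter: name" } };
  }
  const format = query.response_format === "excel" ? "excel" : "json";
  return {
    format,
    domain: [
      ["name", "like", query.name], // partial match on customer name
      ["is_company", "=", false],   // customer contacts only
    ],
  };
}

const ok = buildRequest({ name: "Demo", response_format: "excel" });
const missing = buildRequest({});
```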
by Jitesh Dugar
**AI Image Generation & CDN Hosting Automation with Gemini Imagen 3**

Streamline your creative production with this high-performance image generation and hosting pipeline. This workflow automates the transition from raw creative prompts to hosted assets, leveraging Gemini Imagen 3 for photorealistic visual generation and an automated Upload to URL sequence to deploy images directly to your CDN.

**What This Workflow Does**
This template is designed to handle two high-value commercial creative tasks via a single Webhook endpoint:

Pipeline 1: Localized Marketing Campaigns
Perfect for global brands, this path takes a master marketing image and recreates it with embedded text accurately translated into a target language. The system preserves your original branding, fonts, and visual hierarchy while ensuring localized messaging is sharp and professional.

Pipeline 2: High-Fidelity Product Mockups
Generate photorealistic e-commerce assets instantly. By providing a product type and color scheme, Imagen 3 creates studio-quality mockups with realistic textures and lighting. This is ideal for visualizing new apparel, packaging, or merchandise without a physical photoshoot.

**Key Features**
- **Automated Base64 Processing**: Includes custom logic to decode Gemini's base64 output into n8n binary files (PNG) automatically, removing manual file handling.
- **Direct CDN Deployment**: Uses the built-in Upload to URL node to PUT your generated images directly to a presigned URL, making them instantly available via a public link.
- **Intelligent Prompt Engineering**: Dedicated code nodes translate simple input parameters (like jobType or targetLanguage) into detailed, optimized prompts for the highest-quality AI output.
- **Scalable Webhook Architecture**: Centralizes your image generation tasks into a single API endpoint that routes traffic based on your specific business needs.

**Perfect For**
- **Digital Agencies**: Rapidly producing localized ad variants for international clients.
- **E-commerce Store Owners**: Visualizing custom products or "on-demand" merchandise.
- **Social Media Managers**: Creating consistent, high-quality visual content for daily posts.
- **Product Designers**: Prototyping colorways and branding on various item types.

**What You'll Need**

Required Integrations:
- **Google AI (Gemini) API Key**: Required for access to the Imagen 3.0 model.
- **CDN/Storage Provider**: Access to a service (such as AWS S3 or Google Cloud Storage) that provides presigned PUT URLs for image hosting.

**Configuration Steps**
1. API Credentials: Set up an HTTP Header Auth credential named "Google AI Header Auth" using your key from AI Studio.
2. Endpoint Setup: The template is pre-configured to use the imagen-3.0-generate-001 predict endpoint.
3. URL Mapping: After import, update the Upload to URL nodes and response nodes with your specific CDN domain and presigned URL logic.

Ready to automate your creative assets? Import this template and connect your Gemini API key to start generating and hosting professional images in seconds!
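The "Automated Base64 Processing" feature can be sketched as a Code-node helper that turns a base64 string (how Imagen returns image bytes) into a binary buffer plus metadata. Node's `Buffer` handles the decoding; the surrounding item shape is illustrative rather than n8n's exact binary format.

```javascript
// Sketch of base64 -> binary conversion for a generated PNG: decode the
// string into raw bytes and attach the metadata a downstream PUT needs.
function base64ToBinaryItem(base64Png, fileName = "generated.png") {
  const data = Buffer.from(base64Png, "base64"); // raw PNG bytes
  return {
    fileName,
    mimeType: "image/png",
    byteLength: data.length,
    data,
  };
}

// A tiny stand-in payload is enough to demonstrate the round trip:
const item = base64ToBinaryItem(Buffer.from("hello png").toString("base64"));
```

In a real n8n Code node you would assign the buffer to the item's `binary` property (via `this.helpers.prepareBinaryData` or equivalent) so the Upload to URL node can PUT it directly.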