by Davide
This workflow automates the creation of short videos from multiple image references (up to 7 images). It uses the "Vidu Reference to Video" model, a video generation API, to transform a user-provided prompt and image set into a consistent, AI-generated video, and then uploads the result to TikTok and YouTube. The process is initiated via a user-friendly web form.

Advantages

✅ **Consistent Video Creation:** Uses multiple reference images to maintain subject consistency across frames.
✅ **Easy Input:** Just a simple form with a prompt and image URLs.
✅ **Automation:** No manual waiting: the workflow polls the status until the video is ready.
✅ **SEO Optimization:** Automatically generates a catchy, optimized YouTube title using AI.
✅ **Multi-Platform Publishing:** Uploads directly to Google Drive, YouTube, and TikTok in one flow.
✅ **Time Saving:** Removes the repetitive tasks of video generation, download, and manual uploading.
✅ **Scalable:** Can run periodically or on demand; perfect for content creators and marketing teams.
✅ **UGC & Social Media Ready:** Designed for creating viral short videos optimized for platforms like TikTok and YouTube Shorts.

How It Works

- **Form Trigger:** A user submits a web form with two key pieces of information: a text prompt describing the desired video and a list of reference images (URLs separated by commas or new lines).
- **Data Processing:** The workflow converts the submitted image URLs from a text string into a proper array for the AI API.
- **AI Video Generation:** The processed data (prompt and image array) is sent to the Fal.ai VIDU API endpoint (reference-to-video) to start the video generation job. This node returns a request_id.
- **Status Polling:** The workflow enters a loop that periodically checks the status of the generation job using the request_id: it waits 60 seconds, checks whether the status is "COMPLETED", and if not, waits and checks again.
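The data-processing and polling steps above can be sketched in plain Python. This is a minimal illustration, not the workflow's actual node code; the interval and status value follow the description, everything else is generic:

```python
import time


def parse_image_urls(raw: str) -> list[str]:
    """Split the form's image field (comma- or newline-separated) into a clean URL array."""
    parts = raw.replace("\n", ",").split(",")
    return [p.strip() for p in parts if p.strip()]


def poll_until_complete(check_status, interval_s: int = 60, max_checks: int = 30) -> bool:
    """Generic polling loop: call check_status() until it reports COMPLETED or we give up."""
    for _ in range(max_checks):
        if check_status() == "COMPLETED":
            return True
        time.sleep(interval_s)
    return False
```

In n8n the same logic is spread across a Code node (URL parsing) and a Wait + IF loop (polling).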
- **Result Retrieval:** Once the video is ready, the workflow fetches the URL of the generated video file.
- **Title Generation:** In parallel, the original user prompt is sent to an AI model (GPT-4o-mini via OpenRouter) to generate an optimized, engaging title for the social media post.
- **Upload & Distribution:** The video file is downloaded from the generated URL. A copy is saved to a specified Google Drive folder for storage, and the video, along with the AI-generated title, is automatically uploaded to YouTube and TikTok via the Upload-Post.com API service.

Set Up Steps

This workflow requires configuration and API keys from three external services to function correctly.

Step 1: Configure Fal.ai for Video Generation
- Create an account and obtain your API key.
- In the "Create Video" HTTP node, edit the "Header Auth" credentials and set the following values:
  - Name: Authorization
  - Value: Key YOUR_FAL_API_KEY (replace YOUR_FAL_API_KEY with your actual key)

Step 2: Configure Upload-Post.com for Social Media Uploads
- Get an API key from your Upload-Post "Manage API Keys" dashboard (10 free uploads per month).
- In both the "HTTP Request" (YouTube) and "Upload on TikTok" nodes, edit their "Header Auth" credentials and set the following values:
  - Name: Authorization
  - Value: Apikey YOUR_UPLOAD_POST_API_KEY (replace YOUR_UPLOAD_POST_API_KEY with your actual key)
- Crucial: in the body parameters of both upload nodes, find the user field and replace YOUR_USERNAME with the exact name of the social media profile you configured on Upload-Post.com (e.g., my_youtube_channel).

Step 3: Configure Google Drive (Optional Storage)
- The "Upload Video" node is pre-configured to save the video to a Google Drive folder named "Fal.run".
- Ensure your Google Drive credentials in n8n are valid and that you have access to this folder, or change the folderId parameter to your desired destination.

Step 4: Configure AI for Title Generation
- The "Generate title" node uses OpenAI to access the gpt-5-mini model.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by PDF Vector
Overview

Healthcare organizations face significant challenges in digitizing and processing medical records while maintaining strict HIPAA compliance. This workflow provides a secure, automated solution for extracting clinical data from various medical documents, including discharge summaries, lab reports, clinical notes, prescription records, and scanned medical images (JPG, PNG).

What You Can Do

- Extract clinical data from medical documents while maintaining HIPAA compliance
- Process handwritten notes and scanned medical images with OCR
- Automatically identify and protect PHI (Protected Health Information)
- Generate structured data from various medical document formats
- Maintain audit trails for regulatory compliance

Who It's For

Healthcare providers, medical billing companies, clinical research organizations, health information exchanges, and medical practice administrators who need to digitize and extract data from medical records while maintaining HIPAA compliance.

The Problem It Solves

Manual medical record processing is time-consuming, error-prone, and creates compliance risks. Healthcare organizations struggle to extract structured data from handwritten notes, scanned documents, and various medical forms while protecting PHI. This template automates the extraction process while maintaining the highest security standards for Protected Health Information.
Setup Instructions

1. Configure Google Drive credentials with proper medical record access controls
2. Install the PDF Vector community node from the n8n marketplace
3. Configure PDF Vector API credentials with HIPAA-compliant settings
4. Set up secure database storage with encryption at rest
5. Define PHI handling rules and extraction parameters
6. Configure audit logging for regulatory compliance
7. Set up integration with your Electronic Health Record (EHR) system

Key Features

- Secure retrieval of medical documents from Google Drive
- HIPAA-compliant processing with automatic PHI masking
- OCR support for handwritten notes and scanned medical images
- Automatic extraction of diagnoses with ICD-10 code validation
- Medication list processing with dosage and frequency information
- Lab results extraction with reference ranges and flagging
- Vital signs capture and normalization
- Complete audit trail for regulatory compliance
- Integration-ready format for EHR systems

Customization Options

- Define institution-specific medical terminology and abbreviations
- Configure automated alerts for critical lab values or abnormal results
- Set up custom extraction fields for specialized medical forms
- Implement medication interaction warnings and contraindication checks
- Add support for multiple languages and international medical coding systems
- Configure integration with specific EHR platforms (Epic, Cerner, etc.)
- Set up automated quality assurance checks and validation rules

Implementation Details

The workflow uses advanced AI with medical domain knowledge to understand clinical terminology and extract relevant information while automatically identifying and protecting PHI. It processes various document formats, including handwritten prescriptions, lab reports, discharge summaries, and clinical notes. The system maintains strict security protocols with encryption at rest and in transit, ensuring full HIPAA compliance throughout the processing pipeline.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
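To give a concrete picture of what "automatic PHI masking" means, here is a deliberately minimal, hypothetical sketch. A real HIPAA pipeline needs far more than this (names, addresses, dates of birth, free-text NER, audit logging); the patterns below only illustrate the redaction idea:

```python
import re

# Hypothetical patterns for three common PHI identifiers.
# A production system would cover all 18 HIPAA identifier categories.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}


def mask_phi(text: str) -> str:
    """Replace matched identifiers with labeled redaction markers."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

In the actual template this responsibility sits with the PDF Vector extraction step, not a regex pass; the sketch just shows why masking must happen before data leaves the secure boundary.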
by Khairul Muhtadin
The Prompt Converter workflow tackles the challenge of turning your natural language video ideas into perfectly formatted JSON prompts tailored for Veo 3 video generation. By leveraging Langchain AI nodes and Google Gemini, this workflow automates and refines your input to help you create high-quality videos faster and with more precision. Think of it as your personal video prompt translator that speaks fluent cinematic!

💡 Why Use Prompt Converter?

- **Save time:** Automate converting complex video prompts into structured JSON, cutting manual formatting headaches and boosting productivity.
- **Avoid guesswork:** Eliminate unclear video prompt details by generating detailed, cinematic descriptions that align perfectly with Veo 3 specs.
- **Improve output quality:** Optimize every parameter for Veo 3's video generation model to get realistic and stunning results every time.
- **Gain a creative edge:** Turn vague ideas into vivid video concepts with AI-powered enhancement. Your video project's secret weapon.

⚡ Perfect For

- **Video creators:** Content developers wanting quick, precise video prompt formatting without coding hassles.
- **AI enthusiasts:** Developers and hobbyists exploring Langchain and Google Gemini for media generation.
- **Marketing teams:** Professionals creating video ads or visuals who need consistent prompt structuring that saves time.

🔧 How It Works

- ⏱ **Trigger:** The user submits a free-text prompt via message or webhook.
- 📎 **Process:** The text goes through an AI model that understands and reworks it into detailed JSON parameters tailored for Veo 3.
- 🤖 **Smart Logic:** Langchain nodes parse and optimize the prompt with cinematic details, set reasonable defaults, and structure the data precisely.
- 💌 **Output:** The refined JSON prompt is sent to Google Gemini for video generation with optimized settings.
🔐 Quick Setup

1. Import the JSON file into your n8n instance.
2. Add credentials: Azure OpenAI, Gemini API, OpenRouter API.
3. Customize: adjust prompt templates or default parameters in the Prompt converter node.
4. Test: run the workflow with sample text prompts to see videos come to life.

🧩 You'll Need

- An active n8n instance
- Azure OpenAI API
- Gemini API key
- OpenRouter API (alternative AI option)

🛠️ Level Up Ideas

- Add integration with video hosting platforms to auto-upload generated videos.

🧠 Nodes Used

- **Prompt Input** (Chat Trigger)
- **OpenAI** (Azure OpenAI GPT model)
- **Alternative** (OpenRouter API)
- **Prompt converter** (Langchain chain LLM for JSON conversion)
- **JSON parser** (structured output extraction)
- **Generate a video** (Google Gemini video generation)

Made by: Khaisa Studio
Tags: video generation, AI, Langchain, automation, Google Gemini
Category: Video Production
Need custom work? Contact me
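To make the conversion step above more concrete, here is a sketch of the kind of structured JSON the Prompt converter node produces. The field names here are illustrative assumptions, not Veo 3's actual schema; in the workflow the LLM chain and the JSON parser node decide the real shape:

```python
import json


def build_video_prompt(user_text: str, *, aspect_ratio: str = "16:9", duration_s: int = 8) -> str:
    """Assemble a structured JSON prompt from free text.

    All keys below (description, style, camera, ...) are hypothetical
    placeholders for whatever schema the Langchain chain is configured to emit.
    """
    payload = {
        "description": user_text,
        "style": "cinematic",
        "camera": {"movement": "slow dolly-in", "angle": "eye-level"},
        "aspect_ratio": aspect_ratio,
        "duration_seconds": duration_s,
    }
    return json.dumps(payload, indent=2)
```

The point of the structured output is that downstream nodes can rely on fixed keys instead of re-parsing free text.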
by Daniel
Harness OpenAI's Sora 2 for instant video creation from text or images using fal.ai's API, powered by GPT-5 for refined prompts that ensure cinematic quality. This template processes form submissions, intelligently routes to text-to-video (with mandatory prompt enhancement) or image-to-video modes, and polls for completion before redirecting to your generated clip.

📋 What This Template Does

Users submit prompts, aspect ratios (9:16 or 16:9), models (sora-2 or pro), durations (4s, 8s, or 12s), and optional images via a web form. For text-to-video, GPT-5 automatically refines the prompt for optimal Sora 2 results; image mode uses the raw input. The workflow calls one of four fal.ai endpoints (text-to-video, text-to-video/pro, image-to-video, image-to-video/pro), then loops every 60s to check status until the video is ready.

- Handles dual modes: text (with GPT-5 enhancement) or image-seeded generation
- Supports pro upgrades for higher fidelity and longer clips
- Auto-uploads images to a temporary host and polls asynchronously for hands-free results
- Redirects directly to the final video URL on completion

🔧 Prerequisites

- n8n instance with HTTP Request and LangChain nodes enabled
- fal.ai account for Sora 2 API access
- OpenAI account for GPT-5 prompt refinement

🔑 Required Credentials

fal.ai API Setup
1. Sign up at fal.ai and navigate to Dashboard → API Keys.
2. Generate a new key with "sora-2" permissions (full access recommended).
3. In n8n, create a "Header Auth" credential: name it "fal.ai", set Header Name to "Authorization" and Value to "Key [Your API Key]".

OpenAI API Setup
1. Log in at platform.openai.com → API Keys (top-right profile menu).
2. Click "Create new secret key" and copy it (store securely).
3. In n8n, add an "OpenAI API" credential: paste the key and select the GPT-5 model in the LLM node.

⚙️ Configuration Steps

1. Import the workflow JSON into your n8n instance via Settings → Import from File.
2. Assign fal.ai and OpenAI credentials to the relevant HTTP Request and LLM nodes.
3. Activate the workflow; the form URL auto-generates in the trigger node.
4. Test by submitting a sample prompt (e.g., "A cat chasing a laser") and monitor executions for video output.
5. Adjust the polling wait (60s node) for longer generations if needed.

🎯 Use Cases

- **Social Media Teams:** Generate 9:16 vertical Reels from text ideas, like quick product animations enhanced by GPT-5 for professional polish.
- **Content Marketers:** Animate uploaded images into 8s promo clips, e.g., turning a static ad graphic into a dynamic story for email campaigns.
- **Educators and Trainers:** Create 4s explainer videos from outlines, such as historical reenactments, using pro mode for detailed visuals.
- **App Developers:** Embed as a backend service to process user prompts into Sora 2 videos on demand for creative tools.

⚠️ Troubleshooting

- **API quota exceeded:** Check the fal.ai dashboard for usage limits; upgrade to the pro tier or extend polling waits.
- **Prompt refinement fails:** Ensure the GPT-5 credential is set and the output matches the JSON schema; test the LLM node independently.
- **Image upload errors:** Confirm the file is JPG/PNG under 10MB; verify the tmpfiles.org endpoint with a manual curl test.
- **Endless polling loop:** Add an IF node after 10 checks to time out; increase the wait to 120s for 12s pro generations.
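The routing between the four endpoints described above can be sketched as a small decision function. The exact fal.ai endpoint slugs are an assumption based on the names in this template; check fal.ai's model pages for the real paths before relying on them:

```python
def pick_endpoint(has_image: bool, model: str) -> str:
    """Route a form submission to one of the four fal.ai Sora 2 endpoints.

    `model` mirrors the form's choice of "sora-2" or "pro"; the base path
    "fal-ai/sora-2" is an assumed slug, not verified against fal.ai docs.
    """
    base = "fal-ai/sora-2"
    mode = "image-to-video" if has_image else "text-to-video"
    suffix = "/pro" if model.endswith("pro") else ""
    return f"{base}/{mode}{suffix}"
```

In the workflow this branch is implemented with IF/Switch nodes feeding four HTTP Request nodes; a single function makes the routing table easier to see at a glance.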
by Lucio
Automatically upload your Instagram videos to YouTube with configurable time gaps between each upload, using n8n Tables for deduplication.

How it works

1. Fetches recent Instagram posts via the Meta Graph API and filters to video content only (VIDEO/REELS)
2. Checks each video against an n8n Table to skip already-uploaded content
3. Waits a configurable delay between uploads to space out your publishing schedule
4. Processes metadata: extracts the title from the caption and converts hashtags to YouTube tags
5. Uploads to YouTube with your configured privacy, category, and safety settings
6. Records the upload in the n8n Table to prevent duplicates on future runs

Set up steps

Time estimate: 10-15 minutes

1. Create an n8n Table with two text fields: postId and youtubeId
2. Connect your Instagram credentials (Meta Developer Bearer Token)
3. Connect your YouTube OAuth2 account
4. Edit the Configuration node to set your preferred upload delay, privacy status, and category
5. Activate the workflow

Detailed setup instructions and configuration options are documented in the sticky notes inside the workflow.
Required n8n Table

| Field | Type | Purpose |
|-------|------|---------|
| postId | String | Stores the Instagram post ID to prevent re-uploading |
| youtubeId | String | Stores the resulting YouTube video ID for reference |

How to create:

1. Go to n8n Tables in your n8n instance
2. Create a new table named "Instagram To YouTube"
3. Add two columns: postId (text) and youtubeId (text)
4. Select this table in both the "Check If Already Uploaded" and "Save Upload Record" nodes

Configuration Options

Edit the Configuration node to customize:

```
{
  "includeSourceLink": true,   // Include Instagram link in description
  "waitTimeoutSeconds": 900,   // Delay between uploads (900 = 15 min)
  "maxTitleLength": 100,       // Maximum YouTube title length
  "categoryId": "24",          // YouTube category (24 = Entertainment)
  "privacyStatus": "public",   // public, private, or unlisted
  "notifySubscribers": false,  // Send notifications to subscribers
  "defaultLanguage": "en",     // Video language code
  "ageRestricted": false       // Mark as 18+ content
}
```

Key Settings Explained

| Setting | Default | Description |
|---------|---------|-------------|
| includeSourceLink | true | Set to false if your YouTube account can't add external links (unverified accounts) |
| waitTimeoutSeconds | 900 | Gap between uploads in seconds. 900 = 15 minutes, 3600 = 1 hour |
| ageRestricted | false | Set to true if your content is for mature audiences (18+) |
| notifySubscribers | false | Set to true to notify subscribers on each upload |

Requirements

- **n8n version:** 1.0+
- **Instagram:** Meta Developer account with Graph API access and a Bearer Token
- **YouTube:** Google Cloud project with YouTube Data API v3 enabled and OAuth2 credentials

Features

- Filters to VIDEO and REELS only (skips images)
- Smart title extraction from captions
- Hashtag-to-YouTube-tags conversion
- Deduplication via n8n Tables
- COPPA compliance options (madeForKids settings)
- Configurable upload delays for drip-feeding content

Category IDs Reference

| ID | Category |
|----|----------|
| 1 | Film & Animation |
| 10 | Music |
| 17 | Sports |
| 20 | Gaming |
| 22 | People & Blogs |
| 23 | Comedy |
| 24 | Entertainment |
| 25 | News & Politics |
| 27 | Education |
| 28 | Science & Technology |
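The "smart title extraction" and hashtag conversion described above can be sketched like this. It is an illustrative guess at the metadata step, not the workflow's exact Code node: the first caption line, with hashtags stripped, becomes the title (truncated to maxTitleLength), and all hashtags become YouTube tags:

```python
import re


def caption_to_metadata(caption: str, max_title_length: int = 100):
    """Derive a YouTube title and tag list from an Instagram caption."""
    # Hashtags anywhere in the caption become tags (without the '#').
    tags = re.findall(r"#(\w+)", caption)
    # The first line, minus hashtags, becomes the title.
    first_line = caption.splitlines()[0] if caption else ""
    title = re.sub(r"#\w+", "", first_line).strip()
    return title[:max_title_length], tags
```

Real captions can be empty or all-hashtags, so a production version would also need a fallback title (e.g., a date-based one).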
by InfyOm Technologies
✅ What problem does this workflow solve?

Sending a plain PDF resume doesn't stand out anymore. This workflow allows candidates to convert their resume and photo into a personalized video resume. Recruiters get a more engaging first impression, while candidates showcase their profile in a modern, impactful way.

⚙️ What does this workflow do?

- Presents a form for uploading: 📄 a resume (PDF) and 🖼 a photo (headshot)
- Extracts key details from the resume (education, experience, skills)
- Detects gender from the photo to choose a suitable voice/avatar
- Generates a script (spoken resume summary) based on the extracted information
- Uploads the photo to HeyGen to create an avatar
- Requests video generation on HeyGen using the avatar photo, gender-specific settings, and the generated script as narration
- Monitors video generation status until completion
- Stores the final video URL in a Google Sheet for easy access and tracking

🔧 Setup Instructions

Google Services
- Connect Google Sheets to n8n to store records with: candidate name, resume link, video link

HeyGen Setup
- Get an API key from HeyGen.
- Configure the avatar upload endpoint (image upload) and the video generation endpoint (image ID + script).

Form Setup
- Use the n8n Form Trigger to allow candidates to upload a resume (PDF) and a photo (JPEG/PNG).

🧠 How it Works: Step-by-Step

1. Candidate Submission: a candidate fills out a form and uploads a resume (PDF) and a photo.
2. Extract Resume Data: the resume PDF is processed using OCR/AI to extract the name, experience, skills, and education highlights.
3. Gender Detection: the uploaded photo is analyzed to detect gender (used for voice/avatar selection).
4. Script Generation: based on the extracted resume info, a concise, natural script is generated automatically.
5. Avatar Upload & Video Creation: the photo is uploaded to HeyGen to create a custom avatar, and a video generation request is made using the script, the avatar (image ID), and a matching voice for the detected gender.
6. Video Status Monitoring: the workflow polls HeyGen's API until the video is ready.
7. Save Final Video URL: once complete, the video link is added to a Google Sheet alongside the candidate's details.

👤 Who can use this?

This workflow is ideal for:
- 🧑🎓 Students and job seekers looking to stand out
- 🧑💼 Recruitment agencies offering modern resume services
- 🏢 HR teams wanting engaging candidate submissions
- 🎥 Portfolio builders for professionals

🚀 Impact

Instead of a static PDF, you can now send a dynamic video resume that captures attention, adds personality, and makes a lasting impression.
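To illustrate the script-generation step, here is a deterministic stand-in that shows the shape of the narration built from the extracted resume fields. In the actual workflow an LLM writes this text; the template below is purely hypothetical:

```python
def build_script(name: str, experience_summary: str, skills: list[str]) -> str:
    """Assemble a short spoken-resume script from extracted fields.

    A placeholder for the AI script-generation step: same inputs
    (name, experience, skills), fixed wording instead of LLM output.
    """
    skill_list = ", ".join(skills)
    return (
        f"Hi, I'm {name}. {experience_summary} "
        f"My core skills include {skill_list}. "
        "I'd love to bring this experience to your team."
    )
```

Keeping the script short matters here: avatar-video APIs typically bill per second of generated footage, so a one-paragraph summary is usually the right target.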
by browseract
How it works

This workflow uses BrowserAct to run an AI-powered browser automation that collects structured product data, including image URLs and related metadata. The workflow then:

1. Parses the BrowserAct output into individual product items
2. Iterates through each product entry
3. Downloads the product image and converts it into Base64 format
4. Sends the image together with a predefined prompt to an AI video generation API
5. Polls the generation status until the video is ready
6. Downloads the generated short video file
7. Uploads both the original product image and the generated video to Google Drive

Each product is processed independently, making the workflow suitable for batch-based and scalable automation scenarios.

Set up steps

1. Connect your BrowserAct account to enable the browser-based data extraction workflow
2. Connect a Google Drive account where source images and generated videos will be stored
3. Review the input parameters provided by the BrowserAct node, such as the target URL, search keyword, or data limit
4. Adjust the product processing limit or batch size if you want to control execution time
5. Run the workflow manually once to verify the output before using it in regular automation

Additional explanations and configuration details are provided as sticky notes directly inside the workflow.

Workflow Guidance and Showcase: https://www.youtube.com/watch?v=XS5vyh-bdz0
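Step 3 above (downloading the product image and converting it to Base64) is the only byte-level transformation in the pipeline, so it is worth seeing in isolation. A minimal sketch; in n8n this is the binary data fetched from the product image URL, handled by a Code node or the HTTP Request node's binary output:

```python
import base64


def image_bytes_to_base64(data: bytes) -> str:
    """Encode raw image bytes as a Base64 ASCII string for a JSON API payload."""
    return base64.b64encode(data).decode("ascii")
```

Base64 inflates the payload by roughly a third, which is why many video-generation APIs also accept a plain image URL; check which form your chosen API expects.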
by furuidoreandoro
Automated TikTok Repurposing & Video Generation Workflow

Who's it for

This workflow is designed for content creators, social media managers, and marketers, specifically those in the career, recruitment, or "job change" (転職/就職) niches. It is ideal for anyone looking to automate the process of finding trending short-form content concepts and converting them into fresh AI-generated videos.

How it works / What it does

This workflow automates the pipeline from content research to video creation:

1. **Scrape Data:** Triggers an Apify actor (clockworks/tiktok-scraper) to search and scrape TikTok videos related to "Job Change" (転職) and "Employment" (就職).
2. **Store Raw Data:** Saves the scraped TikTok metadata (text, stats, author info) into a Google Sheet.
3. **AI Analysis & Prompting:** An AI Agent (via OpenRouter) analyzes the scraped video content and creates a detailed prompt for a new video (concept, visual cues, aspect ratio).
4. **Log Prompts:** The generated prompt is saved to a separate tab in the Google Sheet.
5. **Video Generation:** The prompt is sent to Fal AI (Veo3 model) to generate a new 8-second, vertical (9:16) video with audio.
6. **Wait & Retrieve:** The workflow waits for the generation to complete, then retrieves the video file.
7. **Cloud Storage:** Finally, it uploads the generated video file to a specific Google Drive folder.

How to set up

Credentials: configure the following in n8n:
- Apify API (currently passed via URL query params in the workflow; switching to Header Auth is recommended)
- Google Sheets OAuth2: connect your Google account
- OpenRouter API: for the AI Agent
- Fal AI (Header Auth): for the video generation API
- Google Drive OAuth2: for uploading the final video

Google Sheets: create a spreadsheet, note the documentId, and update the Google Sheets nodes. Ensure you have the necessary sheet names (e.g., "シート1" for raw data, "生成済み" for prompts) and columns mapped.

Google Drive: create a destination folder and update the Upload file node with the correct folderId.

Apify: update the token in the HTTP Request and HTTP Request1 URLs with your own Apify API token.

Requirements

- **n8n version:** 1.x or higher (the workflow uses version 4.3 nodes)
- **Apify account:** with access to clockworks/tiktok-scraper and sufficient credits
- **Fal.ai account:** with credits for the fal-ai/veo3 model
- **OpenRouter account:** with credits for the selected LLM
- **Google Workspace:** access to Drive and Sheets

How to customize the workflow

- **Change the Niche:** Update the searchQueries JSON body in the first **HTTP Request** node (e.g., change "転職" to "Cooking" or "Fitness").
- **Adjust AI Logic:** Modify the **AI Agent** system prompt to change the style, tone, or structure of the video prompts it generates.
- **Video Settings:** In the **Fal Submit** node, adjust bodyParameters to change the duration (e.g., 5s), aspect ratio (e.g., 16:9), or disable audio.
- **Scale:** Increase the amount in the **Limit** node to process more than one video per execution.
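For the "Change the Niche" customization, the request body you edit looks roughly like the following. The searchQueries key comes from this template's own description; resultsPerPage is an assumed example key, so verify field names against the clockworks/tiktok-scraper input schema on Apify before changing them:

```python
import json


def build_scraper_input(queries: list[str], results_per_query: int = 5) -> str:
    """Sketch of the JSON body sent to the Apify actor in the first HTTP Request node."""
    body = {
        "searchQueries": queries,          # the niche keywords to search on TikTok
        "resultsPerPage": results_per_query,  # assumed limit field; check the actor's schema
    }
    return json.dumps(body, ensure_ascii=False)
```

Swapping the niche is then a one-line change, e.g. `build_scraper_input(["Cooking", "Fitness"])`.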
by gotoHuman
Collaborate with an AI Agent on a joint document, e.g. for creating your content marketing strategy, a sales plan, project status updates, or market analysis. The AI Agent generates markdown text that you can review and edit in gotoHuman; only then is the existing Google Doc updated. In this example we use AI to update our company's content strategy for the next quarter.

How It Works

- The AI Agent has access to other documents that provide enough context to write the content strategy. We ask it to generate the text in markdown format.
- To ensure our strategy document is not changed without our approval, we request a human review using gotoHuman. There the markdown content can be edited and properly previewed.
- Our workflow resumes once the review is completed. We check whether the content was approved and then write the (potentially edited) markdown to our Google Docs file via the Google Drive node.

How to set up

1. Most importantly, install the verified gotoHuman node before importing this template! (Just add the node to a blank canvas before importing. Works with n8n Cloud and self-hosted.)
2. Set up your credentials for gotoHuman, OpenAI, and Google Docs/Drive.
3. In gotoHuman, select and create the pre-built review template "Strategy agent" or import the ID: F4sbcPEpyhNKBKbG9C1d
4. Select this template in the gotoHuman node.

Requirements

You need accounts for:
- gotoHuman (human supervision)
- OpenAI (doc writing)
- Google Docs/Drive

How to customize

- Let the workflow run on a schedule, or create and connect a manual trigger in gotoHuman that lets you capture additional human input to feed your agent.
- Provide the agent with more context to write the content strategy.
- Use the gotoHuman response (or a Google Drive file change trigger) to run additional AI agents that can execute on the new strategy.
by Madame AI
Generate SEO articles from search queries to WordPress with BrowserAct

This workflow automates a programmatic SEO pipeline by turning a list of search queries into fully researched, authoritative blog posts. It scrapes search results (focusing on community insights like Reddit) for real-world data, uses AI to draft comprehensive guides, and publishes them directly to your WordPress site.

Target Audience

SEO specialists, content marketers, niche site builders, and editorial teams looking to scale content production with high-quality, researched articles.

How it works

1. **Define Topics:** The workflow begins by defining a list of target keywords or questions in a Set node (e.g., "Best automation tools").
2. **Research:** It iterates through each query using a Loop node. For each item, BrowserAct scrapes search engine results to gather raw insights, discussions, and market consensus.
3. **Draft Content:** An AI Agent (acting as a "Senior Technical Editor") analyzes the raw data and synthesizes the information into a structured, HTML-formatted article with tables, headers, and actionable advice.
4. **Publish:** The generated content is sent to WordPress to create a new post.
5. **Notify:** Once the entire batch is processed, a Slack message is sent to notify the team.

How to set up

1. **Configure Credentials:** Connect your BrowserAct, OpenRouter, WordPress, and Slack accounts in n8n.
2. **Prepare BrowserAct:** Ensure the Programmatic SEO Data Pipeline template is saved in your BrowserAct account.
3. **Set Keywords:** Open the Set queries node and update the Queries array with the list of topics you want to write about.
4. **Configure WordPress:** Open the Create a post node and ensure it is connected to your WordPress site.
5. **Configure Notification:** Open the Send completion notification node and select the Slack channel where you want to receive alerts.

Requirements

- **BrowserAct** account with the **Programmatic SEO Data Pipeline** template
- **OpenRouter** account (or credentials for a specific LLM like GPT-4o or GPT-5)
- **WordPress** account
- **Slack** account

How to customize the workflow

- **Adjust the Persona:** Modify the system prompt in the AI Agent node to change the writing style (e.g., from "Technical Editor" to "Casual Blogger" or "Sales Copywriter").
- **Add Visuals:** Insert an image generation node (like DALL-E or Stable Diffusion) before the WordPress node to create a unique featured image based on the article title.
- **Review Loop:** Instead of publishing directly, change the final step to add the draft to Google Docs or Notion for human approval.

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

Workflow Guidance and Showcase

Video: Automated Content Factory: From Reddit Data to SEO Blog Posts with n8n
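The "Publish" step above hands the WordPress node a title, body, and status. As a rough sketch of that payload (the slug rule and field names here follow the WordPress REST post shape but are assumptions about this workflow, not the node's exact mapping):

```python
import re


def build_post_payload(query: str, html_body: str) -> dict:
    """Turn a source query and generated HTML into a WordPress-style post payload."""
    # Derive a URL slug from the query: lowercase, non-alphanumerics collapsed to '-'.
    slug = re.sub(r"[^a-z0-9]+", "-", query.lower()).strip("-")
    return {
        "title": query,
        "slug": slug,
        "content": html_body,
        "status": "publish",  # switch to "draft" if you add the human-review loop
    }
```

For the "Review Loop" customization mentioned above, changing `status` to `"draft"` is usually all the payload-level change you need.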
by Jitesh Dugar
Consolidate and compress project archives for cost-optimized cloud storage

🎯 Description

Optimize your cloud storage costs by using this automation to intelligently compress and migrate aging project documentation. This workflow implements a professional data lifecycle policy by identifying "stale" files in active storage, applying high-ratio PDF compression, and migrating them to cold storage while maintaining a searchable audit trail.

A critical technical feature of this template is the Luxon-based lifecycle logic. By utilizing {{ $now.minus({ months: 6 }).toISODate() }}, the workflow dynamically filters for files that haven't been modified in over half a year. It then generates a unique archive path using {{ $now.toFormat('yyyy/MM_MMM') }}, ensuring your cold storage bucket remains perfectly indexed by year and month without any manual folder creation or renaming.

✨ How to achieve automated storage optimization

You can build an enterprise-grade archiving system with the available tools:

1. **Monitor and age-gate:** Use the Google Drive node to list project files and a Code node to compare file metadata against a 6-month "hot storage" threshold.
2. **Compress and verify:** Pass identified files through the HTML to PDF compression engine to reduce file size by up to 80% while maintaining document readability.
3. **Migrate to cold storage:** Stream the compressed binary directly to AWS S3 (or a dedicated archive folder), using dynamic naming conventions for organized retrieval.
4. **Log and notify:** Automatically alert the IT team via Slack upon batch completion, providing a report on the specific files migrated and the storage path used.

💡 Key features

- **Intelligent cost reduction:** Automatically targets large, old files for compression, significantly reducing long-term cold-storage billing.
- **Dynamic indexing:** Uses **Luxon** to build a chronological folder structure in the cloud, making multi-year archives easy to navigate.
- **Integrity assurance:** The workflow ensures files meet specific age and type criteria before moving them, preventing accidental archival of active documents.

📦 What you will need

- **Google Drive:** Your "hot" storage where active project files are kept.
- **HTML to PDF node:** Used here as the PDF compression and optimization engine.
- **AWS S3:** Your destination "cold" storage for long-term archiving.
- **Slack:** For automated reporting on storage optimization status.

Ready to optimize your cloud storage? Import this template, connect your credentials, and start saving on long-term data costs today.
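The two Luxon expressions quoted above translate directly into plain date logic. A Python sketch of the age gate and the dynamic archive path (using a 182-day approximation for Luxon's calendar-aware `minus({ months: 6 })`):

```python
from datetime import datetime, timedelta

# Approximation of Luxon's minus({months: 6}); Luxon shifts by calendar months.
SIX_MONTHS = timedelta(days=182)


def is_stale(modified: datetime, now: datetime) -> bool:
    """Mirrors {{ $now.minus({ months: 6 }).toISODate() }}:
    true if the file hasn't been modified in roughly six months."""
    return modified < now - SIX_MONTHS


def archive_path(now: datetime, filename: str) -> str:
    """Mirrors {{ $now.toFormat('yyyy/MM_MMM') }} for the cold-storage key,
    e.g. '2025/09_Sep/report.pdf'."""
    return f"{now.strftime('%Y/%m_%b')}/{filename}"
```

Because the path is derived from the run date, every batch lands in a year/month prefix automatically, which is exactly what keeps the S3 bucket navigable without manual folder management.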
by mike
This is an example of how you can make Merge by Key work. The “Data 1” and “Data 2” nodes simply provide mock data; you can replace them with your own data sources. The “Convert Data” nodes are important: they make sure that the different array items are actually treated as separate items in n8n. After that, the Merge node combines the two inputs by key and outputs the merged data.
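To see what the Merge node does conceptually, here is a plain-Python picture of a merge by key: items from both inputs that share the same key value are combined into a single item. This is an illustration of the idea, not n8n's internal implementation:

```python
def merge_by_key(data1: list[dict], data2: list[dict], key: str) -> list[dict]:
    """Combine items from two inputs whose `key` field values match."""
    index = {item[key]: dict(item) for item in data1}
    for item in data2:
        # Matching keys merge fields; unmatched data2 items pass through on their own.
        index.setdefault(item[key], {}).update(item)
    return list(index.values())
```

This mirrors why the “Convert Data” nodes matter: the merge only works if each record is a separate item with its own key field, not one big array inside a single item.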