by Muh Resky Adiansyah
# Meta Ads Insights to Google Sheets (Backfill & Weekly Sync ETL)

This workflow provides a structured way to extract Meta Ads performance data and store it in Google Sheets for reporting, dashboarding, or further analysis. It is designed as a lightweight, reliable ETL pipeline focused on stability, clarity, and ease of use, rather than a full data warehouse solution.

## What This Workflow Does

At a high level, the system:

- Pulls Meta Ads Insights data via the API
- Supports both historical backfill and automated incremental sync
- Splits large date ranges into manageable weekly chunks
- Handles pagination and retries automatically
- Filters out zero-spend records before storage
- Stores clean, structured data in Google Sheets
- Logs skipped or empty responses for traceability

## Architecture Overview

**Core Components**

- n8n
- Meta Ads API
- Google Sheets

**Primary Data Outputs**

- Account_A → campaign-level data (weekly)
- Account_B → ad-level data (daily breakdown)
- Account_A_Log / Account_B_Log → logging for skipped or empty responses

## End-to-End Flow

### A) Dual Entry Points

The workflow supports two execution modes:

**Historical Backfill (Manual Trigger)** is used to populate past data:

- Define start_date and end_date
- The workflow generates 7-day chunks
- Each chunk is processed sequentially

**Incremental Sync (Scheduled Trigger)** runs automatically every 7 days:

- Dynamically pulls the last 7 days
- No manual input required

### B) Period Chunking

Large date ranges are split into weekly intervals. This prevents API overload, reduces the risk of timeouts, and ensures consistent data retrieval. (A code sketch of this chunking appears at the end of this description.)

### C) Data Extraction (Per Account)

Each period is processed for two separate data streams:

- **Account A** (level: campaign, granularity: weekly)
- **Account B** (level: ad, granularity: daily, time_increment=1)

Both use pagination handling and fail-safe response handling.

### D) Response Validation

Each API response is validated: it must contain a non-empty data array, and invalid or empty responses are redirected to logging. This prevents corrupted or empty data from entering the dataset.

### E) Data Transformation

API responses are:

- Split into individual rows
- Normalized (numeric fields converted properly)
- Preserved in full structure (no schema trimming)

### F) Filtering Logic

Only meaningful data is stored: records where spend != 0 are kept, and zero-spend rows are discarded. This keeps the dataset lean and relevant for reporting.

### G) Data Loading

Valid records are appended to Google Sheets:

- Account A → campaign-level table
- Account B → ad-level table

Each run adds new rows without overwriting previous data.

### H) Logging & Traceability

If a period returns empty data or an API anomaly, the workflow logs: status, reason, account, date range, execution ID, and timestamp. This creates a lightweight audit trail for debugging and monitoring.

## Safeguards Built In

- Pagination handling (auto-follows the next page)
- Fail-safe handling for unstable API responses
- Execution-level traceability via logs
- Separation between transformation and filtering logic

## Google Sheets Schema

**Account_A / Account_B** include:

- date range (start & stop)
- account, campaign, adset, and ad identifiers
- performance metrics (spend, impressions, clicks, etc.)
- action arrays and ranking fields

**Log Sheets** columns: status, reason, account, since, until, execution_id, timestamp

## Limitations (By Design)

- Append-only system (no deduplication)
- Re-running the same period will create duplicate rows
- No transactional guarantees (a Google Sheets limitation)
- No concurrency control for parallel executions
- Not designed for real-time reporting

These constraints are intentional and keep the workflow simple and portable.

## When This Design Works Well

- Marketing reporting pipelines
- Looker Studio / dashboard data sources
- Small to medium datasets
- Teams without a data warehouse
- Lightweight ETL needs

## Setup Requirements

- Meta Ads API access (ads_read permission)
- Google Sheets (with the required tabs)
- n8n instance (cloud or self-hosted)

## Summary

This workflow favors clarity over complexity, reliability over completeness, and practical ETL over perfect data modeling. It is a solid foundation for building marketing data pipelines without heavy infrastructure.
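The 7-day chunking in step B is the heart of the backfill mode. Below is a minimal TypeScript sketch of that logic; the `since`/`until` field names mirror Meta's Insights `time_range` parameters, while the helper itself is illustrative and not copied from the workflow, which implements this inside n8n nodes.

```typescript
// Illustrative period-chunking logic: split [startDate, endDate] into
// sequential 7-day windows, clipping the final window to the end date.
function chunkDateRange(startDate: string, endDate: string): { since: string; until: string }[] {
  const chunks: { since: string; until: string }[] = [];
  let cursor = new Date(startDate);
  const end = new Date(endDate);
  while (cursor <= end) {
    const chunkEnd = new Date(cursor);
    chunkEnd.setDate(chunkEnd.getDate() + 6); // 7-day window, inclusive
    chunks.push({
      since: cursor.toISOString().slice(0, 10),
      until: (chunkEnd < end ? chunkEnd : end).toISOString().slice(0, 10),
    });
    cursor = new Date(chunkEnd);
    cursor.setDate(cursor.getDate() + 1); // next window starts the following day
  }
  return chunks;
}

// Example: chunkDateRange('2024-01-01', '2024-01-20')
// → [{ since: '2024-01-01', until: '2024-01-07' },
//    { since: '2024-01-08', until: '2024-01-14' },
//    { since: '2024-01-15', until: '2024-01-20' }]
```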
by Atik
Automate multi-document handling with AI-powered extraction that adapts to any format and organizes it instantly.

## What this workflow does

- Monitors Google Drive for new uploads (receipts, resumes, claims, physician orders, blueprints, or any doc type)
- Automatically downloads and prepares files for analysis
- Identifies the document type using Google Gemini
- Parses structured data via the trusted VLM Run node with OCR + layout parsing
- Stores records in Google Sheets — the AI Agent maps values to the correct sheet dynamically

## Setup

**Prerequisites:** Google Drive & Google Sheets accounts, VLM Run API credentials, and an n8n instance.

Install the verified VLM Run node by searching for VLM Run in the node list, then clicking Install. Once installed, you can integrate it directly for high-accuracy data extraction.

**Quick Setup:**

1. Configure Google Drive OAuth2 and select a folder for uploads.
2. Add VLM Run API credentials.
3. Create a Master Reference Google Sheet with the following structure:

| Document_Name | Spreadsheet_ID |
| ---------------------- | ----------------------------- |
| Receipt | your-receipt-sheet-id |
| Resume | your-resume-sheet-id |
| Physician Order | your-physician-order-sheet-id |
| Claims Processing | your-claims-sheet-id |
| Construction Blueprint | your-blueprint-sheet-id |

The first column holds the document type, and the second column holds the target sheet ID where extracted data should be appended.

4. In the AI Agent node, edit the agent prompt to:
   - Analyze the JSON payload from VLM Run
   - Look up the document type in the Master Reference Sheet
   - If a matching sheet exists → fetch headers, then append data accordingly
   - If headers don't exist → create them from JSON keys, then insert values
   - If no sheet exists → add the new type to the Master Reference with an empty Spreadsheet ID
5. Test with a sample upload and activate the workflow (a sketch of the lookup rule the agent follows appears at the end of this description).

## How to customize this workflow to your needs

Extend functionality by:

- Adjusting the AI Agent prompt to support any new document schema (just update field mappings)
- Adding support for multi-language OCR or complex layouts in VLM Run
- Linking Sheets data to BI dashboards or reporting tools
- Triggering notifications when new entries are stored

This workflow leverages the VLM Run node for flexible, precision extraction and the AI Agent for intelligent mapping, creating a powerful system that adapts to any document type with minimal setup changes.
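For clarity, here is a hypothetical sketch of the routing rule the agent prompt describes: resolve the incoming document type against the Master Reference rows, and fall back to registering a new type when no sheet is found. The interface and function names are illustrative, not part of the template.

```typescript
// Hypothetical routing rule over the Master Reference sheet described above.
interface MasterRow {
  Document_Name: string;  // e.g. "Receipt"
  Spreadsheet_ID: string; // target sheet ID, possibly empty for new types
}

function resolveTargetSheet(docType: string, masterRows: MasterRow[]): string | null {
  const match = masterRows.find(
    (row) => row.Document_Name.trim().toLowerCase() === docType.trim().toLowerCase(),
  );
  // null → the agent should add the new type with an empty Spreadsheet_ID
  return match && match.Spreadsheet_ID ? match.Spreadsheet_ID : null;
}
```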
by Amit Mehta
This workflow performs structured data extraction and data mining from a web page by combining the capabilities of Bright Data and Google Gemini.

## How it Works

This workflow focuses on extracting structured data from a web page using Bright Data's Web Unlocker product. It then uses n8n's AI capabilities, specifically Google Gemini Flash Exp, for information extraction and custom sentiment analysis. The results are sent to webhooks and saved as local files.

## Use Cases

- **Data Mining**: Automating the process of extracting and analyzing data from websites.
- **Web Scraping**: Gathering structured data for market research, competitive analysis, or content aggregation.
- **Sentiment Analysis**: Performing custom sentiment analysis on unstructured text.

## Setup Instructions

1. **Bright Data Credentials**: You need a Bright Data account and a Web Unlocker zone. Update the Header Auth account credentials in the Perform Bright Data Web Request node.
2. **Google Gemini Credentials**: Provide your Google Gemini (PaLM) API account credentials for the AI-related nodes.
3. **Configure URL and Zone**: In the Set URL and Bright Data Zone node, set the web URL you want to scrape and your Bright Data zone.
4. **Update Webhook**: Update the Webhook Notification URL in the relevant HTTP Request nodes.

## Workflow Logic

1. **Trigger**: The workflow is triggered manually.
2. **Set Parameters**: It sets the target URL and the Bright Data zone.
3. **Web Request**: The workflow performs a web request to the specified URL using Bright Data's Web Unlocker. The output is formatted as markdown. (A hedged sketch of this request appears at the end of this description.)
4. **Data Extraction & Analysis**: The markdown content is then processed by multiple AI nodes to extract textual data from the markdown, perform topic analysis with a structured response, and analyze trends by location and category with a structured response.
5. **Output**: The extracted data and analysis are sent to webhooks and saved as JSON files on disk.

## Node Descriptions

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger node to start the workflow. |
| Set URL and Bright Data Zone | A Set node to define the URL to be scraped and the Bright Data zone to be used. |
| Perform Bright Data Web Request | An httpRequest node that calls Bright Data's API to retrieve the content. |
| Markdown to Textual Data Extractor | An AI node that uses Google Gemini to convert markdown content into plain text. |
| Google Gemini Chat Model | A node representing the Google Gemini model used for the data extraction. |
| Topic Extractor with the structured response | An AI node that performs topic analysis and outputs the results in a structured JSON format. |
| Trends by location and category with the structured response | An AI node that analyzes and clusters emerging trends by location and category, outputting structured JSON. |
| Initiate a Webhook Notification... | These nodes send the output of the AI analysis to a webhook. |
| Create a binary file... | Function nodes that convert the JSON output into binary format for writing to a file. |
| Write the topics/trends file to disk | readWriteFile nodes that save the binary data to local files (d:\topics.json and d:\trends.json). |

## Customization Tips

- Change the web URL in the Set URL and Bright Data Zone node to scrape different websites.
- Modify the AI prompts in the AI nodes to customize the analysis (e.g., change the sentiment analysis criteria).
- Adjust the output path in the readWriteFile nodes to save the files to a different location.

## Suggested Sticky Notes for Workflow

- **Note**: "This workflow deals with the structured data extraction by utilizing Bright Data Web Unlocker Product... Please make sure to set the web URL of your interest within the 'Set URL and Bright Data Zone' node and update the Webhook Notification URL."
- **LLM Usages**: "The Google Gemini Flash Exp model is being used... Information Extraction handles the custom sentiment analysis with a structured response."

## Required Files

- 1GOrjyc9mtZCMvCr_Structured_Data_Extract,Data_Mining_with_Bright_Data&_Google_Gemini.json: the main n8n workflow export for this automation.

## Testing Tips

- Run the workflow and check the webhook to verify that the extracted data is being sent correctly.
- Confirm that the d:\topics.json and d:\trends.json files are created on disk with the expected structured data.

## Suggested Tags & Categories

Engineering, AI
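As a reference point, the Perform Bright Data Web Request node boils down to a single HTTP call. The sketch below mirrors the shape of Bright Data's Web Unlocker request API as publicly documented; treat the endpoint and field names as assumptions to verify against your own account.

```typescript
// Hedged sketch of the call behind the "Perform Bright Data Web Request" node.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`, // the Header Auth credential in n8n
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',  // from the "Set URL and Bright Data Zone" node
    url: 'https://example.com/page', // the page to scrape
    format: 'raw',
    data_format: 'markdown',         // ask for markdown output, as the workflow expects
  }),
});
const markdown = await res.text(); // handed to the Gemini extraction nodes
```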
by LeeWei
# ⚙️ Proposal Generator Template (automates proposal creation from JotForm submissions)

🧑‍💻 Author: LeeWei

## 🚀 Steps to Connect

**JotForm Setup**

- Visit JotForm to generate your API key and connect it to the JotForm Trigger node.
- Update the form field in the JotForm Trigger node with your form ID (default: 251206359432049).

**Google Drive Setup**

- Go to Google Drive and set up OAuth2 credentials ("Google Drive account") with access to the folder containing your template.
- Update the fileId field in the Google Drive node with your template file ID (default: 1DSHUhq_DoM80cM7LZ5iZs6UGoFb3ZHsLpU3mZDuQwuQ).
- Update the name field in the Google Drive node with your desired output file name pattern (default: `={{ $json['Company Name'] }} | Ai Proposal`).

**OpenAI Setup**

- Visit OpenAI and generate your API key.
- Paste this key into the OpenAI and OpenAI1 nodes under the "OpenAi account 3" credentials.
- Update the modelId field in the OpenAI1 node if needed (default: gpt-4.1-mini).

**Google Docs Setup**

- Set up OAuth2 credentials ("Google Docs account") with edit permissions for the generated documents.
- No fields need editing, as the node updates dynamically based on previous outputs.

**Google Drive2 Setup**

- Ensure the same Google Drive credentials ("Google Drive account") are used.
- No fields need editing, as the node handles PDF conversion automatically.

**Gmail Setup**

- Go to Gmail and set up OAuth2 credentials ("Gmail account").
- No fields need editing, as the node dynamically uses the prospect's email from JotForm.

## How it works

The workflow triggers on JotForm submissions, copies a Google Drive template, downloads the audio call from the submitted link, transcribes it with OpenAI (sketched below), generates a tailored proposal, updates the Google Docs copy, converts it to PDF, and emails it to the prospect.

## Set up steps

Setup time: approximately 15-20 minutes. Detailed instructions are available in sticky notes within the workflow.
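The transcription step maps to a single call to OpenAI's audio transcription endpoint. The sketch below uses the official openai SDK; the file name and model choice are illustrative, since the template configures these inside its n8n nodes.

```typescript
import fs from 'node:fs';
import OpenAI from 'openai';

// Hedged sketch of the transcription step the OpenAI node performs.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream('call-recording.mp3'), // audio downloaded from the JotForm submission
  model: 'whisper-1',                              // illustrative model choice
});
console.log(transcription.text); // fed into the proposal-generation prompt
```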
by Nansen
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This workflow listens for an incoming chat message and routes it to an AI Agent. The agent is powered by your preferred Chat Model (such as OpenAI or Anthropic) and extended with the Nansen MCP tool, which enables it to retrieve onchain wallet data, token movements, and address-level insights in real time.

The Nansen MCP tool uses HTTP Streamable transport and requires API key authentication via Header Auth. Read the documentation: https://docs.nansen.ai/nansen-mcp/overview

## Set up steps

**Get your Nansen MCP API key**

- Visit: https://app.nansen.ai/account?tab=api
- Generate and copy your personal API key.

**Create a credential for authentication**

- From the homepage, click the dropdown next to "Create Workflow" → "Create Credential".
- Select Header Auth as the method.
- Set the Header Name to: NANSEN-API-KEY
- Paste your API key into the Value field.
- Save the credential (e.g., Nansen MCP Credentials).

**Configure the Nansen MCP tool**

- Endpoint: https://mcp.nansen.ai/ra/mcp/
- Server Transport: HTTP Streamable
- Authentication: Header Auth
- Credential: select Nansen MCP Credentials
- Tools to Include: leave as All (or restrict as needed)

**Configure the AI Agent**

- Connect your preferred Chat Model (e.g., OpenAI, Anthropic) to the Chat Model input.
- Connect the Nansen MCP tool to the Tool input.
- (Optional) Add a Memory block to preserve conversational context.

**Set up the chat trigger**

- Use the "When chat message received" node to start the flow when a message is received.

**Test your setup**

Try sending prompts like:

- What tokens are being swapped by 0xabc...123?
- Get recent wallet activity for this address.
- Show top holders of token XYZ.

A raw HTTP probe of the endpoint is sketched below if you want to verify the credential outside n8n.
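The hedged probe below sends an MCP initialize request with the same NANSEN-API-KEY header the credential encodes. The JSON-RPC body follows the generic MCP handshake for the Streamable HTTP transport and is illustrative, not an official Nansen client; the protocol version is an assumption.

```typescript
// Hedged connectivity probe of the Nansen MCP endpoint.
const res = await fetch('https://mcp.nansen.ai/ra/mcp/', {
  method: 'POST',
  headers: {
    'NANSEN-API-KEY': process.env.NANSEN_API_KEY!, // value from the Header Auth credential
    'Content-Type': 'application/json',
    Accept: 'application/json, text/event-stream', // required by the Streamable HTTP transport
  },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'initialize',
    params: {
      protocolVersion: '2025-03-26', // assumed spec revision; adjust if the server reports another
      capabilities: {},
      clientInfo: { name: 'connectivity-probe', version: '0.1.0' },
    },
  }),
});
console.log(res.status); // 200 means the key and endpoint are wired up correctly
```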
by Stefan Joulien
## Who this template is for

This workflow is designed for teams and businesses that receive invoices in Google Drive and want to automatically extract structured financial data without manual processing. It is ideal for finance teams, operators, and founders who want a simple way to turn invoices into usable data. No accounting software is required, and the workflow works with common invoice formats such as PDFs and images.

## What this workflow does

This workflow monitors a Google Drive folder for newly uploaded invoices. When a file is detected, it uses AI to extract key invoice information such as issuer, date, total amount, taxes, currency, and description. The extracted data is automatically cleaned, structured, and stored in Google Sheets, creating a centralized and searchable invoice database (an illustrative record shape follows this description).

## How it works

- The workflow starts when a new file is added to a Google Drive folder
- Each file is processed individually and classified based on its type (PDF or image)
- The file is then downloaded and analyzed using an AI model optimized for document or image understanding
- Key invoice fields such as issuer, date, total amount, taxes, currency, and description are extracted and normalized into structured fields
- The AI output is appended to a Google Sheets table — a short wait step ensures reliable sequential writes when multiple invoices are processed at the same time

## How to set up

1. Select the Google Drive folder where invoices will be uploaded
2. Connect your OpenAI credentials for document and image analysis
3. Choose the Google Sheets file that will store the extracted invoice data
4. Activate the workflow and upload an invoice to test it

## Requirements

- Google Drive account
- Google Sheets account
- OpenAI API credentials
- n8n instance (cloud or self-hosted)

## How to customize the workflow

You can adjust the fields extracted from invoices, add validation rules, connect the data to accounting tools, or extend the workflow with reporting and notification steps.
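The normalized output can be pictured as one record per invoice. The shape below is an assumption based on the fields listed above, not the template's exact column names:

```typescript
// Illustrative shape of one row appended to Google Sheets.
interface InvoiceRecord {
  issuer: string;      // company that issued the invoice
  date: string;        // normalized date, e.g. "2024-05-31"
  totalAmount: number; // gross total
  taxes: number;       // tax amount
  currency: string;    // ISO 4217 code, e.g. "EUR"
  description: string; // short summary of the invoice contents
}
```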
by Julian Kaiser
# Scan Any Workout Plan into the Hevy App with AI

This workflow automates the creation of workout routines in the Hevy app by extracting exercise information from an uploaded PDF or image using AI.

## What problem does this solve?

Tired of manually typing workout plans into the Hevy app? Whether your coach sends them as Google Docs, PDFs, or you have a screenshot of a routine, entering every single exercise, set, and rep is a tedious chore. This workflow ends the madness. It uses AI to instantly scan your workout plan from any file, intelligently extract the exercises, and automatically create the routine in your Hevy account. What used to take 15 minutes of mind-numbing typing now happens in seconds.

## How it works

1. **Trigger**: The workflow starts when a PDF file is submitted through an n8n form.
2. **Data Extraction**: The PDF is converted to a Base64 string and sent to an AI model to extract the raw text of the workout plan.
3. **Context Gathering**: The workflow fetches a complete list of available exercises directly from the Hevy API. This list is then consolidated.
4. **AI Processing**: A Google Gemini model analyzes the extracted text, compares it against the official Hevy exercise list, and transforms the raw text into a structured JSON format that matches the Hevy API requirements.
5. **Routine Creation**: The final structured data is sent to the Hevy API to create the new workout routine in your account (see the sketch below).

## Set up steps

**Estimated set up time:** 15 minutes.

1. Configure the On form submission trigger or replace it with your preferred trigger (e.g., Webhook). Ensure it's set up to receive a file upload.
2. Add your API credentials for the AI service (in this case, OpenRouter.ai) and the Hevy app. You will need to create 'Hevy API' and OpenRouter API credentials in your n8n instance.
3. In the Structured Data Extraction node, review the prompt and the JSON schema in the Structured Output Parser. You may need to adjust the prompt to better suit the types of files you are uploading.
4. Activate the workflow. Test it by uploading a sample workout plan document.
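For orientation, the final Routine Creation step is a single authenticated POST. The sketch below follows the shape of Hevy's public API as documented at api.hevyapp.com/docs; the field names and example exercise ID are assumptions to verify there.

```typescript
// Hedged sketch of the final "Routine Creation" call.
await fetch('https://api.hevyapp.com/v1/routines', {
  method: 'POST',
  headers: {
    'api-key': process.env.HEVY_API_KEY!, // the 'Hevy API' credential in n8n
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    routine: {
      title: 'Push Day (scanned)',
      exercises: [
        {
          exercise_template_id: 'ABC123', // must come from the fetched Hevy exercise list
          sets: [{ type: 'normal', weight_kg: 60, reps: 8 }],
        },
      ],
    },
  }),
});
```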
by Oussama
This n8n template creates an intelligent Ideation Agent 🤖 that captures your ideas from text and voice notes sent via Telegram. The assistant automatically transcribes your voice memos, analyzes the content with a powerful AI, and organizes it into a structured Google Sheet database. It's the perfect workflow for capturing inspiration whenever it strikes, just by talking or typing 💡.

## Use Cases

- 🗣️ **Text-Based Capture**: Send any idea as a simple text message to your Telegram bot for instant processing.
- 🎙️ **Voice-to-Idea**: Record voice notes on the go. The workflow transcribes them into text and categorizes them automatically.
- 📂 **Automated Organization**: The AI agent intelligently structures each idea with a title, description, score, category, and priority level without any manual effort.
- 📊 **Centralized Database**: Build a comprehensive and well-organized library of all your ideas in Google Sheets, making it easy to search, review, and act upon them.

## How it works

1. **Multi-Modal Input**: The workflow starts with a Telegram Trigger that listens for incoming text messages and voice notes.
2. **Content-Based Routing**: A Switch node detects the message type. Text messages are sent directly for processing, while audio files are routed for transcription.
3. **Voice Transcription**: Voice messages are sent to the ElevenLabs API, which accurately converts the speech into text (a sketch of this call follows this description).
4. **Unified Input**: Both the original text and the transcribed audio are passed to the AI Agent in a consistent format.
5. **AI Analysis & Structuring**: An AI Agent receives the text and follows a detailed system prompt to analyze the idea and structure it into predefined fields: Idea, Idea Description, Idea Type, Score, Category, Priority, Status, and Complexity.
6. **Data Storage**: The agent uses the Google Sheets Tool (add_row_tool) to seamlessly add the fully structured idea as a new row in your designated spreadsheet.
7. **Instant Confirmation**: Once the idea is saved, the workflow sends a confirmation message back to you on Telegram, summarizing the captured idea.

## Requirements

- 🌐 A Telegram Bot API token.
- 🤖 An AI provider with API access (the template uses Azure OpenAI, but can be adapted).
- 🗣️ An ElevenLabs API key for voice-to-text transcription.
- 📝 Google Sheets API credentials to connect to your database.

## Good to know

- ⚠️ Before you start, make sure your Google Sheet has columns that exactly match the fields defined in the Agent's system prompt (e.g., "Idea ", "Idea Description ", "Idea Type", etc.). Note that some have a trailing space in the template.
- 🎤 The quality of the voice transcription depends on the clarity of your recorded audio.
- ✅ You can completely customize the AI's behavior, including all the categories, types, and scoring logic, by editing the system prompt in the Agent node.

## Customizing this workflow

- ✏️ **Modify Categories**: To change the available Idea Type, Category/Domain, or Priority Level options, simply edit the list within the Agent node's system prompt.
- 🔄 **Swap LLM**: You can easily change the AI model by replacing the Azure OpenAI Chat Model node with another one, such as the standard OpenAI node or a local AI model.
- 🔗 **Change Database**: To save ideas to a different platform, just replace the add_row_tool1 (Google Sheets Tool) with a tool for another service like Notion, Airtable, or a database.
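The voice-transcription step corresponds to one multipart request. The sketch below follows ElevenLabs' speech-to-text API as publicly documented; the Telegram file URL and model ID are illustrative placeholders.

```typescript
// Hedged sketch of the ElevenLabs transcription call for a Telegram voice note.
const telegramFileUrl = 'https://api.telegram.org/file/bot<token>/<file_path>'; // placeholder
const voiceNoteBytes = await (await fetch(telegramFileUrl)).arrayBuffer();

const form = new FormData();
form.append('model_id', 'scribe_v1'); // ElevenLabs speech-to-text model (illustrative)
form.append('file', new Blob([voiceNoteBytes], { type: 'audio/ogg' }), 'voice-note.ogg'); // Telegram voice notes are OGG/Opus

const res = await fetch('https://api.elevenlabs.io/v1/speech-to-text', {
  method: 'POST',
  headers: { 'xi-api-key': process.env.ELEVENLABS_API_KEY! },
  body: form,
});
const { text } = await res.json(); // transcribed idea, handed to the AI Agent
```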
by Marcelo Abreu
## What this workflow does

This workflow takes any website URL, extracts its HTML content, and uses an AI Agent (Claude Opus 4.6) to perform a comprehensive SEO analysis. The AI evaluates the page structure, meta tags, heading hierarchy, link profile, image optimization, and more — then generates a beautifully formatted HTML report. Finally, it converts the report into a PDF using Gotenberg, a free and open-source HTML-to-PDF engine.

Workflow steps:

1. Form submission — pass the URL you want to analyze
2. HTML extraction — fetches the full HTML content from the URL
3. AI SEO analysis — Claude Opus 4.6 analyzes the HTML and generates a detailed SEO report in HTML format
4. File conversion — converts the HTML output into a file (index.html) for Gotenberg
5. PDF generation — sends the file to Gotenberg and returns the final PDF (see the request sketch at the end of this description)

## Setup Guide

### Gotenberg — choose one of 3 options

**Option 1 — Demo URL (testing only):** Use https://demo.gotenberg.dev as the URL in the HTTP Request node. This is a public instance with rate limits — do not use it in production.

**Option 2 — Docker Compose (self-hosted n8n):** Add Gotenberg to the same docker-compose.yml where your n8n service is defined:

```yaml
services:
  # ... your n8n service ...
  gotenberg:
    image: gotenberg/gotenberg:8
    restart: always
```

Run docker compose up -d to restart your stack. Gotenberg will be available at http://gotenberg:3000 from inside your n8n container.

**Option 3 — Google Cloud Run (n8n Cloud or no Docker access):** Deploy gotenberg/gotenberg:8 as a Cloud Run service via the Google Cloud Console. Set the container port to 3000, memory to 1 GiB, and use the generated URL as your endpoint.

📖 Full Gotenberg docs: gotenberg.dev/docs

### AI Model

This workflow uses Claude Opus 4.6 via the Anthropic API. You can swap it for OpenAI, Google, or Ollama — just replace the Chat Model node.

## Requirements

- Anthropic API key (or an alternative LLM provider)
- Gotenberg instance (a demo URL is included for quick testing)
- No other external services or paid tools required

Feel free to contact me via LinkedIn if you have any questions! 👋🏻
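The PDF generation step is one multipart request against Gotenberg's Chromium route, where the file part must literally be named index.html. A minimal sketch, assuming the Docker Compose endpoint from Option 2; the report variable is a placeholder for the HTML produced by the AI step:

```typescript
// Minimal Gotenberg HTML-to-PDF call (route documented at gotenberg.dev/docs).
const reportHtml = '<html>...</html>'; // HTML produced by the AI SEO analysis step

const form = new FormData();
form.append('files', new Blob([reportHtml], { type: 'text/html' }), 'index.html'); // name must be index.html

const res = await fetch('http://gotenberg:3000/forms/chromium/convert/html', {
  method: 'POST',
  body: form,
});
const pdfBytes = new Uint8Array(await res.arrayBuffer()); // the finished SEO report PDF
```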
by Cheng Siong Chin
## How It Works

This workflow automates document authenticity verification by combining AI-based content analysis with immutable blockchain records. It is built for compliance teams, legal departments, supply chain managers, and regulators who need tamper-proof validation and auditable proof. The solution addresses the challenge of detecting forged or altered documents while producing verifiable evidence that meets legal and regulatory standards.

Documents are submitted via webhook and processed through PDF content extraction. Anthropic's Claude analyzes the content for authenticity signals such as inconsistencies, anomalies, and formatting issues, returning structured authenticity scores. Verified documents trigger blockchain record creation and publication to a distributed ledger, with cryptographic proofs shared automatically with carriers and regulators through HTTP APIs (an illustrative proof computation follows this description).

## Setup Steps

1. Configure the webhook endpoint URL for document submission
2. Add your Anthropic API key to the Chat Model node for AI analysis
3. Set up blockchain network credentials in the HTTP nodes for record preparation
4. Connect a Gmail account and specify the compliance team's email addresses
5. Customize the authenticity thresholds

## Prerequisites

Anthropic API key; blockchain network access and credentials

## Use Cases

Supply chain documentation verification for import/export compliance

## Customization

Adjust the AI prompts for industry-specific authenticity criteria

## Benefits

Eliminates manual document review time while improving fraud detection accuracy
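The template leaves the ledger integration to your HTTP nodes, but the core of any tamper-proof record is a document digest. As an illustration only (not taken from the workflow), the snippet below computes the kind of SHA-256 proof an on-chain record could anchor:

```typescript
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

// A SHA-256 digest changes completely if even one byte of the PDF is altered,
// which is what makes an on-chain record of it tamper-evident.
const pdfBytes = readFileSync('document.pdf');
const proof = createHash('sha256').update(pdfBytes).digest('hex');
console.log(proof); // the value a blockchain record would anchor and share with regulators
```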
by Pixcels Themes
## Who's it for

This template is ideal for ecommerce founders, dropshippers, Shopify store owners, product managers, and agencies who want to automate product listing creation. It removes manual work by generating titles, descriptions, tags, bullet points, alt text, and SEO metadata directly from a product image and basic input fields.

## What it does / How it works

This workflow starts with a webhook that receives product information along with an uploaded image. The image is uploaded to an online image host so it can be used inside Shopify. At the same time, the image is analyzed by Google Gemini using your provided product name, material type, and details. Gemini returns structured JSON containing:

- Title
- Description
- Tags
- Bullet points
- Alt text
- SEO title
- SEO description

The workflow cleans and parses the AI output, merges it with the uploaded image URL, and constructs a complete Shopify product payload. Finally, it creates a new product in Shopify automatically using the generated content and the provided product variants, vendor, options, and product type (see the payload sketch at the end of this description).

## Requirements

- Google Gemini (PaLM) API credentials
- Shopify private access token
- Webhook endpoint for receiving data and files
- An imgbb (or any image hosting) API key

## How to set up

1. Connect your Gemini and Shopify credentials.
2. Replace the imgbb API key and configure the hosting node.
3. Provide vendor, product type, variants, and options in the webhook payload.
4. Ensure your source system sends file, product_name, material_type, and extra fields.
5. Run the webhook URL and test with a sample product.

## How to customize the workflow

- Change the AI prompt for different product categories
- Add translation steps for multi-language stores
- Add price calculation logic
- Push listings to multiple Shopify stores
- Save generated metadata into Google Sheets or Notion
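The constructed payload ultimately lands on Shopify's REST Admin product endpoint. The sketch below is an assumption of what that request looks like: the API version, SEO metafield keys, and the gemini/payload variable names are illustrative, not lifted from the template.

```typescript
// Hedged sketch of the product-creation request (Shopify REST Admin API).
declare const gemini: any;            // parsed Gemini JSON (title, description, tags, ...)
declare const payload: any;           // webhook body (vendor, product_type, variants, options)
declare const hostedImageUrl: string; // URL returned by the image host (e.g. imgbb)

await fetch('https://your-store.myshopify.com/admin/api/2024-01/products.json', {
  method: 'POST',
  headers: {
    'X-Shopify-Access-Token': process.env.SHOPIFY_TOKEN!, // private access token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    product: {
      title: gemini.title,
      body_html: gemini.description,
      tags: gemini.tags.join(', '),
      vendor: payload.vendor,
      product_type: payload.product_type,
      variants: payload.variants,
      images: [{ src: hostedImageUrl, alt: gemini.alt_text }],
      metafields: [
        { namespace: 'global', key: 'title_tag', value: gemini.seo_title, type: 'single_line_text_field' },
        { namespace: 'global', key: 'description_tag', value: gemini.seo_description, type: 'single_line_text_field' },
      ],
    },
  }),
});
```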
by Arlin Perez
🔍 **Description:** Effortlessly delete unused or inactive workflows from your n8n instance while automatically backing them up as .json files to your Google Drive. Keep your instance clean, fast, and organized — no more clutter slowing you down. This workflow is ideal for users managing large self-hosted n8n setups, or anyone who wants to maintain optimal performance while preserving full workflow backups.

✅ **What it does:**

- Accepts a full n8n workflow URL via a form
- Retrieves the workflow info automatically
- Converts the workflow's full JSON definition into a file
- Uploads that file to Google Drive
- Deletes the workflow safely using the official n8n API
- Sends a Telegram notification confirming backup and deletion

⚙️ **How it works:**

- 📝 Form – Collects the full workflow URL from the user
- 🔍 n8n Node (Get Workflow) – Uses the URL to fetch workflow details
- 📦 Code Node ("JSON to File") – Converts the workflow JSON into a properly formatted .json file with UTF-8 encoding, ready to be uploaded to Google Drive (a sketch of this node appears below)
- ☁️ Google Drive Upload – Uploads the .json backup file to your selected Drive folder
- 🗑️ n8n Node (Delete Workflow) – Deletes the workflow from your instance using its ID
- 📬 Telegram Notification – Notifies you that the workflow was backed up and deleted, showing its title, ID, and the date

📋 **Requirements:**

- Google Drive connected to your n8n account
- Telegram Bot connected to n8n
- An n8n instance with API access (self-hosted or Cloud)
- Your n8n API key (create one in the settings)

🛠️ **How to Set Up:**

- ✅ Add your Google Drive credentials
- ✅ Add your Telegram Bot credentials
- 🧾 In the "JSON to File" Code node, no additional setup is required — it automatically converts the workflow JSON into a downloadable .json file using the correct encoding and filename format.
- ☁️ In the Google Drive node, set Binary Property: data and Folder ID: your target folder in Google Drive
- 🔑 Create a new credential for the n8n node using API Key: your personal n8n API key, and Base URL: your full n8n instance API path (e.g. https://your-n8n-instance.com/api/v1)
- ⚙️ Use this credential in both the Get Workflow and Delete Workflow n8n nodes
- 📬 In the Telegram node, use this message template: 🗑️ Workflow "{{ $json.name }}" (ID: {{ $json.id }}) was backed up to Google Drive and deleted from n8n. 📅 {{ $now }}

🔒 **Important:** This workflow backs up the entire workflow data to Google Drive. Be careful with the permissions of your Google Drive folder and avoid sharing it publicly, as the backups may contain sensitive information. Proper security and access control are essential to protect your data.

🚀 Activate the workflow and you're ready to safely back up and remove workflows from your n8n instance.
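For reference, the "JSON to File" Code node boils down to the standard n8n pattern of returning a base64 binary property named data. A minimal sketch; the body is plain JavaScript as run inside a Code node, and the exact filename format may differ from the template's:

```typescript
// Runs inside the n8n Code node ("JSON to File"); $input is an n8n global.
// The binary property is named `data` to match the Google Drive node setup above.
const workflow = $input.first().json;

return [
  {
    json: { name: workflow.name, id: workflow.id },
    binary: {
      data: {
        data: Buffer.from(JSON.stringify(workflow, null, 2), 'utf-8').toString('base64'),
        mimeType: 'application/json',
        fileName: `${workflow.name}-${workflow.id}.json`,
      },
    },
  },
];
```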