by Gaetano Castaldo
Web-to-Odoo Lead Funnel (UTM-ready)

Create crm.lead records in Odoo from any webform via a secure webhook. The workflow validates required fields, resolves UTMs by name (source, medium, campaign), and writes standard lead fields in Odoo. Clean, portable, and production-ready.

Key features
✅ Secure Webhook with Header Auth (x-webhook-token)
✅ Required fields validation (firstname, lastname, email)
✅ UTM lookup by name (utm.source, utm.medium, utm.campaign)
✅ Clean consolidation before create (name, contact_name, email_from, phone, description, type, UTM IDs)
✅ Clear HTTP responses: 200 success / 400 bad request

Prerequisites
- Odoo with Leads enabled (CRM → Settings → Leads)
- **Odoo API Key** for your user (use it as the password)
- n8n Odoo credentials: URL, DB name, Login, API Key
- **Public URL** for the webhook (ngrok/Cloudflare/reverse proxy). Ensure WEBHOOK_URL / N8N_HOST / N8N_PROTOCOL / N8N_PORT are consistent
- **Header Auth secret** (e.g., x-webhook-token: <your-secret>)

How it works
1. Ingest – The Webhook receives a POST at /webhook(-test)/lead-webform with Header Auth.
2. Validate – An IF node checks required fields; if any are missing → respond with 400 Bad Request.
3. UTM lookup – Three Odoo getAll queries fetch IDs by name:
   - utm.source → source_id
   - utm.medium → medium_id
   - utm.campaign → campaign_id
   If a record is not found, the corresponding ID remains null.
4. Consolidate – Merge + Code nodes produce a single clean object (see the sketch below): { name, contact_name, email_from, phone, description, type: "lead", campaign_id, source_id, medium_id }
5. Create in Odoo – Odoo node (crm.lead → create) writes the lead with standard fields + UTM Many2one IDs.
6. Respond – Success node returns 200 with { status: "ok", lead_id }.

Payload (JSON)
Required: firstname, lastname, email
Optional: phone, notes, source, medium, campaign

    {
      "firstname": "John",
      "lastname": "Doe",
      "email": "john.doe@example.com",
      "phone": "+393331234567",
      "notes": "Wants a demo",
      "source": "Ads",
      "medium": "Website",
      "campaign": "Spring 2025"
    }

Quick test

    curl -X POST "https://<host>/webhook-test/lead-webform" \
      -H "Content-Type: application/json" \
      -H "x-webhook-token: <secret>" \
      -d '{"firstname":"John","lastname":"Doe","email":"john@ex.com","phone":"+39333...","notes":"Demo","source":"Ads","medium":"Website","campaign":"Spring 2025"}'

Notes
- Recent Odoo versions do not use the mobile field on leads/partners; use phone instead.
- Keep secrets and credentials out of the template; the user will set their own after import.
- If you want to auto-create missing UTM records, add an IF after each getAll and a create on the corresponding utm.* model.
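A minimal sketch of what the consolidation Code node could look like. The node names ("Webhook", "Find UTM Source", etc.) and lookup fields are assumptions; adapt them to your workflow:

```javascript
// n8n Code node (Run Once for All Items) – consolidate webform data + UTM lookups.
// Node names below are illustrative and may differ from the template's actual nodes.
const body = $('Webhook').first().json.body;

// Each getAll lookup returns at most one matching record; fall back to null if none found.
const sourceId   = $('Find UTM Source').all()[0]?.json?.id ?? null;
const mediumId   = $('Find UTM Medium').all()[0]?.json?.id ?? null;
const campaignId = $('Find UTM Campaign').all()[0]?.json?.id ?? null;

return [{
  json: {
    name: `Web lead: ${body.firstname} ${body.lastname}`,
    contact_name: `${body.firstname} ${body.lastname}`,
    email_from: body.email,
    phone: body.phone ?? null,
    description: body.notes ?? '',
    type: 'lead',
    source_id: sourceId,
    medium_id: mediumId,
    campaign_id: campaignId,
  },
}];
```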
by Nexio_2000
This n8n template demonstrates how to export all icon metadata from an Iconfinder account into an organized format with previews, names, iconset names, and tags. It generates HTML and CSV outputs.

Good to know
- Iconfinder does not provide a built-in feature for contributors to export all icon data at once, which motivated the creation of this workflow.
- The workflow exports all iconsets for the selected user account and can handle large collections.
- Preview image URLs are extracted in a consistent size (e.g., 128x128) for easy viewing.
- Basic icon metadata, including tags and iconset names, is included for reference or further automation.

How it works
1. The workflow fetches all iconsets from your Iconfinder account.
2. It loops through all your iconsets, handling pagination automatically if an iconset contains more than 100 icons.
3. Each icon is processed to retrieve its metadata, including name, tags, preview image URLs, and the iconset it belongs to.
4. An HTML file with a preview table and a CSV file with all icon details are generated.

How to use
1. Retrieve your User ID – A dedicated node in the workflow fetches your Iconfinder user ID. This ensures the workflow knows which contributor account to access.
2. Set up API access – The workflow includes a setup node where you provide your Iconfinder API key. This node passes the authorization token to all subsequent HTTP Request nodes, so you don't need to enter it manually multiple times.
3. Trigger the workflow – Start it manually or attach it to a different trigger, such as a webhook or schedule.
4. Export outputs – The workflow generates an HTML file with preview images and a CSV file containing all metadata. Both files are ready for download or further processing.

Requirements
- Iconfinder account with an API key.

Customising this workflow
- Adjust the preview size or choose which metadata to include in the HTML and CSV outputs.
- Combine with other workflows to automate asset cataloging.
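As an illustration, the CSV-building step might look roughly like this in a Code node. The incoming field names (name, iconset, tags, preview_url) are assumptions based on the metadata listed above:

```javascript
// n8n Code node – turn collected icon items into a single CSV string.
// Assumes each incoming item has: name, iconset, tags (array), preview_url.
const rows = $input.all().map(item => {
  const i = item.json;
  const cells = [i.name, i.iconset, (i.tags || []).join('|'), i.preview_url];
  // Quote each cell and escape embedded quotes so commas in names/tags stay intact.
  return cells.map(c => `"${String(c ?? '').replace(/"/g, '""')}"`).join(',');
});

const csv = ['name,iconset,tags,preview_url', ...rows].join('\n');
return [{ json: { csv } }];
```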
by Artem Makarov
About this template
This template demonstrates how to trace the observations per execution ID in Langfuse via the ingestion API.

Good to know
- Endpoint: https://cloud.langfuse.com/api/public/ingestion
- Auth is a Generic Credential Type with Basic Auth: username = your_public_key, password = your_secret_key.

How it works
1. **Trigger**: the workflow is executed by another workflow after an AI run finishes (input parameter execution_id).
2. **Remove duplicates**: ensures we only process each execution_id once (optional but recommended).
3. **Wait to get execution data**: delay (60-80 secs) so totals and per-step metrics are available.
4. **Get execution**: fetches workflow metadata and token totals.
5. **Code: structure execution data**: normalizes your run into an array of perModelRuns with model, tokens, latency, and text previews.
6. **Split Out → Loop Over Items**: iterates each run step.
7. **Code: prepare JSON for Langfuse**: builds a batch with:
   - trace-create (stable id trace-<executionId>, grouped into session-<workflowId>)
   - generation-create (model, input/output, usage, timings from latency)
8. **HTTP Request to Langfuse**: posts the batch. Optional short Wait between sends.

Requirements
- Langfuse Cloud project and API keys
- n8n instance with the HTTP Request node

Customizing
- Add span-create and set parentObservationId on the generation to nest it under spans.
- Add scores or feedback later via score-create.
- Replace the sessionId strategy (per workflow, per user, etc.).
- If some steps don't produce tokens, compute and set usage yourself before sending.
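A rough sketch of the batch the "prepare JSON for Langfuse" Code node might build. The event envelope follows the public ingestion API, but treat exact field names (and the run.* inputs assumed here) as a sketch to check against the Langfuse docs:

```javascript
// n8n Code node – build one ingestion batch per run step.
// executionId, workflowId and run are assumed to come from earlier nodes in this workflow.
const { executionId, workflowId, run } = $json;
const now = new Date().toISOString();
const eventId = (suffix) => `${executionId}-${run.index}-${suffix}`; // event ids must be unique

const batch = [
  {
    id: eventId('trace'),
    timestamp: now,
    type: 'trace-create',
    body: {
      id: `trace-${executionId}`,         // stable trace id per execution
      sessionId: `session-${workflowId}`, // group traces by workflow
      name: 'n8n-ai-run',
    },
  },
  {
    id: eventId('generation'),
    timestamp: now,
    type: 'generation-create',
    body: {
      traceId: `trace-${executionId}`,
      name: run.stepName,
      model: run.model,
      input: run.inputPreview,
      output: run.outputPreview,
      usage: { input: run.promptTokens, output: run.completionTokens, total: run.totalTokens },
      startTime: run.startTime,
      endTime: run.endTime,               // derived from the measured latency
    },
  },
];

return [{ json: { batch } }];
```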
by Masaki Go
About This Template
This workflow automatically fetches the Nikkei 225 closing price every weekday and sends a formatted message to a list of users on LINE. This is perfect for individuals or teams who need to track the market's daily performance without manual data checking.

How It Works
1. Schedule Trigger: Runs the workflow automatically every weekday at 4 PM JST (Tokyo time), just after the market closes.
2. Get Data: An HTTP Request node fetches the latest Nikkei 225 data (closing price, change, %) from a data API.
3. Prepare Payload: A Code node formats this data into a user-friendly message and prepares the JSON payload for the LINE Messaging API, including a list of user IDs.
4. Send to LINE: An HTTP Request node sends the formatted message to all specified users via the LINE multicast API endpoint.

Who It’s For
- Anyone who wants to receive daily stock market alerts.
- Teams that need to share financial data internally.
- Developers looking for a simple example of an API-to-LINE workflow.

Requirements
- An n8n account.
- A LINE Official Account & Messaging API access token.
- An API endpoint to get Nikkei 225 data. (The one in the template is a temporary example.)

Setup Steps
1. Add LINE Credentials: In the "Send to LINE via HTTP" node, edit the "Authorization" header to include your own LINE Messaging API Bearer Token.
2. Add User IDs: In the "Prepare LINE API Payload" (Code) node, edit the userIds array to add all the LINE User IDs you want to send messages to.
3. Update Data API: The URL in the "Get Nikkei 225 Data" node is a temporary example. Replace it with your own persistent API URL (e.g., from a public provider or your own server).

Customization Options
- **Change Schedule:** Edit the "Every Weekday at 4 PM JST" node to run at a different time. (Note: 4 PM JST is 07:00 UTC, which is what the Cron 0 7 * * 1-5 means.)
- **Change Message Format:** Edit the message variable inside the "Prepare LINE API Payload" (Code) node to change the text of the LINE message.
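The payload the Code node prepares for the LINE multicast endpoint looks roughly like this. The Nikkei field names (close, change, changePercent) are assumptions; adapt them to whatever your data API actually returns:

```javascript
// n8n Code node – format the Nikkei data and build the LINE multicast body.
const nikkei = $json; // e.g. { close: 38500.25, change: -120.5, changePercent: -0.31 }

const sign = nikkei.change >= 0 ? '+' : '';
const message =
  `📈 Nikkei 225 Close\n` +
  `Price: ${nikkei.close.toLocaleString()} JPY\n` +
  `Change: ${sign}${nikkei.change} (${sign}${nikkei.changePercent}%)`;

// Replace with your real recipients (LINE multicast accepts up to 500 user IDs per request).
const userIds = ['U1234567890abcdef...', 'Uabcdef1234567890...'];

return [{
  json: {
    to: userIds,
    messages: [{ type: 'text', text: message }],
  },
}];
```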
by Paul
Agent XML System Message Engineering: Enabling Robust Enterprise Integration and Automation

Why Creating System Messages in XML Is Important
XML (Extensible Markup Language) engineering is a foundational technique in modern software and system architecture. It enables the structured creation, storage, and exchange of messages—such as system instructions, configuration, or logs—by providing a human-readable, platform-independent, and machine-processable format. Here’s why this matters and how big tech companies leverage it:

Importance of XML in Engineering
- **Standardization & Interoperability:** XML provides a consistent way to model and exchange data between different software components, no matter the underlying technology. This enables seamless integration of diverse systems, both internally within companies and externally across partners or clients.
- **Traceability & Accountability:** By capturing not only the data but also its context (e.g., source, format, transformation steps), XML enables engineers to trace logic, troubleshoot issues, and ensure regulatory compliance. This is particularly crucial in sectors like finance, healthcare, and engineering where audit trails and documentation are mandatory.
- **Configuration & Flexibility:** XML files are widely used for application settings. The clear hierarchical structure allows easy updates, quick testing of setups, and management of complex configurations—without deep developer intervention.
- **Reusability & Automation:** Automating the creation of system messages or logs in XML allows organizations to reuse and adapt those messages for various systems or processes, reducing manual effort and errors while improving scalability.

How Big Tech Companies Use XML
- **System Integration and Messaging:** Large enterprises including Amazon, Google, Microsoft, and SAP use XML for encoding, transporting, and processing data between distributed systems via web services (such as SOAP and REST APIs), often at web scale.
- **Business Process Automation:** In supply chain management, e-commerce, and transactional processing, XML enables rapid, secure, and traceable information exchange—helping automate operations that cross organizational and geographical borders.
- **Content Management & Transformation:** Companies use XML to manage and deliver dynamic content—such as translations, different document layouts, or multi-channel publishing—by separating data from its presentation and enabling real-time transformations through XSLT or similar technologies.
- **Data Storage, Validation, and Big Data:** XML’s schema definitions (XSD) and well-defined structure are used by enterprises for validating and storing data models, supporting compatibility and quality across complex systems, including big data applications.

Why XML System Message Engineering Remains Relevant
> “XML is currently the most sophisticated format for distributed data — the World Wide Web can be seen as one huge XML database... Rapid adoption by industry [reinforces] that XML is no longer optional.”

It brings consistency, scalability, and reliability to how software communicates, making development faster and systems more robust. Enterprises continue to use XML alongside newer formats (like JSON) wherever rich validation, structured messaging, and backward compatibility with legacy systems are required.
In summary: XML engineering empowers organizations, especially tech giants, to build, scale, and manage complex digital ecosystems by facilitating integration, automation, traceability, and standardization of data and messages across their platforms, operations, and partners.
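To make this concrete, here is a minimal, hypothetical example of an agent system message expressed in XML, wrapped in a small JavaScript snippet so it can be reused across workflows. The element names are purely illustrative and do not follow any standard schema:

```javascript
// Illustrative only – the XML element names are an assumption, not a standard schema.
const systemMessage = `
<systemMessage version="1.0">
  <role>customer-support-agent</role>
  <instructions>
    <rule id="1">Answer only from the provided knowledge base.</rule>
    <rule id="2">Escalate billing disputes to a human agent.</rule>
  </instructions>
  <context source="crm" format="xml" retrievedAt="2025-01-15T09:00:00Z"/>
  <output format="plain-text" maxLength="500"/>
</systemMessage>`.trim();

return [{ json: { systemMessage } }];
```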
by Shahrear
Automatically process Construction Blueprints into structured Excel entries with VLM extraction

> Disclaimer: This template uses community nodes, including the VLM Run node. It requires a self-hosted n8n instance and will not run on n8n Cloud.

What this workflow does
- Monitors OneDrive for new blueprints in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for VLM analysis
- Fetches details from the construction.blueprint domain as JSON
- Appends normalized fields to an Excel sheet as a new row

Setup
Prerequisites: Microsoft account, VLM Run API credentials, OneDrive access, Excel Online, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick Setup
1. Create the OneDrive folder you want to watch and copy its Folder ID.
   - OneDrive web: open the folder in your browser, then copy the value of the id= URL parameter. It is URL-encoded.
   - Alternative in n8n: use a OneDrive node with the operation set to List to browse folders and copy the id field from the response.
2. Create an Excel sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, document_type, document_number, issue_date, author_name, drawing_title_numbers, revision_history, job_name, address, drawing_number, revision, drawn_by, checked_by, scale_information, agency_name, document_title, blueprint_id, blueprint_status, blueprint_owner, blueprint_url
3. Configure OneDrive OAuth2 for the trigger and download nodes. Use Microsoft OAuth2 in n8n, approve the requested scopes for file access and offline access when prompted, and test the connection by listing a known folder.
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node.
5. Configure Excel Online OAuth2 and set the Spreadsheet ID and target sheet tab.
6. Test by uploading a sample file to the watched OneDrive folder, then activate the workflow.

Perfect for
- Converting uploaded construction blueprint documents into clean text
- Organizing extracted blueprint details into structured sheets
- Quickly accessing key attributes from technical files
- Centralized archive of blueprint-to-text conversions

Key Benefits
- **End to end automation** from OneDrive upload to structured Excel entry
- **Accurate text extraction** of construction blueprint documents
- **Organized attribute mapping** for consistent records
- **Searchable archives** directly in Excel
- **Hands-free processing** after setup

How to customize
Extend by adding:
- Version control that links revisions of the same drawing and highlights superseded rows
- Confidence scores per extracted field with threshold-based routing to manual or AI review
- An auto-generated, human-readable summary column for quick scanning of blueprint details
- Splitting large multi-sheet PDFs into per-drawing rows with individual attributes
- Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
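A simplified sketch of how the VLM Run JSON might be flattened into the Excel row described above. The node name and blueprint field names are assumptions; match them to the actual construction.blueprint response and your own column headers:

```javascript
// n8n Code node – normalize VLM Run output + file metadata into one flat row.
const file = $('OneDrive Download').first().json; // assumed node name
const bp = $json.response ?? $json;               // assumed shape of the VLM Run extraction result

return [{
  json: {
    timestamp: new Date().toISOString(),
    file_name: file.name,
    file_id: file.id,
    mime_type: file.mimeType,
    drawing_number: bp.drawing_number ?? '',
    revision: bp.revision ?? '',
    drawn_by: bp.drawn_by ?? '',
    checked_by: bp.checked_by ?? '',
    scale_information: bp.scale_information ?? '',
    job_name: bp.job_name ?? '',
    address: bp.address ?? '',
  },
}];
```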
by John Pranay Kumar Reddy
🧩 Short Summary
Proactively alert on service endpoint changes and pod/container issues (Pending, Not Ready, restart spikes) using Prometheus metrics, formatted and sent to Slack.

🗂️ Category
DevOps / Monitoring & Observability

🏷️ Tags
kubernetes, prometheus, slack, alerting, sre, ops, kube-state-metrics

✅ Prerequisites
- Prometheus scraping kube-state-metrics v2.x.
- Slack App or Incoming Webhook (channel access).
- n8n instance with outbound access to Prometheus & Slack.

🔑 Required Credentials in n8n
- Slack: Bot OAuth (chat:write) or Incoming Webhook URL.
- (Optional) Prometheus Basic Auth (if your Prometheus needs it).

🧠 What This Template Does
- Detects pods stuck in Pending (scheduling problems like taints/affinity/capacity).
- Detects containers Not Ready (readiness probe failures).
- Detects container restart spikes over a sliding window (default 5 minutes).
- Detects service discovery changes (endpoint count diffs, current vs. previous snapshot).
- Sends clean, emoji-enhanced Slack alerts with pod/namespace/service context.
- Outputs a 5-minute summary block to reduce noise.

📣 Slack Message Style (examples)
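Example PromQL (detection rules)
The detections above map to kube-state-metrics queries along these lines. Treat the metric names and thresholds as a sketch and verify them against your kube-state-metrics version:

```javascript
// Illustrative PromQL sent by the HTTP Request nodes to the Prometheus query API.
const queries = {
  // Pods stuck in Pending (scheduling problems: taints, affinity, capacity)
  pendingPods: 'kube_pod_status_phase{phase="Pending"} == 1',

  // Containers failing readiness probes
  notReady: 'kube_pod_container_status_ready == 0',

  // Restart spikes over the last 5 minutes (threshold is an assumption)
  restartSpikes: 'increase(kube_pod_container_status_restarts_total[5m]) > 3',

  // Current endpoint count per service, compared against the previous snapshot in the workflow
  endpointCount: 'sum by (namespace, endpoint) (kube_endpoint_address_available)',
};

// Each query is issued as: GET https://<prometheus-host>/api/v1/query?query=<encoded PromQL>
```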
by Samuel Heredia
Data Extraction from MongoDB

Overview
This workflow exposes a public HTTP GET endpoint to read all documents from a MongoDB collection, with:
- Strict validation of the collection name
- Error handling with proper 4xx codes
- Response formatting (e.g., _id → id) and a consistent 2xx JSON envelope

Workflow Steps

1. Webhook Trigger: *A public GET endpoint receives requests with the collection name as a parameter.*
The workflow begins with a webhook that listens for incoming HTTP GET requests. The endpoint follows this pattern:
https://{{your-n8n-instance}}/webhook-test/{{uuid}}/:nameCollection
The :nameCollection parameter is passed directly in the URL and specifies the MongoDB collection to be queried.
Example: https://yourdomain.com/webhook-test/abcd1234/orders would attempt to fetch all documents from the orders collection.

2. Validation: *The collection name is checked against a set of rules to prevent invalid or unsafe queries.*
Before querying the database, the collection name is validated with a regular expression:
^(?!system\.)[a-zA-Z0-9._]{1,120}$
Purpose of validation:
- Blocks access to MongoDB’s reserved system.* collections.
- Prevents injection attacks by ensuring only alphanumeric characters, underscores, and dots are allowed.
- Enforces MongoDB’s length restriction (max 120 characters).
This step ensures the workflow cannot be exploited with malicious input.

3. Conditional Check: *If the validation fails, the workflow stops and returns an error message. If it succeeds, it continues.*
The workflow checks whether the collection name passes validation.
- If valid ✅: proceeds to query MongoDB.
- If invalid ❌: immediately returns a structured HTTP 400 response, adhering to RESTful standards:
  { "code": 400, "message": "{{ $json.message }}" }

4. MongoDB Query: *The workflow connects to MongoDB and retrieves all documents from the specified collection.*
To use the MongoDB node, a database connection must be configured in n8n through MongoDB Credentials in the node settings:

Create MongoDB Credentials in n8n
- Go to n8n → Credentials → New.
- Select MongoDB and fill in the following fields:
  - Host: The MongoDB server hostname or IP (e.g., cluster0.mongodb.net).
  - Port: Default is 27017 for local deployments.
  - Database: Name of the database (e.g., myDatabase).
  - User: MongoDB username with read permissions.
  - Password: Corresponding password.
  - Connection Type: Standard for most cases, or Connection String if using a full URI.
  - Replica Set / SRV Record: Enable if using MongoDB Atlas or a replica cluster.

Using a Connection String (recommended for MongoDB Atlas)
Example URI:
mongodb+srv://<username>:<password>@cluster0.mongodb.net/myDatabase?retryWrites=true&w=majority
Paste this into the Connection String field when selecting "Connection String" as the type.

Verify the Connection
After saving, test the credentials to confirm n8n can connect successfully to your MongoDB instance.

Configure the MongoDB Node in the Workflow
- Operation: Find (to fetch documents).
- Collection: Dynamic value passed from the workflow (e.g., {{$json["nameCollection"]}}).
- Query: Leave empty to fetch all documents, or define filters if needed.
Result: The MongoDB node retrieves all documents from the specified collection and passes the dataset as JSON to the next node for processing.

5. Data Formatting: *The retrieved documents are processed to adjust field names.*
By default, MongoDB returns its unique identifier as _id. To align with common API conventions, this step renames _id → id. This small transformation simplifies downstream usage, making responses more intuitive for client applications.

6. Response: *The cleaned dataset is returned as a structured JSON response to the original request.*
Clients receive a clean JSON payload with the expected format and renamed identifiers. Example response:

    [
      { "id": "64f13c1e2f1a5e34d9b3e7f0", "name": "John Doe", "email": "john@example.com" },
      { "id": "64f13c1e2f1a5e34d9b3e7f1", "name": "Jane Smith", "email": "jane@example.com" }
    ]

Workflow Summary
Webhook (GET) → Code (Validation) → IF (Validation Check) → MongoDB (Query) → Code (Transform IDs) → Respond to Webhook

Key Benefits
✅ Security-first design: prevents unauthorized access and injection attacks.
✅ Standards compliance: uses HTTP status codes (400) for invalid requests.
✅ Clean API response: transforms MongoDB’s native _id into a more user-friendly id.
✅ Scalability: ready for integration with any frontend, third-party service, or analytics pipeline.
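The _id → id transformation in the "Transform IDs" Code node can be as small as this minimal sketch; extend it if you also need to format dates or nested documents:

```javascript
// n8n Code node – rename MongoDB's _id to id on every returned document.
return $input.all().map(item => {
  const { _id, ...rest } = item.json;
  return { json: { id: String(_id), ...rest } };
});
```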
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom Code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

Key Steps:
- **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh stock data
- **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance
- **Data Processing**: Formats extracted data through a custom Code node for optimal spreadsheet compatibility and handles both single and multiple stock formats (see the sketch below)
- **Automated Storage**: Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking

Set up steps
Setup Time: 5-10 minutes
1. Configure Credentials: Set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials
2. Customize Target: Update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL)
3. Configure Schedule: Set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring
4. Map Spreadsheet: Connect to your Google Sheets document and configure column mapping for the stock data fields

Pro Tips:
- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a single stock first before scaling to multiple stocks
- Consider modifying the Code node to handle different stock symbols or add additional data fields
- Perfect for building a historical database of stock performance over time
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt
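As a reference, the Code node's formatting step might look like the following sketch. The ScrapeGraphAI output shape is an assumption; adapt the field names to what your prompt actually returns:

```javascript
// n8n Code node – normalize ScrapeGraphAI output (single stock or array) into sheet rows.
const raw = $json.result ?? $json;              // assumed location of the extraction result
const stocks = Array.isArray(raw) ? raw : [raw];

return stocks.map(stock => ({
  json: {
    symbol: stock.symbol,
    current_price: stock.current_price,
    change: stock.change,
    change_percent: stock.change_percent,
    volume: stock.volume,
    market_cap: stock.market_cap,
    timestamp: new Date().toISOString(),
  },
}));
```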
by vinci-king-01
AI-Powered Stock Tracker with Yahoo Finance & Google Sheets

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom Code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

Pre-conditions/Requirements

Prerequisites
- n8n instance (self-hosted or cloud)
- ScrapeGraphAI community node installed
- Google Sheets API access
- Yahoo Finance access (no API key required)

Required Credentials
- **ScrapeGraphAI API Key** - For web scraping capabilities
- **Google Sheets OAuth2** - For spreadsheet integration

Google Sheets Setup
Create a Google Sheets document with the following column structure:

| Column A | Column B | Column C | Column D | Column E | Column F | Column G |
|----------|----------|----------|----------|----------|----------|----------|
| symbol | current_price | change | change_percent | volume | market_cap | timestamp |
| AAPL | 225.50 | +2.15 | +0.96% | 45,234,567 | 3.45T | 2024-01-15 14:30:00 |

How it works

Key Steps:
- **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh stock data
- **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance
- **Data Processing**: Formats extracted data through a custom Code node for optimal spreadsheet compatibility and handles both single and multiple stock formats
- **Automated Storage**: Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking

Set up steps
Setup Time: 5-10 minutes
1. Configure Credentials: Set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials
2. Customize Target: Update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL)
3. Configure Schedule: Set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring
4. Map Spreadsheet: Connect to your Google Sheets document and configure column mapping for the stock data fields

Node Descriptions

Core Workflow Nodes:
- **Schedule Trigger** - Initiates the workflow at specified intervals
- **Yahoo Finance Stock Scraper** - Extracts real-time stock data using ScrapeGraphAI
- **Stock Data Formatter** - Processes and formats extracted data for spreadsheet compatibility
- **Google Sheets Stock Logger** - Saves formatted stock data to your spreadsheet

Data Flow: Trigger → Scraper → Formatter → Logger

Customization Examples

Track Multiple Stocks

    // In the ScrapeGraphAI node, modify the URL to track different stocks:
    const stockSymbols = ['AAPL', 'GOOGL', 'MSFT', 'TSLA'];
    const baseUrl = 'https://finance.yahoo.com/quote/';

Add Additional Data Fields

    // In the Code node, extend the data structure:
    const extendedData = {
      ...stockData,
      pe_ratio: extractedData.pe_ratio,
      dividend_yield: extractedData.dividend_yield,
      day_range: extractedData.day_range
    };

Custom Scheduling

    // Modify the Schedule Trigger for different frequencies:
    // Daily at 9:30 AM (market open): "0 30 9 * * *"
    // Every 15 minutes during market hours: "0 */15 9-16 * * 1-5"
    // Weekly on Monday: "0 0 9 * * 1"

Data Output Format
The workflow outputs structured JSON data with the following fields:

    {
      "symbol": "AAPL",
      "current_price": "225.50",
      "change": "+2.15",
      "change_percent": "+0.96%",
      "volume": "45,234,567",
      "market_cap": "3.45T",
      "timestamp": "2024-01-15T14:30:00Z"
    }

Troubleshooting

Common Issues
- ScrapeGraphAI Rate Limits - Implement delays between requests
- Yahoo Finance Structure Changes - Update scraping prompts
- Google Sheets Permission Errors - Verify OAuth2 credentials and document permissions

Performance Tips
- Use appropriate trigger intervals (avoid excessive scraping)
- Implement error handling for network issues
- Consider data validation before saving to sheets

Pro Tips:
- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a single stock first before scaling to multiple stocks
- Consider modifying the Code node to handle different stock symbols or add additional data fields
- Perfect for building a historical database of stock performance over time
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt
by Cai Yongji
GitHub Trending to Supabase (Daily, Weekly, Monthly)

Who is this for?
This workflow is for developers, researchers, founders, and data analysts who want a historical dataset of GitHub Trending repositories without manual scraping. It’s ideal for building dashboards, newsletters, or trend analytics on top of a clean Supabase table.

What problem is this workflow solving?
Checking GitHub Trending by hand (daily/weekly/monthly) is repetitive and error-prone. This workflow automates collection, parsing, and storage so you can reliably track changes over time and query them from Supabase.

What this workflow does
- Scrapes GitHub Trending across Daily, Weekly, and Monthly timeframes using FireCrawl.
- Extracts per-project fields: name, url, description, language, stars.
- Adds a type dimension (daily / weekly / monthly) to each row.
- Inserts structured results into a Supabase table for long-term storage.

Setup
1. Ensure you have an n8n instance (Cloud or self-hosted).
2. Create credentials:
   - FireCrawl API credential (no hardcoded keys in nodes).
   - Supabase credential (URL + Service Role / insert-capable key).
3. Prepare a Supabase table (example):

    CREATE TABLE public.githubtrending (
      id bigint GENERATED ALWAYS AS IDENTITY NOT NULL,
      created_at timestamp with time zone NOT NULL DEFAULT now(),
      data_date date DEFAULT now(),
      url text,
      project_id text,
      project_desc text,
      code_language text,
      stars bigint DEFAULT '0'::bigint,
      type text,
      CONSTRAINT githubtrending_pkey PRIMARY KEY (id)
    );

4. Import this workflow JSON into n8n.
5. Run once to validate, then schedule (e.g., daily at 08:00).
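A hedged sketch of the mapping step between the FireCrawl extraction and the Supabase insert. The FireCrawl output field names (repos, name, stars, etc.) are assumptions; the target columns match the table definition above:

```javascript
// n8n Code node – map scraped trending repos to githubtrending rows.
// `timeframe` is set per branch of the workflow: 'daily', 'weekly' or 'monthly'.
const timeframe = 'daily';
const repos = $json.repos ?? []; // assumed field holding the FireCrawl extraction result

return repos.map(repo => ({
  json: {
    url: repo.url,
    project_id: repo.name,                                      // e.g. "owner/repo"
    project_desc: repo.description ?? '',
    code_language: repo.language ?? '',
    stars: parseInt(String(repo.stars).replace(/,/g, ''), 10) || 0,
    type: timeframe,
    data_date: new Date().toISOString().slice(0, 10),
  },
}));
```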
by Lorena
This workflow exports a local CSV file to a JSON file.
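If you prefer to do the conversion in a Code node rather than the built-in file nodes, a minimal sketch could look like this. It assumes the raw CSV text is available as a string, has a header row, and contains no quoted commas:

```javascript
// n8n Code node – naive CSV-to-JSON conversion (no quoted fields or embedded commas).
const csv = $json.data; // assumed: raw CSV text passed from the previous node
const [headerLine, ...lines] = csv.trim().split('\n');
const headers = headerLine.split(',').map(h => h.trim());

return lines.map(line => {
  const values = line.split(',');
  const row = Object.fromEntries(headers.map((h, i) => [h, values[i]?.trim() ?? '']));
  return { json: row };
});
```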