by Paul
Agent XML System Message Engineering: Enabling Robust Enterprise Integration and Automation

Why Creating System Messages in XML Is Important

XML (Extensible Markup Language) engineering is a foundational technique in modern software and system architecture. It enables the structured creation, storage, and exchange of messages—such as system instructions, configuration, or logs—by providing a human-readable, platform-independent, and machine-processable format. Here’s why this matters and how big tech companies leverage it.

Importance of XML in Engineering

- **Standardization & Interoperability:** XML provides a consistent way to model and exchange data between different software components, no matter the underlying technology. This enables seamless integration of diverse systems, both internally within companies and externally across partners or clients.
- **Traceability & Accountability:** By capturing not only the data but also its context (e.g., source, format, transformation steps), XML enables engineers to trace logic, troubleshoot issues, and ensure regulatory compliance. This is particularly crucial in sectors like finance, healthcare, and engineering, where audit trails and documentation are mandatory.
- **Configuration & Flexibility:** XML files are widely used for application settings. The clear hierarchical structure allows easy updates, quick testing of setups, and management of complex configurations—without deep developer intervention.
- **Reusability & Automation:** Automating the creation of system messages or logs in XML allows organizations to reuse and adapt those messages for various systems or processes, reducing manual effort and errors while improving scalability.

How Big Tech Companies Use XML

- **System Integration and Messaging:** Large enterprises including Amazon, Google, Microsoft, and SAP use XML for encoding, transporting, and processing data between distributed systems via web services (such as SOAP and REST APIs), often at web scale.
- **Business Process Automation:** In supply chain management, e-commerce, and transactional processing, XML enables rapid, secure, and traceable information exchange—helping automate operations that cross organizational and geographical borders.
- **Content Management & Transformation:** Companies use XML to manage and deliver dynamic content—such as translations, different document layouts, or multi-channel publishing—by separating data from its presentation and enabling real-time transformations through XSLT or similar technologies.
- **Data Storage, Validation, and Big Data:** XML schema definitions (XSD) and XML’s well-defined structure are used by enterprises to validate and store data models, supporting compatibility and quality across complex systems, including big data applications.

Why XML System Message Engineering Remains Relevant

> “XML is currently the most sophisticated format for distributed data — the World Wide Web can be seen as one huge XML database... Rapid adoption by industry [reinforces] that XML is no longer optional.”

XML brings consistency, scalability, and reliability to how software communicates, making development faster and systems more robust. Enterprises continue to use XML alongside newer formats (like JSON) wherever rich validation, structured messaging, and backward compatibility with legacy systems are required.

In summary: XML engineering empowers organizations, especially tech giants, to build, scale, and manage complex digital ecosystems by facilitating integration, automation, traceability, and standardization of data and messages across their platforms, operations, and partners.
by Shahrear
Automatically process Construction Blueprints into structured Excel entries with VLM extraction

> Disclaimer: This template uses community nodes, including the VLM Run node. It requires a self-hosted n8n instance and will not run on n8n Cloud.

What this workflow does

- Monitors OneDrive for new blueprints in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for VLM analysis
- Fetches details from the construction.blueprint domain as JSON
- Appends normalized fields to an Excel sheet as a new row

Setup

Prerequisites: Microsoft account, VLM Run API credentials, OneDrive access, Excel Online, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick Setup:

- Create the OneDrive folder you want to watch and copy its Folder ID
  - OneDrive web: open the folder in your browser, then copy the value of the id= URL parameter. It is URL-encoded.
  - Alternative in n8n: use a OneDrive node with the operation set to List to browse folders and copy the id field from the response.
- Create an Excel sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, document_type, document_number, issue_date, author_name, drawing_title_numbers, revision_history, job_name, address, drawing_number, revision, drawn_by, checked_by, scale_information, agency_name, document_title, blueprint_id, blueprint_status, blueprint_owner, blueprint_url
- Configure OneDrive OAuth2 for the trigger and download nodes
  - Use Microsoft OAuth2 in n8n. Approve the requested scopes for file access and offline access when prompted. Test the connection by listing a known folder.
- Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node
- Configure Excel Online OAuth2 and set the Spreadsheet ID and target sheet tab
- Test by uploading a sample file to the watched OneDrive folder, then activate the workflow

Perfect for

- Converting uploaded construction blueprint documents into clean text
- Organizing extracted blueprint details into structured sheets
- Quickly accessing key attributes from technical files
- Maintaining a centralized archive of blueprint-to-text conversions

Key Benefits

- **End-to-end automation** from OneDrive upload to structured Excel entry
- **Accurate text extraction** from construction blueprint documents
- **Organized attribute mapping** for consistent records
- **Searchable archives** directly in Excel
- **Hands-free processing** after setup

How to customize

Extend by adding:

- Version control that links revisions of the same drawing and highlights superseded rows
- Confidence scores per extracted field with threshold-based routing to manual or AI review
- An auto-generated, human-readable summary column for quick scanning of blueprint details
- Splitting of large multi-sheet PDFs into per-drawing rows with individual attributes
- Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
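Since the id= parameter in the OneDrive folder URL is URL-encoded, it can be extracted and decoded programmatically; a minimal sketch (the sample URL below is hypothetical, for illustration only):

```javascript
// Minimal sketch: pull the "id" query parameter out of a OneDrive folder
// URL. URLSearchParams returns the percent-decoded value, so no extra
// decoding step is needed.
function extractFolderId(folderUrl) {
  return new URL(folderUrl).searchParams.get("id");
}

// Hypothetical example URL:
// extractFolderId("https://onedrive.live.com/?id=ABC%21123") returns "ABC!123"
```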
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

Introduction

This workflow automates Stripe checkout confirmations by capturing transaction data and syncing it into KlickTipp. Upon successful checkout, the contact's data is enriched with purchase details and tagged to trigger a personalized confirmation campaign in KlickTipp. Perfect for digital product sellers, course creators, and service providers seeking an end-to-end automated sales confirmation process.

Benefits

- **Instant confirmation emails**: Automatically notify customers upon successful checkout—no manual processing needed.
- **Structured contact data**: Order data (invoice link, amount, transaction ID, products) is stored in KlickTipp custom fields.
- **Smart campaign triggering**: Assign dynamic tags to start automated confirmation or fulfillment sequences.
- **Seamless digital delivery**: Ideal for pairing with tools like Memberspot or Mentortools to unlock digital products post-checkout.

Key Features

- **Stripe Webhook Trigger**: Triggers on checkout.session.completed events and captures checkout data including product names, order number, and total amount.
- **KlickTipp Contact Sync**: Adds or updates contacts in KlickTipp, maps Stripe data into custom fields, and assigns a tag such as Stripe Checkout to initiate a confirmation campaign.
- **Router Logic (optional)**: Branches logic based on product ID or Stripe payment link, enabling product-specific campaigns or follow-ups.

Setup Instructions

KlickTipp Preparation

Create the following custom fields in your KlickTipp account:

| Field Name | Field Type |
|--------------------------|------------------|
| Stripe \| Products | Text |
| Stripe \| Total | Decimal Number |
| Stripe \| Payment ID | Text |
| Stripe \| Receipt URL | URL |

Define a tag for each product or confirmation flow, e.g., Order: Course XYZ.

Credential Configuration

Connect your Stripe account using an API key from the Stripe Dashboard.
Authenticate your KlickTipp connection with username/password credentials (API access required).

Field Mapping and Workflow Alignment

- Map Stripe output fields to the KlickTipp custom fields.
- Assign the tag to trigger your post-purchase campaign.
- Ensure that required data like email and opt-in info are present for the contact to be valid.

Testing and Deployment

1. Toggle the workflow from Inactive to Active.
2. Perform a test payment using a Stripe product link.
3. Verify in KlickTipp: the contact appears with email and opt-in status, the Stripe custom fields are filled, and the campaign tag is correctly applied so the confirmation email is sent.

⚠️ Note: Use real or test-mode API keys in Stripe depending on your testing environment. Stripe events may take a few seconds to propagate.

Campaign Expansion Ideas

- Launch targeted upsell flows based on the product tag.
- Use confirmation placeholders like: [[Stripe | Products]], [[Stripe | Total]], [[Stripe | Payment ID]], [[Stripe | Receipt URL]]
- Route customers to different product access portals (e.g., Memberspot, Mentortools).
- Send follow-up content over multiple days using KlickTipp sequences.

Customization

You can extend the scenario using a switch node to:

- Assign different tags per payment link used
- Branch into upsell or membership activation flows
- Chain additional automations like CRM entry, Slack notification, or invoice creation

Resources:

- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
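As a sketch of the field mapping above, a Code node could pull the relevant values straight out of the checkout.session.completed event payload. The payload field names follow Stripe's Checkout Session object; the mapping targets on the right are this template's custom fields, and your actual workflow expressions may differ:

```javascript
// Sketch: extract the values mapped to KlickTipp custom fields from a
// Stripe checkout.session.completed event. event.data.object is the
// Checkout Session; amount_total is in the smallest currency unit.
function mapCheckoutSession(event) {
  const session = event.data.object;
  return {
    email: session.customer_details && session.customer_details.email,
    paymentId: session.payment_intent,        // "Stripe | Payment ID"
    total: (session.amount_total || 0) / 100, // "Stripe | Total"
    currency: session.currency,
  };
}
```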
by Harry Gunadi Permana
Get Forex Factory News Release to Telegram

This n8n template demonstrates how to capture Actual Data Releases as quickly as possible for trading decisions.

Use cases:

- Get notified if the actual data release is positive or negative for the relevant currency.
- Use the Telegram chat message about the news release as a trigger to open a trading position in MetaTrader 4.

How it works

1. A news release event acts as the trigger. Only news with a numerical Forecast value will be processed. Events that cannot be measured numerically (e.g., speeches) are ignored.
2. Extract news details: currency, impact level (high/medium), release date, and news link.
3. Wait 10 seconds to ensure the Actual value is available on the news page.
4. Scrape the Actual value from the news link using Airtop. If the Actual value is not available, wait another 5 seconds and retry scraping.
5. Extract both Actual and Forecast values from the scraped content.
6. Remove non-numeric characters (%, K, M, B, T) and convert values to numbers.
7. Determine the effect: if the Actual value is lower than the Forecast value (and lower is better), send it to the True branch. Otherwise, send it to the False branch.

How to use

1. Enter all required credentials.
2. Run the workflow.

Requirements

- Google Calendar credentials
- Airtop API key
- Telegram Chat ID
- Telegram Bot API token

Need Help? Join the Discord or ask in the Forum! Thank you!

Update Sept 26, 2025: Add new edit node
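The normalization and routing steps above can be condensed into two small helpers. This is an illustrative sketch rather than the template's exact Code-node contents:

```javascript
// Strip the unit suffixes Forex Factory uses (%, K, M, B, T) plus
// thousands separators, then parse. Actual and Forecast share the same
// unit, so comparing the stripped numbers is still meaningful.
function toNumber(raw) {
  const n = parseFloat(String(raw).replace(/[%KMBT,]/g, "").trim());
  return Number.isNaN(n) ? null : n;
}

// Routing rule described above: lower-than-forecast goes to the True
// branch when a lower reading is better for the currency.
function goesToTrueBranch(actual, forecast, lowerIsBetter) {
  return lowerIsBetter ? actual < forecast : actual > forecast;
}
```

Non-numeric events (e.g., speeches) yield null from toNumber and can be filtered out before the comparison.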
by Yaron Been
Generate Custom Text Content with IBM Granite 3.3 8B Instruct AI

This workflow connects to Replicate’s API and uses the ibm-granite/granite-3.3-8b-instruct model to generate text.

✅ 🔵 SECTION 1: Trigger & Setup

⚙️ Nodes

1️⃣ On clicking 'execute'
- **What it does:** Starts the workflow manually when you hit **Execute**.
- **Why it’s useful:** Perfect for testing text generation on-demand.

2️⃣ Set API Key
- **What it does:** Stores your **Replicate API key** securely.
- **Why it’s useful:** You don’t hardcode credentials into HTTP nodes — just set them once here.
- **Beginner tip:** Replace YOUR_REPLICATE_API_KEY with your actual API key.

💡 Beginner Benefit
✅ No coding needed to handle authentication.
✅ You can reuse the same setup for other Replicate models.

✅ 🤖 SECTION 2: Model Request & Polling

⚙️ Nodes

3️⃣ Create Prediction (HTTP Request)
- **What it does:** Sends a **POST request** to Replicate’s API to start a text generation job.
- **Parameters include:** temperature, max_tokens, top_k, top_p.
- **Why it’s useful:** Controls how creative or focused the AI text output will be.

4️⃣ Extract Prediction ID (Code)
- **What it does:** Pulls the **prediction ID** and builds a URL for checking status.
- **Why it’s useful:** Replicate jobs run asynchronously, so you need the ID to track progress.

5️⃣ Wait
- **What it does:** Pauses for **2 seconds** before checking the prediction again.
- **Why it’s useful:** Prevents spamming the API with too many requests.

6️⃣ Check Prediction Status (HTTP Request)
- **What it does:** Polls the Replicate API for the **current status** (e.g., starting, processing, succeeded).
- **Why it’s useful:** Lets you loop until the AI finishes generating text.

7️⃣ Check If Complete (IF Condition)
- **What it does:** If the status is **succeeded**, it goes to “Process Result.” Otherwise, it loops back to **Wait** and retries.
- **Why it’s useful:** Creates an automated polling loop without writing complex code.

💡 Beginner Benefit
✅ No need to manually refresh or check job status.
✅ Workflow keeps retrying until text is ready.
✅ Smart looping built-in with Wait + If Condition.

✅ 🟢 SECTION 3: Process & Output

⚙️ Nodes

8️⃣ Process Result (Code)
- **What it does:** Collects the final **AI output**, status, metrics, and timestamps.
- **Adds info like:**
  - ✅ output → Generated text
  - ✅ model → ibm-granite/granite-3.3-8b-instruct
  - ✅ metrics → Performance data
- **Why it’s useful:** Gives you a neat, structured JSON result that’s easy to send to Sheets, Notion, or any app.

💡 Beginner Benefit
✅ Ready-to-use text output.
✅ Easy integration with any database or CRM.
✅ Transparent metrics (when it started, when it finished, etc.).

✅✅✅ ✨ FULL FLOW OVERVIEW

| Section | What happens |
| ------------------------------ | ---------------------------------------------------------------------------- |
| ⚡ Trigger & Setup | Start workflow + set Replicate API key. |
| 🤖 Model Request & Polling | Send request → get Prediction ID → loop until job completes. |
| 🟢 Process & Output | Extract clean AI-generated text + metadata for storage or further workflows. |

📌 How You Benefit Overall
✅ No coding needed — just configure your API key.
✅ Reliable polling — the workflow waits until results are ready.
✅ Flexible — you can extend output to Google Sheets, Slack, Notion, or email.
✅ Beginner-friendly — clean separation of input, process, and output.

✨ With this workflow, you’ve turned Replicate’s IBM Granite LLM into a no-code text generator — running entirely inside n8n! ✨
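The create-then-poll pattern used across Sections 2 and 3 can also be sketched in plain code. This is a simplified sketch of Replicate's predictions API, assuming a valid API token; the terminal status values follow Replicate's documented lifecycle, and the exact endpoint your node uses may differ:

```javascript
// Terminal states of a Replicate prediction; anything else means
// the job is still running and should be polled again.
const TERMINAL = new Set(["succeeded", "failed", "canceled"]);

function isComplete(status) {
  return TERMINAL.has(status);
}

// Sketch of the full loop (requires Node 18+ for global fetch).
// The model-scoped endpoint form is used here; a version-pinned
// POST to /v1/predictions is the alternative.
async function generate(token, input) {
  const create = await fetch(
    "https://api.replicate.com/v1/models/ibm-granite/granite-3.3-8b-instruct/predictions",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({ input }),
    }
  );
  let prediction = await create.json();
  while (!isComplete(prediction.status)) {
    await new Promise((r) => setTimeout(r, 2000)); // mirrors the 2-second Wait node
    const poll = await fetch(prediction.urls.get, {
      headers: { Authorization: `Bearer ${token}` },
    });
    prediction = await poll.json();
  }
  return prediction; // prediction.output holds the generated text on success
}
```

The Check If Complete node corresponds to the isComplete predicate: only succeeded routes to Process Result, while failed and canceled should be handled as errors.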
by John Pranay Kumar Reddy
🧩 Short Summary

Proactively alert on service endpoint changes and pod/container issues (Pending, Not Ready, restart spikes) using Prometheus metrics, formatted and sent to Slack.

🗂️ Category

DevOps / Monitoring & Observability

🏷️ Tags

kubernetes, prometheus, slack, alerting, sre, ops, kube-state-metrics

✅ Prerequisites

- Prometheus scraping kube-state-metrics v2.x.
- Slack App or Incoming Webhook (channel access).
- n8n instance with outbound access to Prometheus & Slack.

🔑 Required Credentials in n8n

- Slack: Bot OAuth (chat:write) or Incoming Webhook URL.
- (Optional) Prometheus Basic Auth (if your Prometheus needs it).

🧠 What This Template Does

- Detects pods stuck in Pending (scheduling problems like taints/affinity/capacity).
- Detects containers Not Ready (readiness probe failures).
- Detects container restart spikes over a sliding window (default 5 minutes).
- Detects service discovery changes (endpoint count diffs, current vs. previous snapshot).
- Sends clean, emoji-enhanced Slack alerts with pod/namespace/service context.
- Outputs a 5-minute summary block to reduce noise.

📣 Slack Message Style (examples)
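The detection checks above boil down to PromQL queries issued against Prometheus' HTTP query API. The metric names below follow kube-state-metrics v2.x; the restart threshold of 3 in 5 minutes is an illustrative assumption, not the template's exact setting:

```javascript
// Illustrative PromQL per alert category (kube-state-metrics v2.x names).
const QUERIES = {
  pendingPods: 'kube_pod_status_phase{phase="Pending"} == 1',
  notReady: 'kube_pod_container_status_ready == 0',
  restartSpikes: 'increase(kube_pod_container_status_restarts_total[5m]) > 3',
};

// Build the Prometheus instant-query URL for a given expression.
function queryUrl(baseUrl, promql) {
  return `${baseUrl}/api/v1/query?query=${encodeURIComponent(promql)}`;
}

// Sketch: run all checks and collect matching series (Node 18+ fetch).
async function runChecks(baseUrl) {
  const results = {};
  for (const [name, q] of Object.entries(QUERIES)) {
    const res = await fetch(queryUrl(baseUrl, q));
    results[name] = (await res.json()).data.result; // series that fired
  }
  return results;
}
```

Each returned series carries pod, namespace, and container labels, which is what gets interpolated into the Slack message context.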
by Harshil Agrawal
This workflow allows you to create a quote and a transfer, execute the transfer, and retrieve the transfer details using the Wise node.

- Wise node: Creates a new quote in Wise.
- Wise1 node: Creates a new transfer for the quote created in the previous node.
- Wise2 node: Executes the transfer created in the previous node.
- Wise3 node: Returns the details of the transfer that was executed.
by PDF Vector
Overview

Conducting comprehensive literature reviews is one of the most time-consuming aspects of academic research. This workflow revolutionizes the process by automating literature search, paper analysis, and review generation across multiple academic databases. It handles both digital papers and scanned documents (PDFs, JPGs, PNGs), using OCR technology for older publications or image-based content.

What You Can Do

- Automate searches across multiple academic databases simultaneously
- Analyze and rank papers by relevance, citations, and impact
- Generate comprehensive literature reviews with proper citations
- Process both digital and scanned documents with OCR
- Identify research gaps and emerging trends systematically

Who It's For

Researchers, graduate students, academic institutions, literature review teams, and academic writers who need to conduct comprehensive literature reviews efficiently while maintaining high quality and thoroughness.

The Problem It Solves

Manual literature reviews are extremely time-consuming and often miss relevant papers across different databases. Researchers struggle to synthesize large volumes of academic papers, track citations properly, and identify research gaps systematically. This template automates the entire process from search to synthesis, ensuring comprehensive coverage and proper citation management.

Setup Instructions:

- Configure PDF Vector API credentials with academic search access
- Set up search parameters including databases and date ranges
- Define inclusion and exclusion criteria for paper selection
- Choose a citation style (APA, MLA, Chicago, etc.)
- Configure output format preferences
- Set up reference management software integration if needed
- Define the research topic and keywords for search

Key Features:

- Simultaneous search across PubMed, arXiv, Semantic Scholar, and other databases
- Intelligent paper ranking based on citation count, recency, and relevance
- OCR support for scanned documents and older publications
- Automatic extraction of methodologies, findings, and limitations
- Citation network analysis to identify seminal works
- Automatic theme organization and research gap identification
- Multiple citation format support (APA, MLA, Chicago)
- Quality scoring based on journal impact factors

Customization Options:

- Configure search parameters for specific research domains
- Set up automated searches for ongoing literature monitoring
- Integrate with reference management software (Zotero, Mendeley)
- Customize output format and structure
- Add collaborative review features for research teams
- Set up quality filters based on journal rankings
- Configure notification systems for new relevant papers

Implementation Details:

The workflow uses advanced algorithms to search multiple academic databases simultaneously, ranking papers by relevance and impact. It processes full-text PDFs when available and uses OCR for scanned documents. The system automatically extracts key information, organizes findings by themes, and generates structured literature reviews with proper citations and reference management.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
by Samuel Heredia
Data Extraction from MongoDB

Overview

This workflow exposes a public HTTP GET endpoint to read all documents from a MongoDB collection, with:

- Strict validation of the collection name
- Error handling with proper 4xx codes
- Response formatting (e.g., _id → id) and a consistent 2xx JSON envelope

Workflow Steps

Webhook Trigger: *A public GET endpoint receives requests with the collection name as a parameter.*

The workflow begins with a webhook that listens for incoming HTTP GET requests. The endpoint follows this pattern:

https://{{your-n8n-instance}}/webhook-test/{{uuid}}/:nameCollection

The :nameCollection parameter is passed directly in the URL and specifies the MongoDB collection to be queried. Example: https://yourdomain.com/webhook-test/abcd1234/orders would attempt to fetch all documents from the orders collection.

Validation: *The collection name is checked against a set of rules to prevent invalid or unsafe queries.*

Before querying the database, the collection name undergoes validation using a regular expression:

^(?!system\.)[a-zA-Z0-9._]{1,120}$

Purpose of validation:

- Blocks access to MongoDB’s reserved system.* collections.
- Prevents injection attacks by ensuring only alphanumeric characters, underscores, and dots are allowed.
- Enforces MongoDB’s length restrictions (max 120 characters).

This step ensures the workflow cannot be exploited with malicious input.

Conditional Check: *If the validation fails, the workflow stops and returns an error message. If it succeeds, it continues.*

The workflow checks if the collection name passes validation.

- If valid ✅: proceeds to query MongoDB.
- If invalid ❌: immediately returns a structured HTTP 400 response, adhering to RESTful standards:

{ "code": 400, "message": "{{ $json.message }}" }

MongoDB Query: *The workflow connects to MongoDB and retrieves all documents from the specified collection.*

To use the MongoDB node, a proper database connection must be configured in n8n.
This is done through MongoDB Credentials in the node settings:

Create MongoDB Credentials in n8n

1. Go to n8n → Credentials → New.
2. Select MongoDB and fill in the following fields:
   - Host: The MongoDB server hostname or IP (e.g., cluster0.mongodb.net).
   - Port: Default is 27017 for local deployments.
   - Database: Name of the database (e.g., myDatabase).
   - User: MongoDB username with read permissions.
   - Password: Corresponding password.
   - Connection Type: Standard for most cases, or Connection String if using a full URI.
   - Replica Set / SRV Record: Enable if using MongoDB Atlas or a replica cluster.

Using a Connection String (recommended for MongoDB Atlas)

Example URI:

mongodb+srv://<username>:<password>@cluster0.mongodb.net/myDatabase?retryWrites=true&w=majority

Paste this into the Connection String field when selecting "Connection String" as the type.

Verify the Connection

After saving, test the credentials to confirm n8n can connect successfully to your MongoDB instance.

Configure the MongoDB Node in the Workflow

- Operation: Find (to fetch documents).
- Collection: Dynamic value passed from the workflow (e.g., {{$json["nameCollection"]}}).
- Query: Leave empty to fetch all documents, or define filters if needed.
- Result: The MongoDB node retrieves all documents from the specified collection and passes the dataset as JSON to the next node for processing.

Data Formatting: *The retrieved documents are processed to adjust field names.*

By default, MongoDB returns its unique identifier as _id. To align with common API conventions, this step renames _id → id. This small transformation simplifies downstream usage, making responses more intuitive for client applications.

Response: *The cleaned dataset is returned as a structured JSON response to the original request.*

The processed dataset is returned as the response to the original HTTP request. Clients receive a clean JSON payload with the expected format and renamed identifiers.
Example response:

[
  { "id": "64f13c1e2f1a5e34d9b3e7f0", "name": "John Doe", "email": "john@example.com" },
  { "id": "64f13c1e2f1a5e34d9b3e7f1", "name": "Jane Smith", "email": "jane@example.com" }
]

Workflow Summary

Webhook (GET) → Code (Validation) → IF (Validation Check) → MongoDB (Query) → Code (Transform IDs) → Respond to Webhook

Key Benefits

- ✅ Security-first design: prevents unauthorized access or injection attacks.
- ✅ Standards compliance: uses HTTP status codes (400) for invalid requests.
- ✅ Clean API response: transforms MongoDB’s native _id into a more user-friendly id.
- ✅ Scalability: ready for integration with any frontend, third-party service, or analytics pipeline.
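The two Code nodes in the summary above can be sketched as small helpers. The regular expression is the one quoted in the Validation step; the rename mirrors the Data Formatting step:

```javascript
// Collection-name validation: rejects system.* collections and any name
// outside [a-zA-Z0-9._] or longer than MongoDB's 120-character limit.
const COLLECTION_RE = /^(?!system\.)[a-zA-Z0-9._]{1,120}$/;

function isValidCollection(name) {
  return COLLECTION_RE.test(name);
}

// Rename MongoDB's _id to the API-friendly id, keeping all other fields.
function renameId(doc) {
  const { _id, ...rest } = doc;
  return { id: String(_id), ...rest };
}
```

In the workflow, isValidCollection gates the IF node (valid → query, invalid → 400 response), and renameId is applied to each document before the Respond to Webhook node.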
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

Key Steps:

- **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh stock data
- **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance
- **Data Processing**: Formats extracted data through a custom Code node for optimal spreadsheet compatibility and handles both single and multiple stock formats
- **Automated Storage**: Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking

Set up steps

Setup Time: 5-10 minutes

1. Configure Credentials: Set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials
2. Customize Target: Update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL)
3. Configure Schedule: Set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring
4. Map Spreadsheet: Connect to your Google Sheets document and configure column mapping for the stock data fields

Pro Tips:

- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a single stock first before scaling to multiple stocks
- Consider modifying the Code node to handle different stock symbols or add additional data fields
- Perfect for building a historical database of stock performance over time
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt
by vinci-king-01
AI-Powered Stock Tracker with Yahoo Finance & Google Sheets

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

Pre-conditions/Requirements

Prerequisites

- n8n instance (self-hosted or cloud)
- ScrapeGraphAI community node installed
- Google Sheets API access
- Yahoo Finance access (no API key required)

Required Credentials

- **ScrapeGraphAI API Key** - For web scraping capabilities
- **Google Sheets OAuth2** - For spreadsheet integration

Google Sheets Setup

Create a Google Sheets document with the following column structure:

| Column A | Column B | Column C | Column D | Column E | Column F | Column G |
|----------|----------|----------|----------|----------|----------|----------|
| symbol | current_price | change | change_percent | volume | market_cap | timestamp |
| AAPL | 225.50 | +2.15 | +0.96% | 45,234,567 | 3.45T | 2024-01-15 14:30:00 |

How it works
Key Steps:

- **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh stock data
- **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance
- **Data Processing**: Formats extracted data through a custom Code node for optimal spreadsheet compatibility and handles both single and multiple stock formats
- **Automated Storage**: Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking

Set up steps

Setup Time: 5-10 minutes

1. Configure Credentials: Set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials
2. Customize Target: Update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL)
3. Configure Schedule: Set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring
4. Map Spreadsheet: Connect to your Google Sheets document and configure column mapping for the stock data fields

Node Descriptions

Core Workflow Nodes:

- **Schedule Trigger** - Initiates the workflow at specified intervals
- **Yahoo Finance Stock Scraper** - Extracts real-time stock data using ScrapeGraphAI
- **Stock Data Formatter** - Processes and formats extracted data for spreadsheet compatibility
- **Google Sheets Stock Logger** - Saves formatted stock data to your spreadsheet

Data Flow: Trigger → Scraper → Formatter → Logger

Customization Examples

Track Multiple Stocks

```javascript
// In the ScrapeGraphAI node, modify the URL to track different stocks:
const stockSymbols = ['AAPL', 'GOOGL', 'MSFT', 'TSLA'];
const baseUrl = 'https://finance.yahoo.com/quote/';
```

Add Additional Data Fields

```javascript
// In the Code node, extend the data structure:
const extendedData = {
  ...stockData,
  pe_ratio: extractedData.pe_ratio,
  dividend_yield: extractedData.dividend_yield,
  day_range: extractedData.day_range
};
```

Custom Scheduling

```javascript
// Modify the Schedule Trigger for different frequencies:
// Daily at 9:30 AM (market open): "0 30 9 * * *"
// Every 15 minutes during market hours: "0 */15 9-16 * * 1-5"
// Weekly on Monday: "0 0 9 * * 1"
```

Data Output Format

The workflow outputs structured JSON data with the following fields:

```json
{
  "symbol": "AAPL",
  "current_price": "225.50",
  "change": "+2.15",
  "change_percent": "+0.96%",
  "volume": "45,234,567",
  "market_cap": "3.45T",
  "timestamp": "2024-01-15T14:30:00Z"
}
```

Troubleshooting

Common Issues

- ScrapeGraphAI Rate Limits - Implement delays between requests
- Yahoo Finance Structure Changes - Update scraping prompts
- Google Sheets Permission Errors - Verify OAuth2 credentials and document permissions

Performance Tips

- Use appropriate trigger intervals (avoid excessive scraping)
- Implement error handling for network issues
- Consider data validation before saving to sheets

Pro Tips:

- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a single stock first before scaling to multiple stocks
- Consider modifying the Code node to handle different stock symbols or add additional data fields
- Perfect for building a historical database of stock performance over time
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt
by Cai Yongji
GitHub Trending to Supabase (Daily, Weekly, Monthly)

Who is this for?

This workflow is for developers, researchers, founders, and data analysts who want a historical dataset of GitHub Trending repositories without manual scraping. It’s ideal for building dashboards, newsletters, or trend analytics on top of a clean Supabase table.

What problem is this workflow solving?

Checking GitHub Trending by hand (daily/weekly/monthly) is repetitive and error-prone. This workflow automates collection, parsing, and storage so you can reliably track changes over time and query them from Supabase.

What this workflow does

- Scrapes GitHub Trending across Daily, Weekly, and Monthly timeframes using FireCrawl.
- Extracts per-project fields: name, url, description, language, stars.
- Adds a type dimension (daily / weekly / monthly) to each row.
- Inserts structured results into a Supabase table for long-term storage.

Setup

1. Ensure you have an n8n instance (Cloud or self-hosted).
2. Create credentials:
   - FireCrawl API credential (no hardcoded keys in nodes).
   - Supabase credential (URL + Service Role / insert-capable key).
3. Prepare a Supabase table (example):

```sql
CREATE TABLE public.githubtrending (
  id bigint GENERATED ALWAYS AS IDENTITY NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT now(),
  data_date date DEFAULT now(),
  url text,
  project_id text,
  project_desc text,
  code_language text,
  stars bigint DEFAULT '0'::bigint,
  type text,
  CONSTRAINT githubtrending_pkey PRIMARY KEY (id)
);
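The extraction-to-row mapping can be sketched as a small transform matching the table columns above. The input field names (name, url, description, language, stars) follow the extraction list; the comma-stripping for stars is an assumption about how the scraped counts are formatted:

```javascript
// Sketch: map one parsed trending repo to a githubtrending row.
// "timeframe" carries the type dimension: "daily" | "weekly" | "monthly".
function toRow(repo, timeframe) {
  return {
    project_id: repo.name,
    url: repo.url,
    project_desc: repo.description,
    code_language: repo.language,
    // GitHub formats star counts with thousands separators, e.g. "12,345"
    stars: parseInt(String(repo.stars).replace(/,/g, ""), 10) || 0,
    type: timeframe,
  };
}
```

One such row per repository is what the Supabase insert node receives for each of the three timeframe branches.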