by Oneclick AI Squad
This automated n8n workflow continuously monitors airline schedule changes by fetching real-time flight data, comparing it with stored schedules, and instantly notifying both internal teams and affected passengers through multiple communication channels. The system ensures stakeholders are immediately informed of any flight delays, cancellations, gate changes, or other critical updates.

## Good to Know
- Flight data accuracy depends on the aviation API provider's update frequency and reliability
- Critical notifications (cancellations, major delays) trigger immediate passenger alerts via SMS and email
- Internal Slack notifications keep operations teams informed in real time
- Database logging maintains a complete audit trail of all schedule changes
- The system processes only confirmed schedule changes to avoid false notifications
- Passenger notifications are sent only to those with confirmed tickets for affected flights

## How It Works
1. **Schedule Trigger** - Automatically runs every 30 minutes to check for flight schedule updates
2. **Fetch Airline Data** - Retrieves current flight information from aviation APIs
3. **Get Current Schedules** - Pulls existing schedule data from the internal database
4. **Process Changes** - Compares API data with database records to identify schedule changes (see the sketch after this list)
5. **Check for Changes** - Determines if any updates require processing and notifications
6. **Update Database** - Saves schedule changes to the internal flight database
7. **Notify Slack Channel** - Sends operational updates to the flight operations team
8. **Check Urgent Notifications** - Identifies critical changes requiring immediate passenger alerts
9. **Get Affected Passengers** - Retrieves contact information for passengers on changed flights
10. **Send Email Notifications** - Dispatches detailed schedule change emails via SendGrid
11. **Send SMS (Critical Only)** - Sends urgent text alerts for cancellations and major delays
12. **Update Internal Systems** - Syncs changes with other airline systems via webhooks
13. **Log Sync Activity** - Records all synchronization activities for audit and monitoring
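The comparison logic behind the Process Changes step can be sketched in plain Python. This is a minimal illustration rather than the node's actual code: the field names follow the flight_schedules columns listed under Data Sources, while the `classify_severity` helper and the 60-minute delay threshold are assumptions made for the example.

```python
from datetime import datetime

# Fields from the flight_schedules table compared against the aviation API payload.
TRACKED_FIELDS = ["departure_time", "arrival_time", "status", "gate", "terminal"]

def diff_schedule(api_record: dict, db_record: dict) -> dict | None:
    """Return the changed fields for one flight, or None if nothing changed."""
    changes = {
        field: {"old": db_record.get(field), "new": api_record.get(field)}
        for field in TRACKED_FIELDS
        if api_record.get(field) != db_record.get(field)
    }
    if not changes:
        return None
    return {
        "flight_number": db_record["flight_number"],
        "changes": changes,
        "severity": classify_severity(changes),
    }

def classify_severity(changes: dict) -> str:
    """Hypothetical severity rule: cancellations and long delays are critical."""
    if changes.get("status", {}).get("new") == "cancelled":
        return "critical"
    if "departure_time" in changes:
        old = datetime.fromisoformat(changes["departure_time"]["old"])
        new = datetime.fromisoformat(changes["departure_time"]["new"])
        if (new - old).total_seconds() > 60 * 60:  # delay of more than 60 minutes
            return "critical"
    return "normal"
```

Critical results would branch to the SMS path, while normal changes only update the database and the Slack channel.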
## Data Sources
The workflow integrates with multiple data sources and systems:

**Aviation API (Primary Data Source)**
- Real-time flight status and schedule data
- Departure/arrival times, gates, terminals
- Flight status (on-time, delayed, cancelled, diverted)
- Aircraft and route information

**Internal Flight Database**

`flight_schedules` table - Current schedule data with columns:
- flight_number (text) - Flight identifier (e.g., "AA123")
- departure_time (timestamp) - Scheduled departure time
- arrival_time (timestamp) - Scheduled arrival time
- status (text) - Flight status (active, delayed, cancelled, diverted)
- gate (text) - Departure gate number
- terminal (text) - Terminal identifier
- airline_code (text) - Airline IATA code
- origin_airport (text) - Departure airport code
- destination_airport (text) - Arrival airport code
- aircraft_type (text) - Aircraft model
- updated_at (timestamp) - Last update timestamp
- created_at (timestamp) - Record creation timestamp

`passengers` table - Passenger contact information with columns:
- passenger_id (integer) - Unique passenger identifier
- name (text) - Full passenger name
- email (text) - Email address for notifications
- phone (text) - Mobile phone number for SMS alerts
- notification_preferences (json) - Communication preferences
- created_at (timestamp) - Registration timestamp
- updated_at (timestamp) - Last profile update

`tickets` table - Booking and ticket status with columns:
- ticket_id (integer) - Unique ticket identifier
- passenger_id (integer) - Foreign key to passengers table
- flight_number (text) - Flight identifier
- flight_date (date) - Travel date
- seat_number (text) - Assigned seat
- ticket_status (text) - Status (confirmed, cancelled, checked-in)
- booking_reference (text) - Booking confirmation code
- fare_class (text) - Ticket class (economy, business, first)
- created_at (timestamp) - Booking timestamp
- updated_at (timestamp) - Last modification timestamp

`sync_logs` table - Audit trail and system logs with columns:
- log_id (integer) - Unique log identifier
- workflow_name (text) - Name of the workflow that created the log
- total_changes (integer) - Number of schedule changes processed
- sync_status (text) - Status (completed, failed, partial)
- sync_timestamp (timestamp) - When the sync occurred
- details (json) - Detailed log information and changes
- error_message (text) - Error details if sync failed
- execution_time_ms (integer) - Processing time in milliseconds

## Communication Channels
- Slack - Internal team notifications
- SendGrid - Passenger email notifications
- Twilio - Critical SMS alerts
- Internal webhooks - System integrations

## How to Use
1. Import the workflow into your n8n instance
2. Configure aviation API credentials (AviationStack, FlightAware, or airline-specific APIs)
3. Set up the PostgreSQL database connection with the required tables
4. Configure the Slack bot token for operations team notifications
5. Set up the SendGrid API key and email templates for passenger notifications
6. Configure Twilio credentials for SMS alerts (critical notifications only)
7. Test with sample flight data to verify all notification channels
8. Adjust monitoring frequency and severity thresholds based on operational needs
9. Monitor sync logs to ensure reliable data synchronization

## Requirements
**API Access**
- Aviation data provider (AviationStack, FlightAware, etc.)
- SendGrid account for email delivery
- Twilio account for SMS notifications
- Slack workspace and bot token

**Database Setup**
- PostgreSQL database with flight schedule tables
- Passenger and ticket management tables
- Audit logging tables for tracking changes

**Infrastructure**
- n8n instance with appropriate node modules
- Reliable internet connection for API calls
- Proper credential management and security

## Customizing This Workflow
Modify the Process Changes node to adjust change detection sensitivity, add custom business rules, or integrate additional data sources like weather or airport operational data. Customize notification templates in the email and SMS nodes to match your airline's branding and communication style. Adjust the Schedule Trigger frequency based on your operational requirements and API rate limits.
by Manu
In Grist, when I mark a row as confirmed (via a toggle), a webhook notifies n8n, and this workflow creates derived records in the destination table.

## Design decisions
- **Confirmation-based** - The source table has a boolean column "Confirmed" that triggers the transfer. This way there is a manual check involved, and triggering the workflow is a conscious step.
- **Runs once** - If the destination table already contains an entry, we will not re-create or update it (as it might've already been changed manually).

## Setup
1. Create a boolean column Confirmed in the source table
2. Add a webhook in Grist Settings
3. Add Grist API credentials in n8n
4. Set the document ID & source table ID/Name in the 'get existing' node
5. Set the docID, the destination table ID/Name, and the columns & values you want in the Create Row node
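The "runs once" decision boils down to checking the destination table before creating a row. The sketch below shows that idea against the Grist records API; it is an illustration of the logic, not the workflow's own nodes, and the server URL, table name, and key-column name are placeholders.

```python
import os
import requests

GRIST_SERVER = "https://docs.getgrist.com"   # placeholder: your Grist server
DOC_ID = "YOUR_DOC_ID"                       # placeholder document ID
DEST_TABLE = "Destination"                   # placeholder destination table ID/name
HEADERS = {"Authorization": f"Bearer {os.environ['GRIST_API_KEY']}"}

def create_if_absent(source_row: dict, key_field: str = "SourceRef") -> None:
    """Create a derived record only if no entry for this source row exists yet ('runs once')."""
    records_url = f"{GRIST_SERVER}/api/docs/{DOC_ID}/tables/{DEST_TABLE}/records"
    existing = requests.get(records_url, headers=HEADERS, timeout=30).json()["records"]

    # Skip if a record referencing this source row is already present.
    if any(rec["fields"].get(key_field) == source_row["id"] for rec in existing):
        return

    payload = {"records": [{"fields": {key_field: source_row["id"], "Name": source_row.get("Name")}}]}
    requests.post(records_url, headers=HEADERS, json=payload, timeout=30).raise_for_status()
```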
by Don Jayamaha Jr
⏱️ Analyze Tesla (TSLA) short-term market structure and momentum using 6 technical indicators on the 15-minute timeframe.

This AI agent tool is part of the Tesla Quant Trading AI Agent system. It is designed to detect intraday shifts in volatility, trend strength, and potential reversal signals.

⚠️ Not standalone. This agent is triggered via Execute Workflow by the Tesla Financial Market Data Analyst Tool.

🔌 Requires:
- Tesla Quant Technical Indicators Webhooks Tool
- Alpha Vantage Premium API Key

## 📊 What It Does
This workflow pulls the latest 20 data points for 6 key technical indicators from a webhook-powered source, then uses GPT-4.1 to interpret market momentum and structure.

Connected Indicators:
- **RSI (Relative Strength Index)**
- **MACD (Moving Average Convergence Divergence)**
- **BBANDS (Bollinger Bands)**
- **SMA (Simple Moving Average)**
- **EMA (Exponential Moving Average)**
- **ADX (Average Directional Index)**

The output is a structured JSON with:
- Market summary
- Timeframe (15m)
- Indicator values

## 📋 Sample Output
```json
{
  "summary": "TSLA shows fading momentum. RSI dropped below 60, MACD is flattening, and BBANDS are tightening. Expect short-term consolidation.",
  "timeframe": "15m",
  "indicators": {
    "RSI": 58.3,
    "MACD": { "macd": -0.020, "signal": -0.018, "histogram": -0.002 },
    "BBANDS": { "upper": 183.10, "lower": 176.70, "middle": 179.90, "close": 177.60 },
    "SMA": 178.20,
    "EMA": 177.70,
    "ADX": 19.6
  }
}
```

## 🧠 Agent Components
| Module | Role |
| --- | --- |
| Webhook Data Node | Calls the /15minData endpoint for Alpha Vantage indicators |
| LangChain Agent | Parses indicator payloads and generates reasoning |
| OpenAI GPT-4.1 | Powers the AI logic to interpret technical structure |
| Memory Module | Maintains session consistency for multi-agent calls |

## 🛠️ Setup Instructions
1. Import the workflow into n8n and name it: Tesla_15min_Indicators_Tool
2. Configure the webhook source: install and publish Tesla_Quant_Technical_Indicators_Webhooks_Tool and ensure /15minData is publicly reachable (or tunnel-enabled)
3. Add credentials: Alpha Vantage API Key (HTTP Query Auth) and OpenAI GPT-4.1 (OpenAI Chat Model)
4. Link as a sub-agent: this workflow is not triggered manually; it is executed using Execute Workflow by 👉 Tesla_Financial_Market_Data_Analyst_Tool, which passes in message (optional) and sessionId (for short-term memory linkage)

## 📌 Sticky Notes Summary
- 🟢 Trigger Integration – Receives sessionId and message from parent
- 🟡 Webhook Fetcher – Pulls Alpha Vantage data from /15minData
- 🧠 GPT-4.1 Reasoning – Produces structured JSON insight
- 🔵 Session Memory – Maintains evaluation flow across tools
- 📘 Tool Description – Explains indicator use and AI output format

## 🔒 Licensing & Author
© 2025 Treasurium Capital Limited Company. All logic, formatting, and agent design are protected under copyright. No resale or public re-use without permission.

Created by: Don Jayamaha
Creator Profile: https://n8n.io/creators/don-the-gem-dealer/

🚀 Build faster intraday Tesla trading models using clean 15-minute indicator insights—processed by AI. Required by the Tesla Financial Market Data Analyst Tool.
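For reference, the underlying indicator series come from Alpha Vantage's technical-indicator endpoints. The sketch below pulls 15-minute RSI for TSLA directly, just to illustrate the kind of data the /15minData webhook aggregates; the 14-period and close-price settings are assumptions and may differ from what the webhook tool actually uses.

```python
import os
import requests

# Illustrative Alpha Vantage call for one of the six indicators (RSI on the 15-minute interval).
params = {
    "function": "RSI",
    "symbol": "TSLA",
    "interval": "15min",
    "time_period": 14,        # assumed period
    "series_type": "close",   # assumed price series
    "apikey": os.environ["ALPHA_VANTAGE_API_KEY"],
}
resp = requests.get("https://www.alphavantage.co/query", params=params, timeout=30)
rsi_series = resp.json()["Technical Analysis: RSI"]

# Keep the latest 20 data points, matching what the agent consumes.
latest_20 = sorted(rsi_series.items(), reverse=True)[:20]
for timestamp, values in latest_20:
    print(timestamp, values["RSI"])
```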
by Ranjan Dailata
## Who this is for?
Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers and needing summarized company insights.
- Market researchers and analysts monitoring competitor or industry players.

## What problem is this workflow solving?
Searching and evaluating company profiles on Indeed manually is time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.

This workflow solves this by:
- Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

## What this workflow does
This automated pipeline does the following:
1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker.
2. Transforms the scraped content into structured JSON using n8n's built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized, formatted response to a specified webhook or app for real-time access, storage, or analysis.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or proxy).
5. Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

## How to customize this workflow to your needs
This workflow is built to be flexible - whether you're a recruiter, market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:
- **Changing the data source**: Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- **Refining the LLM prompt**: Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- **Routing the output to different destinations**: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
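As a rough sketch of the scraping step, the request below shows the general shape of a Bright Data Web Unlocker call. This is an assumption about the call, not the node's exact configuration: the zone name and target company URL are placeholders.

```python
import os
import requests

# Hypothetical Web Unlocker request: fetch a rendered Indeed company page through Bright Data.
payload = {
    "zone": "web_unlocker1",                                # placeholder zone name
    "url": "https://www.indeed.com/cmp/Example-Company",    # placeholder company page
    "format": "raw",                                        # return the raw HTML body
}
headers = {"Authorization": f"Bearer {os.environ['BRIGHT_DATA_TOKEN']}"}

resp = requests.post("https://api.brightdata.com/request", json=payload, headers=headers, timeout=120)
resp.raise_for_status()
html = resp.text  # handed to the transform and Gemini summarization steps
```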
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

## How It Works
The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events. It then analyzes this data to create timeline reports showing what happened to VMs during specified periods, defaulting to the last 90 days unless the user specifies otherwise.

## Prerequisites
To use this template, you'll need:
- n8n instance (cloud or self-hosted)
- Azure subscription with virtual machines
- Microsoft Azure Monitor OAuth2 API credentials
- Google Gemini API credentials
- Proper Azure permissions to read VM data and activity logs

## Setup Instructions
1. Import the template into n8n.
2. Configure credentials:
   - Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs
   - Add Google Gemini API credentials
3. Update workflow parameters:
   - Open the "Set Common Variables" node
   - Replace `<your azure subscription id here>` with your actual Azure subscription ID
4. Configure triggers:
   - The chat trigger will automatically generate a webhook URL for receiving chat messages
   - No additional trigger configuration is needed
5. Test the setup to ensure it works.

## Security Considerations
Use the minimum required Azure permissions (Reader role on the subscription or resource groups). Store API credentials securely in the n8n credential store. The Azure Monitor API has rate limits, so avoid excessive concurrent requests. Chat sessions use session-based memory that persists during conversations but doesn't retain data between separate chat sessions.

## Extending the Template
You can add more Azure monitoring tools like disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
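For context, the performance-metrics tool reduces to a call against the Azure Monitor metrics REST endpoint. The sketch below makes an equivalent request directly; the subscription, resource group, and VM names are placeholders, the bearer token is assumed to come from Azure AD (for example via azure-identity), and the agent's actual tool nodes may request different metrics, windows, or aggregations.

```python
import requests

# Placeholders - substitute your own identifiers and a valid Azure AD bearer token.
SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"
TOKEN = "<bearer token from Azure AD / azure-identity>"

resource_id = (
    f"/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Compute/virtualMachines/{VM}"
)
url = f"https://management.azure.com{resource_id}/providers/microsoft.insights/metrics"
params = {
    "api-version": "2018-01-01",
    "metricnames": "Percentage CPU,Network In Total,Network Out Total",
    "timespan": "2024-01-01T00:00:00Z/2024-03-31T00:00:00Z",  # example 90-day window
    "interval": "PT1H",
    "aggregation": "Average",
}
resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=60)
resp.raise_for_status()
for metric in resp.json()["value"]:
    print(metric["name"]["value"], len(metric["timeseries"][0]["data"]), "data points")
```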
by Ranjan Dailata
## Who this is for?
The Automate Etsy Data Mining with Bright Data Scrape & Google Gemini workflow is designed for eCommerce analysts, product researchers, and AI developers seeking to extract actionable insights from Etsy listings at scale. It is ideal for:
- **eCommerce Entrepreneurs** - Researching product demand and competition.
- **Market Analysts** - Tracking pricing, reviews, and trends across Etsy categories.
- **Product Managers** - Identifying niche opportunities and design inspirations.
- **Data Scientists & AI Engineers** - Automating product intelligence pipelines.
- **Growth Hackers** - Leveraging Etsy insights to refine product-market fit.

## What problem is this workflow solving?
Manually browsing Etsy to analyze product listings, pricing, reviews, and seller activity is slow, inconsistent, and unscalable. Scraping Etsy requires unlocking JavaScript-heavy content and structuring noisy data for analysis. This workflow solves this by providing:
- Automated and scalable scraping of Etsy product listings using Bright Data's infrastructure.
- Fully paginated, structured Etsy product data extraction via the Google Gemini LLM.
- Faster decision-making for product research and competitive analysis through the fully automated, paginated data extraction.

## What this workflow does
1. Receives input: sets the Etsy URL for the data extraction and analysis.
2. Uses Bright Data's Web Unlocker to extract content from the relevant pages.
3. Cleans and preprocesses the scraped content for readability.
4. Sends the content to Google Gemini for structured, enriched extraction.
5. Persists the results to disk.
6. Sends the response to a target system via webhook notification.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or proxy).
5. Update the Set Etsy Search Query node to set the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
- **Input Sources**: Replace the static URL with dynamic input from Google Sheets, Webhook, or Airtable to research multiple niches.
- **Prompt Customization**: Adjust the Gemini prompts to extract specific insights, for example: list the key features of the product, or summarize the review themes.
- **Data Output Options**: Update the webhook notification to save data to Google Sheets, Notion or Airtable, SQL/NoSQL, Slack/Email.
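The Gemini step can be thought of as one structured-extraction call over the cleaned page content. Below is a minimal sketch using the google-generativeai Python SDK; the model name, prompt wording, and output fields are illustrative assumptions, not the workflow's actual prompt or schema.

```python
import json
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

def extract_listings(cleaned_html: str) -> list[dict]:
    """Ask Gemini to turn a scraped Etsy results page into structured listing records."""
    prompt = (
        "From the following Etsy search results page, return a JSON array of listings "
        "with the fields: title, price, currency, rating, review_count, shop_name, url.\n\n"
        + cleaned_html
    )
    response = model.generate_content(prompt)
    # Assumes the model returns a bare JSON array; a production prompt would enforce this more strictly.
    return json.loads(response.text)
```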
by Yang
## 👥 Who is this for?
This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate extraction and delivery of clean, sorted data.

## 🧩 What problem is this solving?
Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email—all triggered automatically when a new URL is added to Google Sheets.

## ⚙️ What this workflow does
This template automates an entire content scraping and delivery process:
1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order based on price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail

## 🛠️ Setup
**Google Sheets**
- Create a sheet titled something like URLs
- Add your product listing URLs (e.g., http://books.toscrape.com)
- Connect the Google Sheets trigger node to your sheet
- Ensure you have proper credentials connected

**Dumpling AI**
- Create an account at Dumpling AI and generate your API key
- Set the HTTP Method to POST and pass the URL dynamically from the Google Sheet
- Use Header Auth to include your API key in the request header
- Make sure "cleaned": "True" is included in the body for optimized HTML output

**HTML Node**
- The first HTML node extracts the main book container blocks using: `.row > li`
- The second HTML node parses out the individual fields:
  - title: `h3 > a` (via the title attribute)
  - price: `.price_color`

**Sort Node**
- Sorts books by price in descending order
- Note: the price is extracted as a string; ensure it's parsable if you plan to use numeric filtering later

**Convert to CSV**
- The JSON data is passed into a Convert node and transformed into a CSV file

**Gmail**
- Sends the CSV as an attachment to a designated email

## 🔄 How to customize this workflow
- **Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links
- **Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform
- **Adjust sorting**: Sort alphabetically or based on another extracted value
- **Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page
- **Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets

## ⚠️ Dependencies and Notes
- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
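The extraction and sorting steps can be reproduced outside n8n with the same CSS selectors the HTML nodes use. The sketch below fetches books.toscrape.com directly (skipping Dumpling AI, which returns the cleaned HTML in the real workflow) to show how the selectors and the string-price caveat play out; it is an illustration, not the workflow's code.

```python
import re
import requests
from bs4 import BeautifulSoup

html = requests.get("http://books.toscrape.com", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

books = []
for item in soup.select(".row > li"):                    # same selector as the first HTML node
    title = item.select_one("h3 > a")["title"]           # title attribute, as in the second HTML node
    price_text = item.select_one(".price_color").text    # e.g. "£51.77" - a string, not a number
    price = float(re.search(r"[\d.]+", price_text).group())
    books.append({"title": title, "price": price})

# Descending sort by price, mirroring the Sort node.
books.sort(key=lambda b: b["price"], reverse=True)
print(books[:3])
```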
by Oneclick AI Squad
This n8n workflow automates the process of scraping LinkedIn profiles using the Apify platform and organizing the extracted data into Google Sheets for easy analysis and follow-up.

## Use Cases
- **Lead Generation**: Extract contact information and professional details from LinkedIn profiles
- **Recruitment**: Gather candidate information for talent acquisition
- **Market Research**: Analyze professional networks and industry connections
- **Sales Prospecting**: Build targeted prospect lists with detailed professional information

## How It Works
**1. Workflow Initialization & Input**
- **Webhook Start Scraper**: Triggers the entire scraping workflow
- **Read LinkedIn URLs**: Retrieves LinkedIn profile URLs from Google Sheets
- **Schedule Scraper Trigger**: Sets up automated scheduling for regular scraping

**2. Data Processing & Extraction**
- **Data Formatting**: Prepares and structures the LinkedIn URLs for processing
- **Fetch Profile Data**: Makes HTTP requests to the Apify API with profile URLs
- **Run Scraper Actor**: Executes the Apify LinkedIn scraper actor
- **Get Scraped Results**: Retrieves the extracted profile data from Apify

**3. Data Storage & Completion**
- **Save to Google Sheets**: Stores the scraped profile data in an organized spreadsheet format
- **Update Progress Tracker**: Updates workflow status and progress tracking
- **Process Complete Wait**: Ensures all operations finish before the final steps
- **Send Success Notification**: Alerts users when scraping is successfully completed

## Requirements
**Apify Account**
- Active Apify account with sufficient credits
- API token for authentication
- Access to the LinkedIn Profile Scraper actor

**Google Sheets**
- Google account with Sheets access
- Properly formatted input sheet with LinkedIn URLs
- Credentials configured in n8n

**n8n Setup**
- HTTP Request node credentials for Apify
- Google Sheets node credentials
- Webhook endpoint configured

## How to Use
**Step 1: Prepare Your Data**
- Create a Google Sheet with LinkedIn profile URLs
- Ensure the sheet has a column named 'linkedin_url'
- Add any additional columns for metadata (name, company, etc.)

**Step 2: Configure Credentials**
- Set up Apify API credentials in n8n
- Configure Google Sheets authentication
- Update the webhook endpoint URL

**Step 3: Customize Settings**
- Adjust scraping parameters in the Apify node
- Modify the data fields to extract based on your needs
- Set up notification preferences

**Step 4: Execute Workflow**
- Trigger via webhook or manual execution
- Monitor progress through the workflow
- Check Google Sheets for scraped data
- Review completion notifications

## Good to Know
- **Rate Limits**: LinkedIn scraping is subject to rate limits. The workflow includes delays to respect these limits.
- **Data Quality**: Results depend on profile visibility and LinkedIn's anti-scraping measures.
- **Costs**: Apify charges based on compute units used. Monitor your usage to control costs.
- **Compliance**: Ensure your scraping activities comply with LinkedIn's Terms of Service and applicable laws.
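The "Run Scraper Actor" and "Get Scraped Results" steps correspond to two Apify API calls: start an actor run with the profile URLs as input, then read the run's default dataset. The sketch below is a minimal, assumed version of that flow; the actor ID and the input field name are placeholders that depend on which LinkedIn scraper actor you use.

```python
import os
import time
import requests

APIFY_TOKEN = os.environ["APIFY_TOKEN"]
ACTOR_ID = "your-username~linkedin-profile-scraper"   # placeholder actor ID

# 1. Start the actor run with the LinkedIn URLs read from Google Sheets.
run = requests.post(
    f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs",
    params={"token": APIFY_TOKEN},
    json={"profileUrls": ["https://www.linkedin.com/in/example/"]},  # assumed input schema
    timeout=60,
).json()["data"]

# 2. Poll until the run finishes, then fetch the scraped items from its default dataset.
while run["status"] in ("READY", "RUNNING"):
    time.sleep(10)
    run = requests.get(
        f"https://api.apify.com/v2/actor-runs/{run['id']}", params={"token": APIFY_TOKEN}, timeout=30
    ).json()["data"]

items = requests.get(
    f"https://api.apify.com/v2/datasets/{run['defaultDatasetId']}/items",
    params={"token": APIFY_TOKEN},
    timeout=60,
).json()
print(f"Scraped {len(items)} profiles")
```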
## Customizing This Workflow
**Enhanced Data Processing**
- Add data enrichment steps to append additional information
- Implement duplicate detection and merge logic
- Create data validation rules for quality control

**Advanced Notifications**
- Set up Slack or email alerts for different scenarios
- Create detailed reports with scraping statistics
- Implement error recovery mechanisms

**Integration Options**
- Connect to CRM systems for automatic lead creation
- Integrate with marketing automation platforms
- Export data to analytics tools for further analysis

## Troubleshooting
**Common Issues**
- **Apify Actor Failures**: Check API limits and actor status
- **Google Sheets Errors**: Verify permissions and sheet structure
- **Rate Limiting**: Implement longer delays between requests
- **Data Quality Issues**: Review scraping parameters and target profiles

## Best Practices
- Test with small batches before scaling up
- Monitor Apify credit usage regularly
- Keep backup copies of your data
- Regularly validate the accuracy of scraped information
by Roninimous
This workflow integrates iOS Shortcuts with n8n to create a simple, automatic location-based reminder system. When the user arrives at a specified location, an automation in the Shortcuts app sends a webhook trigger to n8n. If the trigger matches predefined date and time conditions, n8n sends a Telegram message reminder to the user. This is perfect for repetitive weekly tasks like taking out the bins, customized with conditions for day and time.

## Key Features
- **Location-Based Trigger**: Uses iOS Shortcuts automation to start the workflow upon arrival at a specific location.
- **Time and Day Validation**: Logic in n8n checks the current weekday and time to ensure reminders are sent only when appropriate.
- **Telegram Integration**: Sends reminders directly to your Telegram account using your bot.
- **Minimal Setup**: Uses native iOS automation and a simple webhook setup in n8n.

## How It Works
1. **iOS Shortcut Trigger**: When the user arrives at a designated location, the iOS shortcut sends a GET request to the n8n webhook.
2. **n8n Webhook Node**: Receives the request and triggers the workflow.
3. **Conditional Check**: An IF node checks if the current time is after 4:00 PM and it's a Wednesday (or any other configured condition).
4. **Telegram Node**: If the condition passes, n8n sends a message like "Don't forget to take the bins out." to your Telegram bot.

## Setup Instructions
1. **Create a Telegram Bot**: Use @BotFather to create a bot and obtain your bot token. Add Telegram API credentials in n8n with your bot token.
2. **Set up the iOS Shortcut**:
   - Open the Shortcuts app on your iPhone.
   - Go to the Automation tab → Tap + → Create Personal Automation.
   - Choose Arrive → Select a location.
   - Add the action Get Contents of URL. Method: GET, URL: your n8n webhook URL (e.g. https://n8n.yourdomain.com/webhook/your-path).
   - Save the automation. (You can also test the automation by pressing the Play button.)
3. **Import the Workflow into n8n**: Load the provided workflow JSON, set your webhook path and Telegram credentials, and adjust the logic in the IF node to your use case. In my case, I check if today is Wednesday and it is after 4 PM (until midnight).
4. **Expose n8n Publicly**: Ensure your n8n instance is publicly accessible via HTTPS so the shortcut can reach it.

## Customization Guidance
- **Change the Reminder Message**: Modify the text inside the Telegram node to suit different reminders.
- **Add More Conditions**: Extend the logic to support more days, hours, or different trigger messages.
- **Add Multi-Channel Output**: Send reminders via email, SMS, or Slack in addition to Telegram.
- **Use More Triggers**: Expand to other types of shortcut triggers (e.g. NFC tag, leaving a location, time of day).

## Security and Implementation
- **Webhook Protection**: Avoid using easily guessable webhook URLs.
- **Secure Telegram Token**: Store your bot token securely in n8n credentials, not in plain workflow text.
- **Limit Shortcut Scope**: Only trigger the shortcut at trusted locations or with secure iCloud sync.
- **Automation Permissions**: Ensure your iPhone allows shortcut automations to run without confirmation.

## Benefits
- Automates repetitive location-based reminders without user interaction.
- Provides a lightweight, native solution using iOS and n8n with no extra apps.
- Keeps you on track for routine tasks like garbage days, medicine reminders, or arrival-based tasks.
- Easily extendable for multiple locations or trigger conditions.
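The IF node's "Wednesday after 4 PM" check is just a weekday-and-hour comparison on the current time. Here is the same logic as a small Python sketch, assuming the server clock runs in your local timezone; in n8n you would express this with an expression on the IF node rather than code.

```python
from datetime import datetime

def should_send_bin_reminder(now: datetime | None = None) -> bool:
    """True on Wednesdays from 16:00 until midnight (local time)."""
    now = now or datetime.now()
    is_wednesday = now.weekday() == 2   # Monday == 0, so Wednesday == 2
    after_four_pm = now.hour >= 16      # covers 16:00 through 23:59
    return is_wednesday and after_four_pm

if should_send_bin_reminder():
    print("Don't forget to take the bins out.")
```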
by Don Jayamaha Jr
A short-term technical analysis agent for 15-minute candles on Binance Spot Market pairs. It calculates and interprets key trading indicators (RSI, MACD, BBANDS, ADX, SMA/EMA) and returns structured summaries, optimized for Telegram or downstream AI trading agents.

This tool is designed to be triggered by another workflow (such as the Binance SM Financial Analyst Tool or Binance Quant AI Agent) and is not intended for standalone use.

## 🔧 Key Features
- ⏱️ Uses 15-minute kline data (last 100 candles)
- 📈 Calculates: RSI, MACD, Bollinger Bands, SMA/EMA, ADX
- 🧠 Interprets numeric data using GPT-4.1-mini
- 📤 Outputs concise, formatted analysis like:
  - RSI: 72 → Overbought
  - MACD: Cross Up
  - BB: Expanding
  - ADX: 34 → Strong Trend

## 🧠 AI Agent Purpose
> You are a short-term analysis tool for spotting volatility, early breakouts, and scalping setups.

Used by higher agents to determine:
- Entry/exit precision
- Momentum shifts
- Scalping opportunities

## ⚙️ How it Works
1. Triggered externally by another workflow
2. Accepts input: `{ "message": "BTCUSDT", "sessionId": "123456789" }`
3. Sends a POST request to the backend endpoint: https://treasurium.app.n8n.cloud/webhook/15m-indicators
4. Fetches the last 100 candles and calculates the indicators
5. Passes the data to GPT for interpretation
6. Returns a summary with indicator tags for human readability

## 🔗 Dependencies
This tool is triggered by:
- ✅ Binance SM Financial Analyst Tool
- ✅ Binance Spot Market Quant AI Agent

## 🚀 Setup Instructions
1. Import into your n8n instance
2. Make sure the /15m-indicators webhook is active and calculates indicators correctly
3. Connect your OpenAI GPT-4.1-mini credentials
4. Trigger from an upstream agent with a Binance symbol and session ID
5. Ensure all external calls (to Binance + the webhook) are working

## 🧪 Example Use Cases
| Use Case | Result |
| --- | --- |
| Short-term trade decision for ETHUSDT | Receives 15m signal indicators summary |
| Input from Financial Analyst Tool | Returns real-time volatility snapshot |
| Telegram bot asks for "DOGE update" | Returns momentum indicators in 15m view |

🎥 Watch Tutorial:

## 🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

🔗 For support: Don Jayamaha – LinkedIn
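For orientation, the candle data behind the indicators comes from Binance's public klines endpoint. The sketch below pulls the last 100 15-minute candles for a symbol and computes a simplified RSI from the closes; it is illustrative only - the backend webhook's actual indicator math (Wilder smoothing, MACD, BBANDS, ADX) is more involved.

```python
import requests

def fetch_closes(symbol: str = "BTCUSDT", limit: int = 100) -> list[float]:
    """Last `limit` 15-minute candles from Binance Spot; index 4 of each kline is the close price."""
    resp = requests.get(
        "https://api.binance.com/api/v3/klines",
        params={"symbol": symbol, "interval": "15m", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return [float(k[4]) for k in resp.json()]

def simple_rsi(closes: list[float], period: int = 14) -> float:
    """Simplified RSI using plain averages of gains/losses (not Wilder's smoothing)."""
    deltas = [b - a for a, b in zip(closes[-period - 1:-1], closes[-period:])]
    gains = sum(d for d in deltas if d > 0) / period
    losses = sum(-d for d in deltas if d < 0) / period
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100 - 100 / (1 + rs)

closes = fetch_closes("BTCUSDT")
print(f"RSI(14) ≈ {simple_rsi(closes):.1f}")
```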
by Ranjan Dailata
## Who this is for?
This workflow automates Wikipedia data extraction using the Bright Data Web Unlocker, parses and cleans the data, and then sends the results to a specified webhook URL for downstream processing, reporting, or integration.

## What problem is this workflow solving?
- Researchers who need structured information from Wikipedia pages regularly.
- Data Engineers building knowledge bases or enriching datasets with factual data.
- Digital Marketers or Content Writers automating fact-checking or content sourcing.
- Automation Enthusiasts who want to trigger external systems with rich context from Wikipedia.

## What this workflow does
This workflow addresses the challenges of manually retrieving, structuring, and using data from Wikipedia at scale.

**Workflow Breakdown**
1. **Trigger** - Type: scheduled or manual. Purpose: starts the workflow either on a fixed schedule (e.g., daily) or on demand via a manual trigger or incoming webhook.
2. **Bright Data Wikipedia Scraping** - Tool used: Bright Data Web Unlocker. Action: scrape the HTML content of one or multiple Wikipedia article URLs.
3. **Parse & Extract Structured Data** - The Basic LLM Chain node is responsible for producing human-readable content.
4. **Summarization** - Summarizes the Wikipedia content using the Summarization Chain node.
5. **Send to Webhook** - Initiates a webhook notification to the specified URL as part of the "Summary Webhook Notifier" node.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or proxy).
5. Update the Set Wikipedia URL with Bright Data Zone node with the Wikipedia URL and Bright Data zone.
6. Update the Summary Webhook Notifier node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
- **Update the Wikipedia URL**: Replace it with a Wikipedia URL of your interest. Make sure to set the Wikipedia URL as part of the "Set Wikipedia URL with Bright Data Zone" node.
- **Modify the data extraction logic**: Extract the entire article content or just specific sections by extending the "LLM Data Extractor" node prompt.
- **Extend AI summarization**: Extract key bullet points or entities, or create short-form summaries, by extending the "Concise Summary Generator" node.
- **Extend the Summary Webhook Notifier**: Send to Slack, Discord, Telegram, or MS Teams via the webhook notification mechanism, or connect to your internal database/API.
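The final "Send to Webhook" step is a plain HTTP POST of the summary payload. As one downstream example, the snippet below forwards the summary to a Slack incoming webhook; the payload fields are assumptions about what the Summary Webhook Notifier emits, and the webhook URL and article are placeholders.

```python
import os
import requests

# Assumed shape of the summarized result produced by the workflow.
summary_payload = {
    "source_url": "https://en.wikipedia.org/wiki/Workflow",   # placeholder article
    "summary": "A workflow is an orchestrated, repeatable pattern of activity ...",
}

# Slack incoming webhooks accept a simple {"text": ...} body.
slack_webhook_url = os.environ["SLACK_WEBHOOK_URL"]
message = f"*Wikipedia summary* for {summary_payload['source_url']}:\n{summary_payload['summary']}"
requests.post(slack_webhook_url, json={"text": message}, timeout=30).raise_for_status()
```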
by Jimleuk
This n8n template demonstrates how to use AI to compose or "stitch" separate images together to generate a new image that retains the source assets and a consistent style. The use cases are many: try producing storyboard scenes with consistent characters, marketing material with existing product assets, or trying on different articles of fashion!

## Good to know
- At the time of writing, each image generated will cost $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If it says the model was not found, it may not be available in your country or region.

## How it works
1. We import our required assets from our cloud storage using the HTTP node.
2. The images are then converted to base64 strings and aggregated so we can use them with our AI model.
3. Gemini's image generation model is used, which takes all 3 images and a prompt that we define. Our prompt instructs the model on how to compose the final image.
4. Gemini generates a new image but uses the original 3 assets to do so. The consistency with the source images is very high and shows little sign of hallucination!
5. Gemini's output is base64, so we use a "Convert to file" node to convert the data to binary.
6. The final binary image is then uploaded to Google Drive to complete the demonstration.

## How to use
The manual trigger node is used as an example, but feel free to replace this with other triggers such as a webhook or even a form. Technically, you should be able to compose even more images, but of course the generation will take longer and cost more.

## Requirements
- Gemini account for LLM and image generation
- Google Drive for upload

## Customising this workflow
AI image editing can be used for many use cases. Try a popular use case such as virtual try-on for fashion, or applying branding when editing image assets.
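At the API level, the composition step amounts to a single generateContent request that carries the text prompt plus the source images as inline base64 parts, and returns an image part back. The sketch below shows this shape using the Gemini REST API; the model name is a placeholder (use whichever image-capable Gemini model is available in your region), and the response parsing assumes a single returned image.

```python
import base64
import os
import requests

MODEL = "gemini-2.0-flash-exp"   # placeholder: an image-capable Gemini model available in your region
API_KEY = os.environ["GEMINI_API_KEY"]
url = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent?key={API_KEY}"

def image_part(path: str) -> dict:
    """Wrap a local image as an inline base64 part."""
    with open(path, "rb") as f:
        return {"inlineData": {"mimeType": "image/png", "data": base64.b64encode(f.read()).decode()}}

body = {
    "contents": [{
        "parts": [
            {"text": "Compose a single scene that places the character from image 1 "
                     "next to the product from image 2, using the background from image 3."},
            image_part("character.png"),
            image_part("product.png"),
            image_part("background.png"),
        ]
    }],
    "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
}

resp = requests.post(url, json=body, timeout=120)
resp.raise_for_status()
parts = resp.json()["candidates"][0]["content"]["parts"]
image_b64 = next(p["inlineData"]["data"] for p in parts if "inlineData" in p)
with open("composed.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```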