by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant that pairs the symbolic promise of quantum computing with cutting-edge AI capabilities. The agent gives SOC analysts, red teamers, and security researchers rapid threat investigation, operational automation, and intelligent command execution, all driven by GPT-4 and integrated tools, accessible through Telegram or other chat channels.

**Key Features**
- **Expert-Level Cybersecurity Research & Analysis:** Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- **Command & Control:** Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- **Real-Time Web Intelligence:** Uses the integrated Langsearch API for timely, contextually relevant internet research.
- **Calendar & Scheduling Automation:** Manages Google Calendar events (create, update, delete, retrieve), or any similar application, dynamically from chat.
- **Multi-Tool Orchestration:** Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- **Telegram-Native Chatbot:** Delivers an adaptive, memory-informed, interactive conversational experience with immediate typing indicators and high responsiveness.
- **Conversation & Session Management:** Maintains context-aware, session-based memory for smooth multi-turn dialogues with individual users; sends "typing…" indicators during processing for an interactive, user-friendly chat experience; operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- **Execution Intelligence & Safety:** Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly; contains all command executions inside a secure temporary folder to avoid persistent or harmful side effects; enforces strict safety protocols against malicious or destructive commands, maintaining ethical standards and compliance.

**Use Cases**
- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

**Setup Requirements**
- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with a webhook configured to receive and respond to messages (see the sketch below).
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

**Customization & Extensibility**
The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying the interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M, Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
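A minimal sketch of the Telegram webhook registration this setup calls for, using the standard Telegram Bot API methods; the bot token and n8n URL are placeholders, and the webhook path is an assumption for illustration:

```bash
# Point your Telegram bot at the n8n webhook endpoint (placeholder URL).
curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook" \
  --data-urlencode "url=https://your-n8n-host/webhook/quantumdefender"

# Verify the webhook was registered.
curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getWebhookInfo"
```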
by David Olusola
**AI Image Editor with Form Upload + Telegram Delivery**

**Who's it for?**
This workflow is built for content creators, social media managers, designers, and agencies who need fast, AI-powered image editing without the hassle. Whether you're batch-editing for clients or spicing up personal projects, this tool gets it done effortlessly.

**What it does**
A seamless pipeline that:
- Accepts uploads + prompts via a clean form
- Saves images to Google Drive automatically
- Edits images with OpenAI's image API (sketched below)
- Converts results to downloadable PNGs
- Delivers the final image instantly via Telegram

Perfect for AI-enhanced workflows that need speed, structure, and simplicity.

**How it works**
1. User Uploads: Fill out a form with an image + editing prompt
2. Cloud Save: Auto-upload to your Google Drive folder
3. AI Editing: OpenAI processes the image with your prompt
4. Convert & Format: Image saved as PNG
5. Telegram Delivery: Final result sent straight to your chat

**You'll need**
- OpenAI API key
- Google Drive OAuth2 setup
- Telegram bot token & chat ID
- n8n instance (self-hosted or cloud)

**Setup in 4 Easy Steps**
1. Connect APIs: Add OpenAI, Google Drive, and Telegram credentials to n8n. Store keys securely (avoid hardcoding!).
2. Configure Settings: Set the Google Drive folder ID, add the Telegram chat ID, and tweak the image size (default: 1024×1024).
3. Deploy the Form: Add a Webhook Trigger node, test with a sample image, and share the form link with users.
4. Fine-Tune Variables: In the Set node, customize image size, folder path, delivery options, and timeout duration.

**Want to customize more?**
- Image Settings: Change the size (e.g. 512x512 or 2048x2048); update the model when new versions drop.
- Storage: Auto-organize files by date/category; add dynamic file names using n8n expressions.
- Delivery: Swap Telegram for Slack, email, or Discord; add multiple delivery channels; include the image prompt or metadata in messages.
- Form Upgrades: Add fields for advanced editing; validate file types (e.g. PNG/JPEG only); show a progress bar for long edits.
- Advanced Features: Add error handling or retry flows; support batch editing; include approvals or watermarking before delivery.

**Notes & Best Practices**
- Check your OpenAI credit balance.
- Test with different image sizes/types.
- Adjust timeout settings for larger files.
- Always secure your API keys.
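The OpenAI editing step amounts to one multipart request; here is a hedged curl sketch of what the workflow's HTTP call performs. The file name and prompt are made-up examples, and the exact model parameter may differ depending on which image model your account uses:

```bash
# Edit an uploaded image with a text prompt via OpenAI's image edits endpoint.
curl -s https://api.openai.com/v1/images/edits \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F image=@"upload.png" \
  -F prompt="Replace the background with a sunset over the ocean" \
  -F size="1024x1024"
```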
by VKAPS IT
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**How it works**
This workflow captures new lead information from a web form, enriches it with Apollo.io data, qualifies the lead using AI, and, if the lead is strong, automatically sends a personalized outreach email via Gmail and logs the result in Google Sheets.

**Key Features**
- Lead form capture with validation
- Enrichment via the Apollo API (see the sketch below)
- Lead scoring using AI (LangChain + Groq)
- Dynamic email generation & sending via Gmail
- Logging leads with job title & org into Google Sheets
- Conditional email sending (score ≥ 6 only)

**Set up steps**
Estimated time: 15–20 minutes
1. Add your Apollo API key to the HTTP Header credential (never hardcode!)
2. Connect your Gmail account for sending emails
3. Connect your Google Sheets account and set up the correct spreadsheet & sheet name
4. Enable LangChain/Groq credentials for lead scoring and AI-generated emails
5. Update the form endpoint to your live webhook if needed

**Sticky Notes**
Add the following mandatory sticky notes inside your workflow:
- FormTrigger Node: "Collects lead info via form. Ensure your form is connected to this endpoint."
- HTTP Request Node: "Enrich lead using Apollo.io API. Add your API key via header-based authentication."
- AI Agent (Lead Score): "Scores lead from 1-10 based on job title and industry match. Only leads with score ≥ 6 proceed."
- AI Agent (Email Composer): "Generates a concise, polite email using lead's job title & company. Modify tone if needed."
- Google Sheets Append: "Logs enriched lead with job title, org, and LinkedIn URL. Customize sheet structure if needed."
- Gmail Node: "Sends personalized outreach email if lead passes score threshold. Uses AI-generated content."

**Free or Paid?**
Free. No paid API services are required (Apollo has a free tier).
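For reference, the enrichment call is roughly a people-match lookup with header-based key auth. This is a sketch under assumptions: the endpoint path, header name, and request field follow Apollo's published people-match API, but verify them against Apollo's current documentation before wiring the HTTP Request node:

```bash
# Enrich a lead by email (path and field names assumed from Apollo's docs;
# confirm against the current API reference before relying on them).
curl -s -X POST "https://api.apollo.io/api/v1/people/match" \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $APOLLO_API_KEY" \
  -d '{"email": "jane.doe@example.com"}'
```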
by Yang
**Who is this for?**
This template is for sales teams, agencies, or local service providers who want to quickly generate cold outreach lists and automatically call local businesses with a Vapi AI assistant. It's perfect for automating cold calls from scraped local listings with no manual dialing or research.

**What problem is this workflow solving?**
Finding leads and initiating outreach calls can be time-consuming. This workflow automates the process: it scrapes business listings from Google Maps using Dumpling AI, extracts phone numbers, filters out incomplete data, formats the numbers, and uses Vapi to make outbound AI-powered calls. Every call is logged in Google Sheets for follow-up and tracking.

**What this workflow does**
1. Starts manually and pulls search queries (e.g., "plumbers in Austin") from Google Sheets.
2. Sends each query to Dumpling AI's Google Maps scraping endpoint.
3. Splits the returned business data into individual leads.
4. Extracts key info like business name, website, and phone number.
5. Filters to only keep leads with valid phone numbers.
6. Formats phone numbers for Vapi dialing (adds +1).
7. Calls each business using Vapi AI.
8. Logs each successful call in a Google Sheet.

**Setup**

Google Sheets Setup:
- Create a sheet with business search queries in the first column (e.g., best+restaurants+in+Chicago)
- Make sure the tab name is set and authorized in your credentials.
- Connect your Google Sheets account in the Get Search Keywords from Google Sheets node.

Dumpling AI Setup:
- Go to dumplingai.com
- Generate an API key and connect it as a header token in the Scrape Google Map Businesses using Dumpling AI node

Vapi Setup:
- Sign in to Vapi and create an assistant
- Get your assistantId and phoneNumberId
- Insert these into the JSON payload of the Initiate Vapi AI Call to Business node
- Add your Vapi API key to the credentials section

Call Logging:
- Create another tab in your sheet (e.g., "leads") with these headers: company name, phone number, website
- This tab is used in the Log Called Business Info to Sheet node

**How to customize this workflow to your needs**
- Modify the business search terms in your Google Sheet to target specific industries or locations.
- Add filters to exclude certain businesses based on ratings, keywords, or location.
- Update your Vapi assistant script to match the type of outreach or pitch you're using.
- Add additional integrations (e.g., CRM logging, Slack notifications, follow-up emails).
- Change the trigger to run on a schedule or webhook instead of manually.

**Nodes and Functions Breakdown**
- Start Workflow Manually: Initiates the automation manually for testing or controlled runs.
- Get Search Keywords from Google Sheets: Reads search phrases from the spreadsheet.
- Scrape Google Map Businesses using Dumpling AI: Sends each search query to Dumpling AI and receives matching local business data.
- Split Each Business Result: Breaks the returned array of businesses into individual records for processing.
- Extract Business Name, Phone and Website: Extracts title, phone, and website from each business record.
- Filter Valid Phone Numbers Only: Ensures only entries with a phone number move forward.
- Format Phone Number for Calling: Adds a +1 country code and strips non-numeric characters (see the sketch after the notes below).
- Initiate Vapi AI Call to Business: Uses the business name and number to initiate a Vapi AI outbound call.
- Log Called Business Info to Sheet: Appends business details into a Google Sheet for tracking.

**Notes**
- You must have valid API keys and authorized connections for Dumpling AI, Google Sheets, and Vapi.
- Make sure to handle API rate limits if you're running the workflow on large datasets.
- This workflow is optimized for US-based leads (+1 country code); adjust the formatting node if calling internationally.
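The number-formatting step boils down to stripping non-digits and prefixing the US country code; a minimal shell sketch of the same logic (the sample number is made up):

```bash
# Normalize a scraped phone number to +1XXXXXXXXXX for dialing.
raw="(512) 555-0143"
digits=$(printf '%s' "$raw" | tr -cd '0-9')   # keep digits only
echo "+1${digits}"                            # -> +15125550143
```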
by Custom Workflows AI
**Introduction**
This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch, all in a single automated process.

The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial-update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

**Who is this for?**
This workflow is designed for:
- Developers and DevOps engineers who need to automate file updates in GitHub repositories
- Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
- Automation specialists looking to integrate GitHub operations into larger workflows
- Teams using n8n for CI/CD processes who need to push code or configuration changes

Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

**What problem is this workflow solving?**
This workflow addresses several key challenges:
- Limited batch operations: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
- API complexity: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
- Automation bottlenecks: without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
- Consistency issues: when files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they're committed in a single atomic operation.

By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

**What this workflow does**

Overview: the workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:
1. Retrieving the current state of the repository
2. Creating a new tree with the files to be added or updated
3. Creating a new commit with this tree
4. Updating the branch reference to point to the new commit

Process:
1. Initialization: the workflow starts with a manual trigger and sets up GitHub credentials and repository information.
2. File Content Definition: two "Set" nodes define the content for the files to be uploaded.
3. Repository State Retrieval: the workflow fetches the latest commit SHA for the specified branch, then retrieves the base tree SHA from this commit.
4. Tree Creation: a new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
5. Commit Creation: a new commit is created with the specified commit message, referencing the new tree and the parent commit.
6. Branch Update: finally, the branch reference is updated to point to the new commit, making the changes visible in the repository. (A curl sketch of this API sequence follows the customization notes below.)

**Setup**
To use this workflow:
1. Import the workflow: download the workflow JSON and import it into your n8n instance.
2. Create a GitHub Personal Access Token: go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens, and create a new token with "Contents" permission (Read and write) for your target repository.
3. Configure the workflow: update the "Set Github Info" node with your GitHub Personal Access Token, your GitHub username, your repository name, the target branch (default is "main"), and a commit message.
4. Define file content: modify the "File 1" and "File 2" nodes with the content you want to upload.
5. Adjust file paths if needed: in the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
6. Save and run the workflow: click "Test workflow" to execute the process.

**How to customize this workflow to your needs**
- Add more files: create additional "Set" nodes for more file content, and add more tree entries in the "Create new tree" node following the same pattern (path, mode, type, content).
- Change file locations: modify the "path" parameters in the "Create new tree" node to place files in different directories.
- Dynamic file content: replace the static content in the "File" nodes with data from other sources; use previous nodes or HTTP requests to generate file content dynamically.
- Conditional file updates: add IF nodes to determine which files should be updated based on certain conditions, and create separate branches in your workflow for different update scenarios.
- Scheduled updates: replace the manual trigger with a Schedule node to run the workflow at specific intervals, or combine it with other triggers like Webhook or database events to push files when certain events occur.
- Error handling: add Error Trigger nodes to handle potential API failures, and implement notification nodes to alert you of successful pushes or failures.
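For orientation, the five Git Data API calls the workflow chains together can be reproduced with curl. This is a sketch, not the workflow's exact node configuration: owner, repo, branch, file contents, and the commit message are placeholders, and it assumes curl and jq are available:

```bash
OWNER=you REPO=demo BRANCH=main
AUTH="Authorization: Bearer $GITHUB_TOKEN"
API="https://api.github.com/repos/$OWNER/$REPO"

# 1. Latest commit SHA on the branch.
HEAD_SHA=$(curl -s -H "$AUTH" "$API/git/ref/heads/$BRANCH" | jq -r '.object.sha')

# 2. Base tree SHA from that commit.
BASE_TREE=$(curl -s -H "$AUTH" "$API/git/commits/$HEAD_SHA" | jq -r '.tree.sha')

# 3. Create a new tree containing both files on top of the base tree.
TREE_SHA=$(curl -s -H "$AUTH" -X POST "$API/git/trees" -d '{
  "base_tree": "'"$BASE_TREE"'",
  "tree": [
    {"path": "file1.txt", "mode": "100644", "type": "blob", "content": "hello"},
    {"path": "file2.txt", "mode": "100644", "type": "blob", "content": "world"}
  ]
}' | jq -r '.sha')

# 4. Commit the tree, with the old head as parent.
COMMIT_SHA=$(curl -s -H "$AUTH" -X POST "$API/git/commits" -d '{
  "message": "Update two files in one commit",
  "tree": "'"$TREE_SHA"'",
  "parents": ["'"$HEAD_SHA"'"]
}' | jq -r '.sha')

# 5. Move the branch to the new commit (an atomic, fast-forward update).
curl -s -H "$AUTH" -X PATCH "$API/git/refs/heads/$BRANCH" \
  -d '{"sha": "'"$COMMIT_SHA"'"}'
```

Because only step 5 changes anything visible, a failure in steps 1-4 leaves the repository untouched, which is what makes the multi-file update atomic.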
by Oneclick AI Squad
This n8n workflow automates the process of scraping LinkedIn profiles using the Apify platform and organizing the extracted data into Google Sheets for easy analysis and follow-up.

**Use Cases**
- **Lead Generation:** Extract contact information and professional details from LinkedIn profiles
- **Recruitment:** Gather candidate information for talent acquisition
- **Market Research:** Analyze professional networks and industry connections
- **Sales Prospecting:** Build targeted prospect lists with detailed professional information

**How It Works**

1. Workflow Initialization & Input
- **Webhook Start Scraper:** Triggers the entire scraping workflow
- **Read LinkedIn URLs:** Retrieves LinkedIn profile URLs from Google Sheets
- **Schedule Scraper Trigger:** Sets up automated scheduling for regular scraping

2. Data Processing & Extraction
- **Data Formatting:** Prepares and structures the LinkedIn URLs for processing
- **Fetch Profile Data:** Makes HTTP requests to the Apify API with profile URLs
- **Run Scraper Actor:** Executes the Apify LinkedIn scraper actor
- **Get Scraped Results:** Retrieves the extracted profile data from Apify

3. Data Storage & Completion
- **Save to Google Sheets:** Stores the scraped profile data in organized spreadsheet format
- **Update Progress Tracker:** Updates workflow status and progress tracking
- **Process Complete Wait:** Ensures all operations finish before final steps
- **Send Success Notification:** Alerts users when scraping is successfully completed

**Requirements**

Apify Account:
- Active Apify account with sufficient credits
- API token for authentication
- Access to the LinkedIn Profile Scraper actor

Google Sheets:
- Google account with Sheets access
- Properly formatted input sheet with LinkedIn URLs
- Credentials configured in n8n

n8n Setup:
- HTTP Request node credentials for Apify
- Google Sheets node credentials
- Webhook endpoint configured

**How to Use**

Step 1: Prepare Your Data
- Create a Google Sheet with LinkedIn profile URLs
- Ensure the sheet has a column named 'linkedin_url'
- Add any additional columns for metadata (name, company, etc.)

Step 2: Configure Credentials
- Set up Apify API credentials in n8n
- Configure Google Sheets authentication
- Update the webhook endpoint URL

Step 3: Customize Settings
- Adjust scraping parameters in the Apify node (see the curl sketch after this section)
- Modify the data fields to extract based on your needs
- Set up notification preferences

Step 4: Execute Workflow
- Trigger via webhook or manual execution
- Monitor progress through the workflow
- Check Google Sheets for scraped data
- Review completion notifications

**Good to Know**
- **Rate Limits:** LinkedIn scraping is subject to rate limits. The workflow includes delays to respect these limits.
- **Data Quality:** Results depend on profile visibility and LinkedIn's anti-scraping measures.
- **Costs:** Apify charges based on compute units used. Monitor your usage to control costs.
- **Compliance:** Ensure your scraping activities comply with LinkedIn's Terms of Service and applicable laws.

**Customizing This Workflow**

Enhanced Data Processing:
- Add data enrichment steps to append additional information
- Implement duplicate detection and merge logic
- Create data validation rules for quality control

Advanced Notifications:
- Set up Slack or email alerts for different scenarios
- Create detailed reports with scraping statistics
- Implement error recovery mechanisms

Integration Options:
- Connect to CRM systems for automatic lead creation
- Integrate with marketing automation platforms
- Export data to analytics tools for further analysis

**Troubleshooting**

Common Issues:
- **Apify Actor Failures:** Check API limits and actor status
- **Google Sheets Errors:** Verify permissions and sheet structure
- **Rate Limiting:** Implement longer delays between requests
- **Data Quality Issues:** Review scraping parameters and target profiles

**Best Practices**
- Test with small batches before scaling up
- Monitor Apify credit usage regularly
- Keep backup copies of your data
- Regularly validate the accuracy of scraped information
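The calls behind Fetch Profile Data and Get Scraped Results follow Apify's standard run-and-collect pattern; a hedged curl sketch is below. The actor ID is a placeholder for whichever LinkedIn scraper actor you use, and the `profileUrls` input field is an assumption, since input schemas vary per actor:

```bash
# Start an actor run with the profile URLs as input (actor ID is a placeholder).
RUN=$(curl -s -X POST \
  "https://api.apify.com/v2/acts/<ACTOR_ID>/runs?token=$APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"profileUrls": ["https://www.linkedin.com/in/example/"]}')
DATASET_ID=$(echo "$RUN" | jq -r '.data.defaultDatasetId')

# After the run finishes, fetch the scraped items as JSON.
curl -s "https://api.apify.com/v2/datasets/$DATASET_ID/items?token=$APIFY_TOKEN"
```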
by Niranjan G
**Automated GitHub Scanner for Exposed AWS IAM Keys**

**Overview**
This n8n workflow automatically scans GitHub for exposed AWS IAM access keys associated with your AWS account, helping security teams quickly identify and respond to potential security breaches. When compromised keys are found, the workflow generates detailed security reports and sends Slack notifications with actionable remediation steps.

**Key Features**
- **Automated AWS IAM Key Scanning:** Regularly checks for exposed AWS access keys on GitHub
- **Real-time Security Alerts:** Sends immediate Slack notifications when compromised keys are detected
- **Comprehensive Security Reports:** Generates detailed reports with exposure information and risk assessment
- **Actionable Remediation Steps:** Provides clear instructions for securing compromised credentials
- **Continuous Monitoring:** Maintains ongoing surveillance of your AWS environment

**Workflow Steps**
1. List AWS Users: Retrieves all users from your AWS account
2. Split Users for Processing: Processes each user individually
3. Get User Access Keys: Retrieves access keys for each user
4. Filter Active Keys Only: Focuses only on currently active access keys
5. Search GitHub for Exposed Keys: Scans GitHub repositories for exposed access keys
6. Aggregate Search Results: Consolidates and deduplicates search findings
7. Check For Compromised Keys: Determines if any keys have been exposed
8. Generate Security Report: Creates detailed security reports for compromised keys
9. Extract AWS Usernames: Extracts usernames from the AWS response for notification
10. Format Slack Alert: Prepares comprehensive Slack notifications
11. Send Slack Notification: Delivers alerts with actionable information
12. Continue Scanning: Maintains the continuous monitoring cycle

**Setup Requirements**

Prerequisites:
- Active n8n instance
- AWS account with IAM permissions
- GitHub account/token for searching repositories
- Slack workspace for notifications

Required Credentials:
- AWS: IAM user with permissions to list users and access keys; Access Key ID and Secret Access Key
- GitHub: Personal Access Token with search permissions
- Slack: Webhook URL for your notification channel

**Configuration**
- AWS: Configure the "List AWS Users" node with your AWS credentials and ensure proper IAM permissions for listing users and access keys
- GitHub: Set up the "Search GitHub for Exposed Keys" node with your GitHub token and adjust search parameters if needed
- Slack: Configure the Slack node with your webhook URL and customize the notification format if desired

**Usage**

Running the Workflow:
- Manual Execution: Click "Execute Workflow" to run an immediate scan
- Scheduled Execution: Set up a schedule to run periodic scans (daily or weekly recommended)

Repository Compatibility: This workflow is compatible with both public and private GitHub repositories to which you have access. It will scan all repositories you have permission to view based on your GitHub credentials.

Handling Alerts: When a compromised key is detected:
1. Review the Slack notification for details about the exposure
2. Follow the recommended remediation steps:
- Deactivate the compromised key immediately
- Create a new key if needed
- Investigate the exposure source
- Update any services using the compromised key

**Disclaimer**
This workflow template is provided for reference purposes only to demonstrate how to automate AWS IAM key exposure scanning. Please note:
- The scanning process may produce false positives, as it only matches potential AWS access key patterns
- Always verify any reported exposures manually before taking action
- Disabling or deleting access keys without proper verification could have significant negative impacts on your environment
- Understand which systems and applications rely on identified access keys before deactivating them
- This template should be customized to fit your specific environment and security policies

IMPORTANT: Use this workflow with caution and only after thoroughly understanding your AWS environment. The authors of this template are not responsible for any disruptions or damages resulting from its use.

**Security Considerations**
- This workflow requires access to sensitive AWS credentials
- Store all credentials securely within n8n
- Review and rotate access keys regularly

**Customization Options**
- Adjust GitHub search parameters for more targeted scanning
- Customize the Slack notification format and content
- Modify security report generation for your specific needs
- Integrate with additional notification channels (email, MS Teams, etc.)

**Optional: Enabling Interactive Slack Buttons**
The Slack Block Kit notification format supports interactive buttons that can be implemented if you want to perform actions directly from Slack:
- Disable Key: can be configured to automatically disable the compromised AWS IAM access key
- View Details: can be set up to show additional information about the exposure
- Acknowledge: can be used to mark the alert as acknowledged

To make these buttons functional:
1. Set up a Slack Socket Mode app: create a Slack app in the Slack API Console, enable Socket Mode and Interactive Components, and subscribe to the block_actions event to capture button clicks.
2. Create an n8n webhook endpoint: add a new Webhook node to receive Slack button click events and create separate workflows for each button action.
3. Implement AWS key disabling: for the "Disable Key" button, create a workflow that uses the n8n HTTP Request node to call the AWS IAM UpdateAccessKey API (a CLI sketch of the same operation follows below). Example HTTP request that can be implemented in n8n:
- Method: POST
- URL: https://iam.amazonaws.com/
- Query parameters: Action: UpdateAccessKey, AccessKeyId: AKIAIOSFODNN7EXAMPLE, Status: Inactive, UserName: {{$json.username}}, Version: 2010-05-08
4. Update the Slack message format: modify the Format Slack Alert node to include your webhook URL in the button action values, and add callback_id and action_id values to identify which button was clicked.

This implementation allows for immediate response to security incidents directly from the Slack interface, reducing response time and improving security posture.
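The same UpdateAccessKey operation is easiest to verify from the AWS CLI before wiring it into an n8n HTTP Request node; a sketch using AWS's documented example key ID (substitute the real key ID and user name):

```bash
# Deactivate a compromised access key (key ID and user name are placeholders).
aws iam update-access-key \
  --access-key-id AKIAIOSFODNN7EXAMPLE \
  --status Inactive \
  --user-name alice

# Confirm the key now reports Status: Inactive.
aws iam list-access-keys --user-name alice
```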
by Jimleuk
This n8n workflow demonstrates how to automate the often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by utilising a company's knowledgebase to generate question-and-answer pairs using Large Language Models.

**How it works**
1. A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
2. Our first AI agent scans and extracts all questions from the document into list form.
3. The supplier sets up an OpenAI assistant beforehand, preloaded with company brand, marketing, and technical documents.
4. The workflow loops through each of the buyer's questions and poses them to the OpenAI assistant (a curl sketch of this call follows below).
5. The assistant's answers are captured until all questions are satisfied, then exported into a new document for review.
6. A sales team member can then use this document to respond quickly to the RFP before their competitors.

**Example Webhook Request**

    curl --location 'https://<n8n_webhook_url>' \
    --form 'id="RFP001"' \
    --form 'title="BlueChip Travel and StarBus Web Services"' \
    --form 'reply_to="jim@example.com"' \
    --form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'

**Requirements**
- An OpenAI account to use AI services.

**Customising the workflow**
OpenAI assistants are only one approach to hosting a company knowledgebase for AI to use. Exploring different solutions, such as building your own RAG-powered database, can sometimes yield better results in terms of cost and control over how the data is managed.
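Each loop iteration amounts to posing one extracted question to the preloaded assistant. A hedged sketch of the underlying call, assuming the v2 Assistants API at the time of writing; the assistant ID and question are placeholders:

```bash
# Ask the company-knowledgebase assistant one RFP question in a fresh thread.
curl -s https://api.openai.com/v1/threads/runs \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "assistant_id": "asst_XXXXXXXX",
    "thread": {
      "messages": [
        {"role": "user", "content": "Does your service include 24/7 support?"}
      ]
    }
  }'
```

The workflow then polls the run until it completes and reads the assistant's reply from the thread messages before moving to the next question.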
by Jay Hartley
**What this template does**
This workflow collects order data as it is produced, then sends a summary email of all orders at the end of every day, formatted as a table.
1. It receives new orders via webhook and stores them in Airtable.
2. At 7PM every day, it sends a summary email with the day's orders in an HTML table.

**Setup**
Instructions Video
1. Create a new table in Airtable and give it a field 'time' with type date, 'orderID' with type number, and 'orderPrice' also with type number.
2. Create a new access token if you haven't already at https://airtable.com/create/tokens/new. Make sure to give the token the scopes data.records:read, data.records:write, and schema.bases:read, plus access to whichever table you choose to store the orders. A pop-up window appears with the token. Use this token in Create New Credential > Access Token for Airtable in the Store Order and Airtable Get Today's Orders nodes.
3. Create access credentials for your Gmail as described here: https://developers.google.com/workspace/guides/create-credentials. Use the credentials from your client_secret.json in the Send to Gmail node.
4. In the Store Order node, change Base and Table to the base and table in your Airtable account you wish to use to store orders. Make sure to use these same values in the Airtable Get Today's Orders node (a curl sketch of the equivalent Airtable query follows the test steps below).
5. Every time an order is created in your system, send a POST request to the Webhook from your order software. Each request must contain a single order with the fields 'orderID' and 'orderPrice' (or edit Set Order Fields to select which incoming fields you wish to save).
6. Change the schedule time for sending the email from Everyday at 7PM to whichever time you choose.

**Test**
1. Activate the workflow.
2. From the Webhook node, copy the Production URL.
3. Send the following curl request to the URL given to you:

    curl -X POST -H "Content-Type: application/json" -d '{"orderID": 12345, "orderPrice": 99.99}' YOUR_URL_HERE

It should say "Node executed successfully". Now check your Airtable and confirm the order was stored in the right place.
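For reference, the Airtable Get Today's Orders step is roughly equivalent to a list request filtered to the current day. This is a sketch under assumptions: the base ID and table name are placeholders, and the formula assumes the 'time' field from the setup above:

```bash
# List today's orders from Airtable using filterByFormula on the time field.
curl -s "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Orders" \
  -H "Authorization: Bearer $AIRTABLE_TOKEN" \
  --get --data-urlencode "filterByFormula=IS_SAME({time}, TODAY(), 'day')"
```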
by Don Jayamaha Jr
A short-term technical analysis agent for 15-minute candles on Binance Spot Market pairs. It calculates and interprets key trading indicators (RSI, MACD, BBANDS, ADX, SMA/EMA) and returns structured summaries, optimized for Telegram or downstream AI trading agents. This tool is designed to be triggered by another workflow (such as the Binance SM Financial Analyst Tool or Binance Quant AI Agent) and is not intended for standalone use.

**Key Features**
- Uses 15-minute kline data (last 100 candles)
- Calculates: RSI, MACD, Bollinger Bands, SMA/EMA, ADX
- Interprets numeric data using GPT-4.1-mini
- Outputs concise, formatted analysis like:
  - RSI: 72 → Overbought
  - MACD: Cross Up
  - BB: Expanding
  - ADX: 34 → Strong Trend

**AI Agent Purpose**
> You are a short-term analysis tool for spotting volatility, early breakouts, and scalping setups.

Used by higher agents to determine:
- Entry/exit precision
- Momentum shifts
- Scalping opportunities

**How it Works**
1. Triggered externally by another workflow
2. Accepts input: { "message": "BTCUSDT", "sessionId": "123456789" }
3. Sends a POST request to the backend endpoint: https://treasurium.app.n8n.cloud/webhook/15m-indicators
4. Fetches the last 100 candles and calculates indicators (see the sketch below)
5. Passes the data to GPT for interpretation
6. Returns a summary with indicator tags for human readability

**Dependencies**
This tool is triggered by:
- Binance SM Financial Analyst Tool
- Binance Spot Market Quant AI Agent

**Setup Instructions**
1. Import into your n8n instance
2. Make sure the /15m-indicators webhook is active and calculates indicators correctly
3. Connect your OpenAI GPT-4.1-mini credentials
4. Trigger from the upstream agent with a Binance symbol and session ID
5. Ensure all external calls (to Binance + the webhook) are working

**Example Use Cases**

| Use Case | Result |
| --- | --- |
| Short-term trade decision for ETHUSDT | Receives 15m signal indicators summary |
| Input from Financial Analyst Tool | Returns real-time volatility snapshot |
| Telegram bot asks for "DOGE update" | Returns momentum indicators in 15m view |

Watch Tutorial:

**Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.
For support: Don Jayamaha (LinkedIn)
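The candle fetch behind step 4 corresponds to Binance's public klines endpoint, which needs no API key for market data; a minimal sketch:

```bash
# Pull the last 100 15-minute candles for BTCUSDT from Binance Spot.
curl -s "https://api.binance.com/api/v3/klines?symbol=BTCUSDT&interval=15m&limit=100"
# Each element: [openTime, open, high, low, close, volume, closeTime, ...]
```

The indicator math (RSI, MACD, BBANDS, SMA/EMA, ADX) is then computed over the close prices of these candles inside the /15m-indicators backend.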
by Don Jayamaha Jr
Evaluate Tesla (TSLA) price action and market structure on the 1-hour timeframe using six real-time indicators. This sub-agent is designed to feed mid-term technical insights into the Tesla Financial Market Data Analyst Tool. It uses GPT-4.1 to interpret Alpha Vantage indicator data delivered via secure webhooks.

⚠️ This workflow is not standalone and is executed via Execute Workflow.

Requires:
- Tesla Quant Technical Indicators Webhooks Tool
- Alpha Vantage Premium API Key

**Connected Indicators**
This tool fetches and analyzes the latest 20 datapoints for:
- RSI (Relative Strength Index)
- MACD (Moving Average Convergence Divergence)
- BBANDS (Bollinger Bands)
- SMA (Simple Moving Average)
- EMA (Exponential Moving Average)
- ADX (Average Directional Index)

**Sample Output**

    {
      "summary": "TSLA is gaining strength on the 1-hour chart. RSI is rising, MACD has crossed bullish, and BBANDS are widening.",
      "timeframe": "1h",
      "indicators": {
        "RSI": 62.1,
        "BBANDS": { "upper": 176.90, "lower": 169.70, "middle": 173.30, "close": 176.30 },
        "SMA": 174.20,
        "EMA": 175.60,
        "ADX": 27.5,
        "MACD": { "macd": 0.84, "signal": 0.65, "histogram": 0.19 }
      }
    }

**Agent Components**

| Component | Role |
| --- | --- |
| 1hour Data | Pulls Alpha Vantage indicator data via webhook |
| Tesla 1hour Indicators Agent | Interprets signals using a structured GPT-4.1 prompt |
| OpenAI Chat Model | GPT-4.1 LLM performs the analysis |
| Simple Memory | Maintains session context |

**Setup Instructions**
1. Import the workflow into n8n and name it: Tesla_1hour_Indicators_Tool
2. Install the webhook fetcher tool. Required: Tesla_Quant_Technical_Indicators_Webhooks_Tool. This agent expects the /1hourData webhook to return pre-cleaned data.
3. Add credentials: Alpha Vantage Premium API Key (via HTTP Query Auth) and OpenAI GPT-4.1 credentials.
4. Configure for sub-agent use: triggered only via Execute Workflow from the Tesla Financial Market Data Analyst Tool. Inputs: message (optional) and sessionId (required for memory linkage).

**Sticky Notes Overview**
- Trigger Setup: activated only by the parent agent
- 1h Webhook Fetcher: calls Alpha Vantage via a secured endpoint (see the sketch below)
- AI Agent Summary: interprets trend/momentum from indicator data
- GPT Model Notes: GPT-4.1 parses and explains technical alignment
- Documentation Sticky: embedded in the canvas with a full walkthrough

**Licensing & Support**
© 2025 Treasurium Capital Limited Company. This tool is part of a proprietary multi-agent AI architecture. No commercial reuse or redistribution permitted.
Author: Don Jayamaha
Templates: https://n8n.io/creators/don-the-gem-dealer/

Detect TSLA trend shifts and validate setups with 1-hour technical clarity, powered by Alpha Vantage + GPT-4.1. This tool is required by the Tesla Financial Market Data Analyst Tool.
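Each webhook ultimately wraps an Alpha Vantage technical-indicator query. A sketch of the RSI call for the 1-hour timeframe; the parameter values mirror common defaults and may differ from the webhook tool's actual configuration:

```bash
# Fetch hourly RSI for TSLA from Alpha Vantage (premium key assumed for this cadence).
curl -s "https://www.alphavantage.co/query" --get \
  --data-urlencode "function=RSI" \
  --data-urlencode "symbol=TSLA" \
  --data-urlencode "interval=60min" \
  --data-urlencode "time_period=14" \
  --data-urlencode "series_type=close" \
  --data-urlencode "apikey=$ALPHA_VANTAGE_KEY"
```

The MACD, BBANDS, SMA, EMA, and ADX calls follow the same pattern with a different function parameter.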
by Akash Kankariya
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Overview**
This n8n workflow template automates the process of monitoring Instagram comments and sending predefined responses based on specific comment keywords. It integrates Instagram's Graph API with Google Sheets to manage comment responses and maintains an interaction log for customer relationship management (CRM) purposes.

**Workflow Components**
The workflow consists of 9 main nodes organized into two primary sections:

Section 1: Webhook Verification
- Get Verification (Webhook node)
- Respond to Verification Message (Respond to Webhook node)

Section 2: Auto Comment Response
- Insta Update (Webhook node)
- Check if update is of comment? (Switch node)
- Comment if of other user (If node)
- Comment List (Google Sheets node)
- Send Message for Comment (HTTP Request node)
- Add Interaction in Sheet (CRM) (Google Sheets node)

**Prerequisites and Setup Requirements**

1. Meta/Facebook Developer Setup

Create a Facebook App:
- Navigate to Facebook Developers
- Click "Create App" and select the "Business" type
- Configure the following products: Instagram Graph API, Facebook Login for Business, Webhooks

Required Permissions (configure in your Meta app):

| Permission | Purpose |
| --- | --- |
| instagram_basic | Read Instagram account profile info and media |
| instagram_manage_comments | Create, delete, and manage comments |
| instagram_manage_messages | Send and receive Instagram messages |
| pages_show_list | Access connected Facebook pages |

Access Token Generation:
- Use Facebook's Graph API Explorer
- Generate a User Access Token with the required permissions
- Important: tokens expire periodically and need refreshing

2. Webhook Configuration

Set up the webhook URL:
- In the Meta App Dashboard, navigate to Products → Webhooks
- Subscribe to the Instagram object
- Configure the webhook URL: your-n8n-domain/webhook/instagram
- Set a verification token (use "test" or create a secure token)
- Select webhook fields: comments (for comment notifications) and messages (for DM notifications, if needed)

Webhook verification process (handled automatically by the workflow, and sketched below):
1. Meta sends a GET request with a hub.challenge parameter
2. The workflow responds with the challenge value to confirm the subscription

3. Google Sheets Setup

Example: https://docs.google.com/spreadsheets/d/1ONPKJZOpQTSxbasVcCB7oBjbZcCyAm9gZ-UNPoXM21A/edit?usp=sharing

Create a response management sheet with the following structure:

Sheet 1, Comment Responses:

| Column | Description | Example |
| --- | --- | --- |
| Comment | Trigger keywords | "auto", "info", "help" |
| Message | Corresponding response message | "Thanks for your comment! We'll get back to you soon." |

Sheet 2, Interaction Log:

| Column | Description | Purpose |
| --- | --- | --- |
| Time | Timestamp of interaction | Track when interactions occur |
| User Id | Instagram user ID | Identify unique users |
| Username | Instagram username | Human-readable identification |
| Note | Additional notes or error messages | Debugging and analytics |

Built by akash@codescale.tech
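You can confirm the verification leg before pointing Meta at the endpoint by simulating Meta's handshake yourself; the workflow should echo back the hub.challenge value verbatim. The domain, token, and challenge value below are placeholders:

```bash
# Simulate Meta's webhook verification GET against your n8n endpoint.
curl -s "https://your-n8n-domain/webhook/instagram?hub.mode=subscribe&hub.verify_token=test&hub.challenge=1158201444"
# Expected response body: 1158201444
```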