by Ferenc Erb
**Overview**

An automation workflow that creates a complete REST API for digitally signing PDF documents using n8n webhooks. This service demonstrates how to implement secure document signing through standardized API endpoints with file upload and download capabilities.

**Use Case**

This workflow is designed for developers and automation specialists who need to implement digital document signing. It's particularly useful for:
- Integrating PDF signing capabilities into existing document workflows
- API-based automation of signature processes
- Creating proof-of-concept implementations for document verification systems
- Learning n8n's webhook capabilities and file handling techniques
- Testing PDF signing in development environments before production implementation

**What This Workflow Does**

API-Based Document Management:
- Exposes RESTful webhook endpoints for all document operations
- Handles multipart/form-data uploads for PDF documents
- Processes JSON payloads for signing configuration
- Provides download functionality for completed documents

Digital Certificate Handling:
- Uploads existing PFX/PKCS#12 digital certificates
- Generates new certificates with customizable attributes
- Securely manages certificate storage and access
- Associates certificates with signing operations

Cryptographic PDF Signing:
- Applies digital signatures using industry-standard cryptographic methods
- Embeds signature information within the PDF document structure
- Validates document integrity through cryptographic verification
- Preserves the original document while adding signature elements

Webhook Integration System:
- Routes different API methods to appropriate handlers
- Validates request payloads and file content
- Manages authentication through webhook paths
- Returns structured responses for integration with other systems

**Technical Architecture**

Components:
- API Gateway: n8n webhook nodes that receive external requests
- Request Router: Switch nodes that direct operations based on method parameters
- Document Processor: Function nodes for PDF manipulation and verification
- Certificate Manager: Specialized nodes for cryptographic key operations
- Storage Interface: File operation nodes for document persistence
- Response Formatter: Nodes that structure API responses

Integration Flow:
Client Request → Webhook Endpoint → Method Router → Processing Engine → Digital Signing → Storage → Response Generation → Client Response

**Setup Instructions**

Prerequisites:
- n8n installation (minimum version 0.214.0)
- Node.js 14 or higher
- Required environment variable: `NODE_FUNCTION_ALLOW_EXTERNAL: "node-forge,@signpdf/signpdf,@signpdf/signer-p12,@signpdf/placeholder-plain"`

Configuration Steps:
1. Import Workflow: Import the workflow JSON into your n8n instance and activate the workflow to enable the webhooks.
2. Configure Storage: Set the storage path variables in the workflow and ensure proper permissions on the storage directories.
3. Test API Endpoints: Use the included test scripts to verify functionality; test PDF upload, certificate generation, and signing.
4. Integration: Document the webhook URLs for integration with other systems and configure error handling according to your requirements.

**Testing Methods**

Test the workflow functionality using various HTTP requests and JSON data:
- Upload PDF documents to the document processing endpoint
- Upload or generate digital certificates
- Execute PDF signing operations
- Download signed documents from the download endpoint

**Webhook Endpoints**

The workflow exposes two primary webhook endpoints that form a complete API for PDF digital signing operations.

1. Document Processing Endpoint (`/webhook/docu-digi-sign`) handles all document and certificate operations:
- Upload PDF: HTTP POST, Content-Type: multipart/form-data; parameters: method, uploadType, fileName, fileData
- Upload Certificate: HTTP POST, Content-Type: multipart/form-data; parameters: method, uploadType, fileName, fileData
- Generate Certificate: HTTP POST, Content-Type: application/json; parameters: method, subjectCN, issuerCN, serialNumber, validFrom, validTo, password
- Sign PDF: HTTP POST, Content-Type: application/json; parameters: method, inputPdf, pfxFile, pfxPassword (see the request sketch below)

2. Document Download Endpoint (`/webhook/docu-download`) handles the retrieval of processed documents:
- Download Signed PDF: HTTP GET, Content-Type: application/json; parameters: method, fileType, fileName

**Key Workflow Sections**

The workflow is organized into logical sections with clear responsibilities:
- **Request Processing**: Parses incoming webhook data
- **Method Routing**: Directs requests to appropriate handlers
- **Document Management**: Handles file operations and storage
- **Cryptographic Operations**: Manages signing and certificate functions
- **Response Formatting**: Structures and returns results
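As a quick smoke test, here is a minimal sketch of a Sign PDF request against the document processing endpoint. The endpoint path and parameter names come from the list above; the host, file names, and the exact `method` value are assumptions to adapt to your setup.

```javascript
// Minimal sketch: sign a previously uploaded PDF via the Document
// Processing Endpoint. Requires Node 18+ for the global fetch.
const N8N_HOST = "http://localhost:5678"; // hypothetical instance URL

async function signPdf() {
  const res = await fetch(`${N8N_HOST}/webhook/docu-digi-sign`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      method: "signPdf",   // assumed value for the Sign PDF route
      inputPdf: "input.pdf",
      pfxFile: "cert.pfx",
      pfxPassword: "changeit",
    }),
  });
  console.log(await res.json());
}

signPdf().catch(console.error);
```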
by Inderjeet Bhambra
**Who is this for?**

This workflow is designed for travel bloggers, content creators, social media managers, and anyone who wants to transform their travel photos into engaging written narratives. It's perfect for travelers looking to create compelling stories from their photo collections without spending hours crafting content manually, families wanting to document memorable trips, and digital nomads who need to produce travel content efficiently.

**What problem is this workflow solving?**

Converting travel photos into engaging stories is time-consuming and requires both creative writing skills and the ability to analyze visual content meaningfully. This workflow solves the challenge of:
- Transforming visual memories into compelling written narratives
- Organizing photos chronologically to create logical story flow
- Generating professional-quality travel content without writing expertise
- Analyzing photo content to extract meaningful themes and emotions
- Creating day-by-day structured narratives from unorganized photo collections
- Reducing the time spent on manual content creation for travel documentation

**What this workflow does**

This AI-powered photo storyteller takes your travel photos and automatically generates immersive, first-person travel narratives. The workflow:
- Accepts multiple photos through a webhook endpoint
- Uses the OpenAI Vision API (GPT-4o) to analyze each photo's content, emotions, and themes
- Automatically organizes photos chronologically by date and timestamp
- Groups photos by travel days and extracts daily themes
- Leverages GPT-4.1 (minimum required) to craft engaging, first-person travel stories with creative day titles
- Generates structured narratives with sensory details, cultural observations, and emotional insights
- Outputs JSON-formatted content ready for downstream formatting
- Creates a day-by-day story structure with memorable moments and reflective conclusions

**Setup**

Required credentials:
- OpenAI API key configured in n8n for both the Vision Analysis and Story Generation nodes
- Ensure you have sufficient OpenAI credits for image analysis and text generation

Webhook configuration:
- The workflow creates a webhook endpoint at `/tripteller-upload`
- Configure your photo upload interface to POST a photos array to this endpoint
- Photos should be sent as base64-encoded data with filename and metadata (see the sketch below)

Photo requirements:
- Supported formats: standard image formats (JPEG, PNG, etc.)
- Photos should include timestamp metadata for chronological organization

**Caution**: Do not upload all photos at once. Start with a small number of photos, like 5 at a time.

**How to customize this workflow to your needs**

Story style customization:
- Modify the system prompt in the "Generate Travel Story" node to adjust writing tone (nostalgic, adventurous, poetic, etc.)
- Customize the story structure by editing the output format requirements
- Add specific cultural or geographical context prompts for location-specific storytelling

Photo analysis enhancement:
- Adjust the Vision Analysis node prompt to focus on specific elements (architecture, food, people, landscapes)
- Modify the grouping logic in the "Group Photos by Day" node for different time-based organization
- Add location extraction from EXIF data for geographical context

Output format adjustment:
- Customize the final response structure in the "Format Final Response" node
- Add integration with publishing platforms (blog APIs, social media, etc.)
- Include additional metadata like location tags, travel duration, or trip statistics

Performance optimization:
- Adjust the execution timeout based on your typical photo volume
- Modify the parallel processing approach for large photo collections
- Add progress tracking for longer processing workflows
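Here is a minimal client sketch for the upload step. The `/tripteller-upload` path comes from the setup above; the payload field names (`photos`, `filename`, `data`, `takenAt`) are assumptions to match to your upload interface and the webhook node's expectations.

```javascript
// Minimal sketch: POST a small batch of base64-encoded photos, with
// timestamp metadata, to the workflow's webhook. Node 18+ for fetch.
const fs = require("node:fs");

async function uploadPhotos(paths) {
  const photos = paths.map((p) => ({
    filename: p,
    data: fs.readFileSync(p).toString("base64"),   // base64-encoded image
    takenAt: fs.statSync(p).mtime.toISOString(),   // timestamp metadata
  }));

  const res = await fetch("http://localhost:5678/webhook/tripteller-upload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ photos }),
  });
  console.log(await res.json());
}

// Start small: five photos or fewer per run, per the caution above.
uploadPhotos(["day1-beach.jpg", "day1-market.jpg"]).catch(console.error);
```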
by Jimleuk
This n8n template shows you how to create an MCP server out of your existing n8n workflows. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools.

Designing agent tools for outcome rather than utility has long been a recommended practice of mine, and it applies well to building MCP servers; in short, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers! This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage, and run these workflows indirectly.

**How it works**
- An MCP trigger is used and attaches 4 custom workflow tools to discover and manage existing workflows, and 1 custom workflow tool to execute them.
- We introduce the idea of "available" workflows which the agent is allowed to use. This helps limit and avoid issues that arise when trying to use every workflow, such as clashes or non-production workflows.
- The n8n node is a core node which taps into your n8n instance API and can retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp", and these are exposed through the tool "search workflows" (see the sketch below).
- Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add workflow", "remove workflow", and "list workflows". The agent should be able to manage this autonomously.
- Our approach to letting the agent execute workflows is the Subworkflow trigger. The tricky part was figuring out the input schema for each, but this was eventually solved by pulling this information out of the workflow's template JSON and adding it to the "available" workflow's description.
- To pass parameters through the Subworkflow trigger, we use the passthrough method: incoming data is used when parameters are not explicitly set within the node.
- When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows are preferred when answering queries or completing tasks.

**How to use**
- First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can use all workflows or filter in other ways.
- Next, ensure these workflows have Subworkflow triggers with an input schema set. This is how the MCP server will run them.
- Set the MCP server to "active", which turns on production mode and makes it available at the production URL.
- Use this production URL in your MCP client. For Claude Desktop, see the instructions here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- There is a small learning curve which will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, i.e. research and report, rather than just a collection of unrelated tools.

**Requirements**
- n8n API key to filter for selected workflows
- n8n workflows with Subworkflow triggers!
- Redis for memory and tracking the "available" workflows
- MCP client or agent for usage, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**
- If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the executeTool to use HTTP requests for webhooks.
- Managing available workflows helps if you have many workflows where some may be too similar for the agent. If this isn't a problem for you, however, feel free to remove the concept of "available" and let the agent discover and use all workflows!
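For a sense of what the "search workflows" tool does under the hood, here is a minimal sketch querying the n8n public API for workflows carrying the "mcp" tag. It assumes the v1 REST API with an `X-N8N-API-KEY` header; the host is a placeholder.

```javascript
// Minimal sketch: list workflows by tag via the n8n public API, which is
// roughly what the "search workflows" tool exposes to the agent.
const N8N_HOST = "http://localhost:5678"; // hypothetical instance URL

async function searchWorkflows(tag = "mcp") {
  const res = await fetch(`${N8N_HOST}/api/v1/workflows?tags=${tag}`, {
    headers: { "X-N8N-API-KEY": process.env.N8N_API_KEY },
  });
  const { data } = await res.json();
  // Name and id are enough for the agent to "discover" a workflow.
  return data.map((wf) => ({ id: wf.id, name: wf.name }));
}

searchWorkflows().then(console.log).catch(console.error);
```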
by Ranjan Dailata
**Disclaimer**

This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

**Who is this for?**

The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted to disk.

This workflow is tailored for:
- Data analysts who require structured LinkedIn datasets for analytics and reporting
- Marketing and sales teams looking to enrich lead databases, track company updates, and identify market trends
- Recruiters and talent acquisition specialists who want to automate candidate sourcing and company research
- AI developers integrating real-time professional data into intelligent applications
- Business intelligence teams needing current and comprehensive LinkedIn data to drive strategic decisions

**What problem is this workflow solving?**

Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow provides:
- Reliable web scraping using Bright Data MCP Server LinkedIn tools
- LinkedIn person and company scraping with AI agents set up with the Bright Data MCP Server tools
- Data extraction and transformation with the Google Gemini LLM
- Persistence of the LinkedIn person and company info to disk
- A webhook notification with the LinkedIn person and company info

**What this workflow does**

This n8n workflow performs the following steps:
1. Trigger: start manually.
2. Input URL(s): specify the LinkedIn person and company URLs.
3. Web scraping (Bright Data): use Bright Data's MCP Server LinkedIn tools to extract the person and company data.
4. Data transformation & aggregation: use the Google Gemini LLM to handle the data transformation.
5. Store / output: save the results to disk and send a webhook notification.

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**
- Make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
- Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
- In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
- In n8n, configure the MCP Client (STDIO) credentials to connect with the Bright Data MCP Server. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>.
- Update the LinkedIn person and company URLs in the workflow.
- Update the Webhook HTTP Request node with the webhook endpoint of your choice (see the sketch below).
- Update the file name and path for persisting to disk.

**How to customize this workflow to your needs**
- Different inputs: instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Data extraction: modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- Outputs: update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
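As an illustration of the final notification step, here is a minimal sketch of the HTTP call the Webhook HTTP Request node performs. The URL and the payload field names are placeholders; the real shape is whatever your LinkedIn Data Extractor prompt produces.

```javascript
// Minimal sketch: POST the Gemini-transformed person and company records
// to a notification endpoint of your choice. Node 18+ for fetch.
async function notify(person, company) {
  await fetch("https://example.com/linkedin-results", { // placeholder URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      person,
      company,
      scrapedAt: new Date().toISOString(),
    }),
  });
}

notify(
  { name: "Jane Doe", headline: "CTO" },       // illustrative person record
  { name: "Acme Corp", industry: "Software" }  // illustrative company record
).catch(console.error);
```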
by n8n Team
This workflow sends a message to a Discord channel when a new row is added or an existing row is updated in a Google Sheet. The message includes all data rows from the Google Sheet.

**Prerequisites**
- Discord account and Discord credentials
- Google account and Google credentials

**How it works**

Using a Code node, we take the Google Sheet data and build a custom message to send to Discord (a sketch of such a node appears below). The message is sent to the Discord channel specified in the Discord node.

**Setup**

This workflow requires that you set up a Discord webhook and have an existing Google Sheet with data. See how to set up a Discord webhook here.
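A minimal sketch of such a Code node, assuming illustrative column names ("Name", "Status"); substitute your sheet's actual headers:

```javascript
// Code node sketch: turn incoming Google Sheets rows into one
// Discord-friendly message string.
const rows = $input.all().map((item) => item.json);

const lines = rows.map(
  (row, i) => `${i + 1}. **${row.Name ?? "?"}** - ${row.Status ?? "?"}`
);

return [
  {
    json: {
      content: `Sheet updated (${rows.length} rows):\n${lines.join("\n")}`,
    },
  },
];
```

The Discord node can then reference `{{ $json.content }}` as its message text.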
by Agniva Mahata
**How it works**

1. **Trigger**: The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").
2. **Action routing**: A Switch node directs the workflow based on the action query parameter received from the webhook (sketched below, after the setup steps). This determines which part of the book creation process will be executed.
3. **Data retrieval**: The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.
4. **AI processing**:
   - Book details generation: If the action is "Generate Book Details," an AI agent (powered by a large language model (LLM) like Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
   - Chapter generation: If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and previous research. A structured output parser ensures the chapter data is in the correct format. The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
   - Chapter research generation: If the action is "Generate Chapter Research," another AI agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
   - Chapter content generation: If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.
5. **Airtable updates**: In each of the AI processing steps, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

**Set up steps**

Airtable setup (estimated time: 10-15 minutes):
1. Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This creates the "Book" and "Chapter" tables with the necessary fields.
2. In the "Book" table, create an Airtable automation:
   - Trigger: When a record matches conditions → Action is "Generate Book Details"
   - Action: Run a script. Use the following script:

```javascript
let autoRoute = input.config();
await fetch(
  autoRoute.webhookUrl +
    "?recordId=" + autoRoute.recordId +
    "&action=" + autoRoute.action
);
```

   - In the script action's configuration, add three input variables:
     - webhookUrl (map it to your n8n webhook URL, obtained in the next step)
     - recordId (map it to the Airtable record ID)
     - action (map it to Action)
3. Repeat this process to create two more automations in the "Book" table, identical except triggered when Action is "Generate Chapters".
4. In the "Chapter" table, create two Airtable automations:
   - Trigger: When a record matches conditions → Action is "Generate Chapter Research"; Action: Run a script (use the same script as above, with the same input variables).
   - Create a second automation, identical except triggered when Action is "Generate Chapter Content".

n8n setup (estimated time: 15-20 minutes):
1. Import the provided JSON workflow into n8n.
2. Webhook node: Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: once you've tested and are ready to go live, switch to the "Production URL."
3. Airtable nodes: Configure all eight Airtable nodes. You'll need to connect your Airtable account using OAuth 2. Select the correct base ("Book Agency [v1] Cobuild" or whatever you named it) and table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
4. LLM nodes (Google Gemini & OpenAI): Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
5. Perplexity nodes: Connect your Perplexity AI API key to the Perplexity nodes.
6. Activate the workflow.

Testing (estimated time: 5-10 minutes):
1. Go to your Airtable "Book" table.
2. Create a new record and fill in the "Idea" field with a book concept.
3. Change the "Action" field to "Generate Book Details". The Airtable automation should trigger, sending a request to your n8n webhook.
4. Monitor the n8n execution log to see the workflow in action.
5. Check the Airtable record to see if the "Research" field is populated.
6. Repeat the testing for "Generate Chapters", "Generate Chapter Research", and "Generate Chapter Content".
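To make the routing concrete, here is a minimal sketch of the branch logic the Switch node implements, written as it would look in an n8n Code node. The action names come from the automations above; the route labels are illustrative.

```javascript
// Code node sketch: the webhook receives ?recordId=...&action=..., and
// the workflow branches on the action value. The Webhook node exposes
// query parameters under $json.query.
const { recordId, action } = $json.query;

const routes = {
  "Generate Book Details": "bookDetails",
  "Generate Chapters": "chapters",
  "Generate Chapter Research": "chapterResearch",
  "Generate Chapter Content": "chapterContent",
};

return [{ json: { recordId, route: routes[action] ?? "unknown" } }];
```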
by Ayoub
**Who is this for?**

This workflow is ideal for developers, content creators, or customer support teams looking to automate text-to-speech conversion using OpenAI.

**What problem does this solve?**

It automates the process of converting text inputs into speech, reducing manual effort and enhancing productivity.

**What this workflow does**

This workflow triggers when a text input is received via a webhook, converts it into audio using the OpenAI API, and sends the generated speech back through a webhook response.

**Setup**
- Ensure you have an OpenAI API key (you can get one from the OpenAI website).
- Set up the webhook URL and parameters.
- Configure the OpenAI node with your API key (Create New Credentials).
- Set up the Respond to Webhook node.
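A minimal client-side sketch for testing, assuming a hypothetical webhook path and a `text` field; match both to your Webhook node configuration:

```javascript
// Minimal sketch: send text to the workflow's webhook and save the
// speech it returns. Node 18+ for fetch.
const fs = require("node:fs");

async function speak(text) {
  const res = await fetch("http://localhost:5678/webhook/text-to-speech", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  // Assumes the Respond to Webhook node returns binary audio.
  fs.writeFileSync("speech.mp3", Buffer.from(await res.arrayBuffer()));
}

speak("Hello from n8n!").catch(console.error);
```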
by Mal Chia
**Who’s it for**

This workflow is perfect for HR teams, recruiters, or hiring managers who collect applicant information via a web form and want to automatically forward both candidate details and attached resumes into a dedicated Telegram channel or group. It replaces manual email checks, speeding up review and collaboration.

**How it works**
- On form submission: A Form Trigger node captures all applicant fields (name, age, WhatsApp number, education, desired role, availability date, expected salary, resume file, and additional comments).
- Date & Time: Formats the "fastest start date" into a human-readable string.
- Edit Fields: A Set node renames and reshapes incoming JSON into clear key/value pairs.
- If Have Resume: An If node routes submissions with an attached resume to one branch (sending both info and document) and submissions without a resume to a simpler info-only branch.
- Merge: Combines branches so both message types terminate in a single unified flow.
- Send a Resume & Send a Info: Two Telegram nodes post Markdown-formatted messages (and the PDF resume when available) to your specified Telegram chat.

**How to set up**
1. Install and enable the n8n-nodes-base.formTrigger and n8n-nodes-base.telegram community nodes (preview).
2. Copy this JSON into your n8n instance (Workflow → Import from clipboard).
3. Create environment variables for credentials:
   - TELEGRAM_BOT_TOKEN
   - TELEGRAM_CHAT_ID
4. In each Telegram node, reference these variables instead of hard-coding ({{$env.TELEGRAM_BOT_TOKEN}}, {{$env.TELEGRAM_CHAT_ID}}).

**Requirements**
- n8n version ≥ 0.200.0
- Community nodes: Form Trigger, Telegram
- A Telegram bot with chat permissions
- A hosted form endpoint or embedded form at path /mmc-newjob

**How to customize the workflow**
- Form fields: Edit the **Form Trigger** node's formFields.values to add or remove fields.
- Telegram formatting: Tweak captions under **Send a Resume** and **Send a Info** to adjust the MarkdownV2 styling (see the escaping sketch below).
- Conditional logic: Modify the **If Have Resume** node to branch on other criteria (e.g., education level).
- Styling: Update the customCss section in **Form Trigger** to match your brand's look.

**Good to know**
- Community nodes may be in preview; test thoroughly before production.
- Webhook URLs change when you rename the workflow; revisit your form's embed or webhook settings after renaming.
- Consider adding an Error Trigger node to capture failures and notify your team.
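Since Telegram's MarkdownV2 mode rejects unescaped reserved characters (a common cause of 400 errors when applicant data contains `.`, `-`, or parentheses), a small escaping helper in a Code node before the Telegram nodes can help. A minimal sketch; the reserved-character set is from Telegram's Bot API documentation:

```javascript
// Escape all characters Telegram's MarkdownV2 mode treats as reserved
// in literal text.
function escapeMarkdownV2(text) {
  return String(text).replace(/[_*[\]()~`>#+\-=|{}.!]/g, "\\$&");
}

// Example: names and salary strings often contain reserved characters.
const caption = `*New applicant:* ${escapeMarkdownV2("John O'Neil (Jr.)")}`;
console.log(caption); // *New applicant:* John O'Neil \(Jr\.\)
```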
by Jimleuk
This n8n template allows you to use AI to generate logos or images which mimic the visual style of other logos or images. The model used to generate the images is Google's Imagen 3.0.

With this template, users can automate design and marketing tasks such as creating variants of existing designs, remixing existing assets to validate different styles, and exploring a range of designs which would previously have been too expensive and time-consuming.

**How it works**
- A form trigger is used to capture the source image to reference styles from and a prompt for the target image to generate.
- The source image is passed to Gemini 2.0 to be analysed, and its visual style and tone are extracted as a detailed description.
- This visual style description is then combined with the user's initial target image prompt (see the sketch below).
- This final prompt is given to Imagen 3.0 to generate the images.
- A quick webpage is put together with the generated images to present back to the user.
- If the user provided an email address, a copy of this HTML page will be sent.

**How to use**
- Ensure the workflow is live to share the form publicly.
- The source image must be accessible to your n8n instance: either a public image on the internet or one within your network.
- For best results, select a source image with a strong visual identity, as this allows the LLM to describe it better.
- For your prompt, refer to the Imagen prompt guide found here: https://ai.google.dev/gemini-api/docs/image-generation#imagen-prompt-guide

**Requirements**
- Gemini for the LLM and Imagen model
- Cloudinary for image CDN
- Gmail for email sending

**Customising this workflow**
- Feel free to swap any of these out for tools and services you prefer.
- Want to fully automate? Switch the form trigger for a webhook trigger!
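A minimal sketch of the prompt-combination step as a Code node; the input field names (`styleDescription`, `userPrompt`) are assumptions to map to your actual Gemini and form-trigger outputs:

```javascript
// Code node sketch: merge the style description extracted by Gemini
// with the user's target prompt before handing it to Imagen.
const styleDescription = $json.styleDescription; // from the Gemini analysis step
const userPrompt = $json.userPrompt;             // from the form trigger

return [
  {
    json: {
      finalPrompt:
        `${userPrompt}. Render it in the following visual style: ` +
        `${styleDescription}. Keep the composition clean and brand-like.`,
    },
  },
];
```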
by Belgacem Dhiflaoui
**What Problem Does This Solve?**

This workflow automates the end-to-end process of capturing company information from Google Drive, storing it semantically in Pinecone, and interacting with users via an intelligent AI chatbot. It eliminates the need for manual customer service, lead tracking, and company information retrieval, offering a fully automated, intelligent engagement system.

Perfect for teams that need to:
- Maintain accurate, AI-readable company knowledge bases
- Answer customer inquiries 24/7 using AI
- Automatically collect and log lead information
- Embed a chatbot into their website to assist potential customers

Target audience: sales teams, business owners, marketing departments, customer support reps, startup founders, or anyone looking to automate AI-powered lead generation and customer engagement.

**What Does It Do?**

Part One: Knowledge Ingestion
- **Monitors** a Google Drive folder for new .txt or document uploads
- **Downloads** the document and splits the content into manageable chunks using a recursive character splitter
- **Generates** embeddings via OpenAI
- **Stores** the embeddings in a Pinecone vector database under the Q&A namespace
- **Purpose**: this knowledge base is later used to answer business-related questions through AI (see the sketch below for what these steps look like in code)

Part Two: AI Chatbot Engagement
- **Listens** for incoming chat messages using n8n's chatTrigger node
- **Activates an AI agent** (powered by GPT-4o) to respond to inquiries regarding business hours, services, products, or general company info
- **Retrieves knowledge** using a vector search tool connected to Pinecone (newCompany_q)
- **Captures leads**: if a user shows interest, the AI collects their name, email, phone number, and specific interest, and stores them in a connected Google Sheet automatically

**Key Features**
- 🔄 Google Drive integration for real-time file processing
- 🧠 OpenAI embedding + Pinecone vector store for semantic memory
- 🤖 LangChain agent with tool-based reasoning
- 🗃️ Google Sheets integration for dynamic lead storage
- 💬 GPT-4o model for accurate, human-like conversation
- ⚙️ Modular design to expand into CRM, Notion, or email workflows
- 🌐 Website-ready chatbot endpoint

🧰 **Setup Instructions**

Prerequisites:
- n8n instance (cloud or self-hosted)
- Google Drive account (for uploading company data)
- Pinecone account (for vector storage)
- OpenAI API key
- Google Sheets access with OAuth2 credentials

📦 Installation steps:
1. Import the workflow: upload the JSON files into your n8n instance.
2. Configure credentials: in n8n > Credentials, connect Google Drive, OpenAI, Pinecone, and Google Sheets.
3. Set the Pinecone index & namespace. Example: Index: companyName, Namespace: Q&A
4. Test the flow: upload a sample .txt or PDF file to the monitored Drive folder, send a message to the chatbot (e.g., "What are your opening hours?"), and check the Google Sheet for collected user info.

**How It Works (Behind the Scenes)**

Part 1 – Data preparation:
1. Company files are uploaded to Google Drive.
2. The file is detected, downloaded, and chunked.
3. Embeddings are created using OpenAI.
4. Data is stored in Pinecone for semantic retrieval.

Part 2 – Chat interaction:
1. A chat message triggers the workflow via webhook.
2. The AI agent interprets the intent and accesses company data via newCompany_q.
3. If lead data is gathered, it is appended to a Google Sheet using the AI-parsed values.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
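For readers curious what the ingestion nodes correspond to in code, here is a minimal standalone sketch using the LangChain JS packages (an assumption; n8n's own nodes wrap equivalent logic). The index name mirrors the example above. Run as an ES module on Node 18+ with PINECONE_API_KEY and OPENAI_API_KEY set.

```javascript
// Minimal sketch: split text, embed with OpenAI, store in Pinecone
// under the "Q&A" namespace.
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { Pinecone } from "@pinecone-database/pinecone";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // characters per chunk; tune to your documents
  chunkOverlap: 200, // overlap preserves context across chunk boundaries
});

const docs = await splitter.createDocuments([
  "Opening hours: Mon-Fri 9am-6pm. We ship worldwide...", // sample company text
]);

const pinecone = new Pinecone(); // reads PINECONE_API_KEY from the environment
await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  pineconeIndex: pinecone.index("companyName"),
  namespace: "Q&A",
});
```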
by Don Jayamaha Jr
📈 Get daily and on-demand Tesla (TSLA) trading signals via Telegram, powered by GPT-4.1 and real-time market data. This is the central AI supervisor that orchestrates seven sub-agents for technical analysis, price pattern recognition, and news sentiment. Reports are delivered in structured, Telegram-ready HTML, optimized for traders seeking fast, intelligent decision-making signals.

⚠️ This master agent requires 7 connected sub-workflows to function. One of them, the News & Sentiment Agent, also requires a DeepSeek Chat API key for language processing.

🔌 **Required Sub-Workflows**

You must download and publish the following workflows:
1. Tesla Financial Market Data Analyst Tool
2. Tesla News and Sentiment Analyst Tool (requires DeepSeek Chat API key)
3. Tesla 15min Indicators Tool
4. Tesla 1hour Indicators Tool
5. Tesla 1day Indicators Tool
6. Tesla 1hour & 1day Klines Tool
7. Tesla Quant Technical Indicators Webhooks Tool (requires Alpha Vantage Premium API key)

📍 See all tools at: 🔗 https://n8n.io/creators/don-the-gem-dealer/

🔍 **What This Agent Does**
- Listens to your trading query via Telegram
- Calls the Financial Analyst and News & Sentiment Analyst, which aggregate: RSI, MACD, BBANDS, SMA, EMA, ADX; candlestick pattern + volume divergence analysis; news summaries and sentiment scoring via DeepSeek Chat
- GPT-4.1 composes the final structured TSLA trade report with: spot and leverage setups, signal rationale, confidence score, sentiment tag, and news summary

🧠 **Output Example**

TSLA Trading Report (Daily Summary)

Spot Trade
• Action: Buy
• Entry: 172.45
• TP: 182.00
• SL: 169.80
• Signal: RSI bounce + Bullish Engulfing
• Sentiment: Neutral

Leveraged Position
• Position: Long
• Leverage: 3x
• TP: 186
• SL: 170
• Confidence: High (83/100)

📰 Top News
• Tesla Model Y delivery surge – Electrek
• Options market pricing in upside – Bloomberg
• FSD delayed in Canada – TeslaNorth

🛠️ **Setup Instructions**
1. Import all 8 workflows: ensure all sub-agents above are published in your n8n instance.
2. Create your Telegram bot: use @BotFather to generate the token and connect it to the trigger/send nodes.
3. Connect OpenAI GPT-4.1: add your OpenAI credentials for GPT-4.1 in the designated node.
4. Add your DeepSeek Chat API key: sign up at https://deepseek.com and insert your DeepSeek Chat credentials in the News Agent.
5. Add your Alpha Vantage Premium API key: sign up at https://www.alphavantage.co/premium/ and use HTTP Header Auth for the webhook-based indicator fetchers.
6. Replace the Telegram ID: update the placeholder <<replace your ID here>> with your actual Telegram numeric ID in the auth node.

📌 **Included Sticky Notes**
✅ Telegram Bot Setup
✅ Agent Routing & Memory
✅ Financial vs. Sentiment Trigger Flow
✅ Report Formatting (HTML)
✅ API Requirements (GPT-4.1, DeepSeek, Alpha Vantage)
✅ Troubleshooting & Licensing

🧾 **Licensing & Attribution**

© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: LinkedIn – Don Jayamaha

🚀 Deploy the Tesla Quant Trading AI system with GPT-4.1, DeepSeek Chat, and Alpha Vantage Premium, right into Telegram. All 8 workflows are required.

🎥 Tesla Quant AI Agent – Live Demo: experience the power of the Tesla Quant Trading AI Agent in action.
by David Ashby
Complete MCP server exposing all Oura Tool operations to AI agents. Zero configuration needed: all 4 operations are pre-built.

⚡ **Quick Setup**

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 **How it Works**
• MCP Trigger: serves as your server endpoint for AI agent requests
• Tool Nodes: pre-configured for every Oura Tool operation
• AI Expressions: automatically populate parameters via $fromAI() placeholders
• Native Integration: uses the official n8n Oura Tool node with full error handling

📋 **Available Operations (4 total)**

Every possible Oura Tool operation is included:

🔧 Profile (1 operation)
• Get a profile

🔧 Summary (3 operations)
• Get activity summary
• Get readiness summary
• Get sleep summary

🤖 **AI Integration**

Parameter handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response format: native Oura Tool API responses with full data structure
Error handling: built-in n8n error management and retry logic

💡 **Usage Examples**

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: add the MCP server URL to its configuration
• Custom AI apps: use the MCP URL as a tool endpoint
• Other n8n workflows: call MCP tools from any workflow
• API integration: direct HTTP calls to MCP endpoints

✨ **Benefits**
• Complete coverage: every Oura Tool operation available
• Zero setup: no parameter mapping or configuration needed
• AI-ready: built-in $fromAI() expressions for all parameters
• Production-ready: native n8n error handling and logging
• Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.