by Don Jayamaha Jr
📰 This AI-powered agent performs real-time sentiment analysis on Tesla (TSLA) news to support trading decisions. It aggregates headlines from 5 trusted sources and uses DeepSeek Chat to classify sentiment and generate structured summaries. This tool is a critical sub-agent in the broader Tesla Quant Trading AI Agent system.

⚠️ Not standalone — this agent is designed to be executed by the Tesla Quant Trading AI Agent.
⚙️ Requires: DeepSeek Chat API Key

🔌 Workflow Role

This tool processes Tesla-related news and produces output like:

```json
{
  "sentiment": "bullish",
  "summary": "Tesla stock rallied today after strong delivery numbers and Cybertruck updates. Analysts remain optimistic.",
  "topHeadlines": [
    "Tesla beats Q2 delivery forecast – Yahoo Finance",
    "Cybertruck ramps up in Texas – Electrek",
    "Berlin Gigafactory expands battery production – CleanTechnica"
  ]
}
```

Its output feeds directly into the master trading agent's final trade report.

📰 News Sources Used

This agent collects real-time headlines from:

- Google News (filtered by "Tesla" or "TSLA")
- Yahoo Finance (TSLA-specific feed)
- Electrek (Tesla archive)
- CleanTechnica (Tesla sustainability news)
- TeslaNorth (app/product release updates)

These five tools are always queried together to ensure market-wide signal coverage.

🤖 What the Agent Does

- Pulls headlines from all 5 Tesla-specific RSS feeds
- Uses DeepSeek Chat to:
  - Analyze narrative tone (bullish / bearish / neutral)
  - Identify macro/financial drivers
  - Generate a 2–3 sentence summary
  - Return top 3–5 headlines
- Outputs structured JSON for downstream use

🛠️ Setup Instructions

1. Install & Name
   Import this file and name it: Tesla_News_and_Sentiment_Analyst_Tool
2. Add DeepSeek API Credentials
   Go to: Credentials → Add New → DeepSeek API. Save as: DeepSeek account
3. Internet Access Required
   Ensure RSS feeds can fetch live headlines. Works best with a cloud-hosted n8n instance or tunnel-enabled local install.
4. Must Be Triggered by Parent
   Triggered via Execute Workflow by the Tesla Quant Trading AI Agent. Requires these inputs:
   - message: optional query context
   - sessionId: passed to maintain short-term memory across executions

🧠 Agent Architecture

| Node Name                        | Function                                         |
| -------------------------------- | ------------------------------------------------ |
| DeepSeek Chat Model              | Performs AI-based sentiment analysis              |
| Tesla News and Sentiment Analyst | Combines results, formats output in strict JSON   |
| Simple Memory                    | Stores session-level context (short-term memory)  |
| 5x RSS nodes                     | Aggregate Tesla news from trusted media outlets   |

📌 Sticky Notes Included

- 🟢 Trigger from Parent Workflow – Executed only by main TSLA agent
- 🟠 News Feeds Overview – Lists and explains each of the 5 feeds
- 🧠 DeepSeek Chat Notes – Describes LLM behavior and parsing role
- 🔵 Short-Term Memory – Buffers sentiment context during user session
- 📘 Sentiment Analyst Agent – Summarizes key responsibilities

📎 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. This architecture, workflow structure, and prompt design are licensed for educational and operational use only. Commercial resale or rebranding prohibited without authorization.

🔗 Creator: Don Jayamaha
🔗 Templates: https://n8n.io/creators/don-the-gem-dealer/

🚀 Power your TSLA trading with AI-driven sentiment—built with DeepSeek Chat and 5 trusted news sources. This tool is required by the Tesla Quant Trading AI Agent.
by Jimleuk
This n8n template demonstrates one approach to customer authentication via chat agents. Unlike approaches where you have to authenticate users before they interact with the agent, this approach allows guest users to authenticate at any time during the session or not at all.

Note about security: this template is for illustration purposes only and requires much more work to be ready for production!

How it works

- A conversational agent is used for this demonstration.
- The key component is the Redis node just after the chat trigger, which acts as the session context. For guests, the session item is blank; for customers, the session item is populated with their customer profile.
- The agent is instructed to generate a unique login URL only for guests, when appropriate or upon request.
- This login URL redirects the guest user to a simple n8n form, also hosted in this template. The login URL carries the current sessionId as a query parameter as the way to pass this data to the form (see the sketch at the end of this template).
- Once login is successful, the matching session item (by sessionId) is populated with the customer profile. The user can now return to the chat window.
- Back at the agent, when the user sends their next message, the Redis node picks up the session item and the customer profile associated with it. The system prompt is updated with this data, which lets the agent know the user is now a customer.

How to use

- You'll need to update the "auth URL" tool to match the URL of your n8n instance. Better yet, copy the production URL of your form from the trigger.
- Activate the workflow to turn on production mode, which is required for this workflow.
- Implement the authentication logic in step 3. This could be sending the username and password to a PostgreSQL database for validation.

Requirements

- OpenAI for LLM (feel free to swap to any provider)
- Redis for cache/sessions (again, feel free to swap this out for PostgreSQL or another database)

Customising this workflow

- Consider not populating the session item with the user data, as it can become stale. Instead, just add the userId and instruct the agent to query for the rest using tools.
- Extend the login URL idea by experimenting with signup URLs or single-use URLs.
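For orientation, here is a minimal sketch of the "generate login URL" step as n8n Code-node JavaScript. The base URL and form path are placeholders, not the template's exact values:

```js
// Sketch of the "generate login URL" tool: the chat sessionId is passed
// to the login form as a query parameter, so the form execution can
// populate the matching session item in Redis after a successful login.
// BASE_URL and the form path are placeholders for your n8n instance.
const BASE_URL = 'https://your-n8n.example.com';
const sessionId = $json.sessionId; // provided by the chat trigger

const loginUrl = `${BASE_URL}/form/login?sessionId=${encodeURIComponent(sessionId)}`;

return [{ json: { loginUrl } }];
```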
by Oliver Bardenheier
🛠️ Setup Guide: 'Get OVH Invoices to Google Sheets'

Author: Oliver Bardenheier

Who is this for?

This workflow is for all users who have services (domains, bare metal, VPS, cloud, etc.) with the provider OVH.com (European API). It automatically retrieves invoice data and files and puts the data in a Google Spreadsheet for further processing.

What problem is this workflow solving? / Use case

Currently, invoices from OVH do not come as a mail attachment; the mail only contains a link, so the receiver has to be logged in to the OVH account to download the file. That is even more effort if one is using 2FA. This workflow retrieves all information through the OAuth2 token instead.

What this workflow does

This workflow automatically retrieves invoice data and files from your OVH.com account and puts the data in a Google Spreadsheet for further processing. It also saves each invoice PDF to a certain (yearly) folder in your Google Drive.

Setup

- Make a copy of this Google Sheet template.
- Set the timeframe for the query to your liking in "Query Latest OVH Invoices". You could set an email trigger before it and make the frame only one day.
- Log into your OVH account and get your credentials here.
- Authenticate using OAuth2 Authorization Code: "Login with OVHcloud SSO". You need to authorize the OVHcloud API console. If this worked fine, you'll see a green text: "Access Token Received".
- Head over to the OVH API console to get your token.
- Set up header auth in the HTTP nodes:
  - Authentication = Generic Credential Type
  - Generic Auth Type = Header Auth
  - Header Auth = your OVH header credentials:
    - a.) In every API call in the console you'll find a curl example; just take the data from the line including: -H "authorization: Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"
    - b.) Create a new credential in n8n for the header auth. Put authorization in the 'name' field and copy your token, including Bearer, into the value field: 'Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......' (a sample call using this header is sketched at the end of this guide).

How to customize this workflow to your needs

- You can put in a mail trigger that activates on every incoming invoice mail from OVH.
- Adjust the timeframe to get invoices from a certain time period, or remove the time variables completely to get ALL invoices.
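For illustration, the authenticated request the HTTP node makes looks roughly like this, shown as plain JavaScript. The /1.0/me/bill path is OVH's European billing endpoint as I understand it, and the token value is of course a placeholder:

```js
// Sketch: list invoice IDs from the OVH EU API using the console token.
// The token value is a placeholder; paste your own "Bearer eyJh..." string.
const response = await fetch('https://eu.api.ovh.com/1.0/me/bill', {
  headers: { authorization: 'Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......' },
});

const invoiceIds = await response.json(); // e.g. ["FR1234567", "FR1234568"]
```

Each invoice ID can then be fetched individually (e.g. /1.0/me/bill/{billId}) for details such as the PDF link.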
by Airtop
Automating Person Data Enrichment and CRM Update

Use Case

This automation enriches a person's professional profile using their name and work email, scores them against an ICP (Ideal Customer Profile), and updates their record in HubSpot. It's ideal for sales, marketing, and recruitment teams needing reliable contact insights.

What This Automation Does

This automation performs the following using the input parameters:

- **Person name**: The full name of the individual.
- **Work email**: The professional email address of the contact.
- **Airtop Profile (connected to LinkedIn)**: An authenticated Airtop Profile used for LinkedIn-based enrichment.
- **HubSpot object ID**: The internal HubSpot ID for the contact to be updated.

How It Works

1. Initiates the workflow using a form or external trigger.
2. Uses the name and email to extract and enrich the person's data, including:
   - LinkedIn profile and company page
   - About section, job title, location
   - ICP score, seniority level, AI interest, technical depth, connection and follower counts
3. Formats and maps the enriched data.
4. Pushes the updated data to HubSpot using the object ID (a sketch of this update appears at the end of this description).

Setup Requirements

- Airtop API key
- Airtop Profile logged in to LinkedIn
- HubSpot access, with an object ID field for each contact to update

Next Steps

- **Combine with Lead Generation**: Use as part of an end-to-end workflow that sources leads and enriches them in real time.
- **Trigger from CRM**: Initiate this workflow when a new contact is added in HubSpot or another CRM.
- **Customize Scoring Logic**: Tailor the ICP calculation to your team's specific criteria.

Read more about person data enrichment
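As a rough illustration of the final step, this is what the push to HubSpot might look like as plain JavaScript against the CRM v3 contacts API. The custom property names here are hypothetical; map them to whatever properties your portal actually defines:

```js
// Sketch: PATCH the enriched fields onto the HubSpot contact record.
// objectId comes from the workflow input; custom property names are
// hypothetical and must match properties defined in your portal.
const objectId = '12345';
const enriched = {
  jobtitle: 'VP of Engineering',                         // standard property
  icp_score: 87,                                         // hypothetical custom property
  seniority_level: 'VP',                                 // hypothetical custom property
  linkedin_url: 'https://www.linkedin.com/in/example',   // hypothetical custom property
};

await fetch(`https://api.hubapi.com/crm/v3/objects/contacts/${objectId}`, {
  method: 'PATCH',
  headers: {
    Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ properties: enriched }),
});
```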
by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works

This template shows how to calculate a workflow evaluation metric: whether a specific tool was called by an agent.

- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one. More info
- We make sure that the agent outputs the list of tools that it used
- We then check whether the expected tool (from the dataset) is in that list (see the sketch below)
- Finally we pass this information back to n8n as a metric
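A minimal Code-node sketch of that check might look like this. The field names (`toolsUsed`, `expectedTool`) are assumptions standing in for whatever your agent output and dataset columns are actually called:

```js
// Check whether the expected tool appears in the agent's list of used tools.
// Field names are assumptions; adjust to your agent output / dataset columns.
const toolsUsed = $json.toolsUsed ?? [];   // e.g. ["search", "calculator"]
const expectedTool = $json.expectedTool;   // from the evaluation dataset

const toolCalled = toolsUsed.includes(expectedTool) ? 1 : 0;

// This value is then reported back via n8n's evaluation metric step.
return [{ json: { toolCalled } }];
```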
by Daniel Shashko
How it Works

Disclaimer: This template is for self-hosted n8n instances only.

This workflow is designed for developers, data analysts, and automation enthusiasts seeking to automate personalized news collection and delivery. It seamlessly combines n8n, OpenAI (e.g., GPT-4.1), and Bright Data's Model Context Protocol (MCP) to collect, extract, and email the latest global news headlines.

On a schedule or via a manual trigger, the workflow prompts an AI agent to gather fresh news. The agent leverages context-aware memory and integrated MCP tools to conduct both search engine queries and direct web page scraping in real time, delivering more than just meta search results: it extracts actual on-page headlines and trusted links. Results are formatted and delivered automatically by email via your SMTP provider, requiring zero manual effort once configured.

Who is this for?

- Developers, data engineers, or automation pros wanting an AI-powered, fully automated newsfeed
- Teams needing up-to-date news digests from trusted global sources
- Anyone self-hosting n8n who wishes to combine advanced LLMs with real-time web data

Setup Steps

Setup time: approx. 15–30 minutes (n8n install, API configuration, node setup)

Requirements:

- Self-hosted n8n instance
- OpenAI API key
- Bright Data MCP account credentials
- SMTP/email provider details

Install the community MCP node (n8n-nodes-mcp) for n8n and set up Bright Data MCP access. Then configure these nodes:

- Schedule Trigger: for automated delivery at your chosen interval.
- Edit Fields: to inject your AI news collection prompt.
- AI Agent: connects to OpenAI and MCP, enabled with memory for context.
- OpenAI Chat Model: connects via your OpenAI credentials.
- MCP Clients: configure at least two, one for search (e.g. search_engine) and one for scraping (e.g. scrape_as_markdown).
- Send Email: set up with recipient and SMTP information.

Credentials must be entered into their respective nodes for successful execution.

Customization Guidance

- **Prompt Tweaks:** Refine your AI news prompt to target specific genres, regions, or sources, or broaden/narrow the coverage as needed.
- **Tool Configuration:** Carefully define tool descriptions and parameters in the MCP client nodes so the agent can pick the best tool for each step (e.g., only scrape real news sites).
- **Delivery Settings:** Adjust email recipient(s) and SMTP details as needed.
- **Workflow Enhancements:** Use sticky notes in n8n for extended documentation, alternate prompts, or troubleshooting tips.
- **Run Frequency:** Set the schedule as needed, from hourly to daily updates.

Once configured, this workflow will automatically gather, extract, and email curated news headlines and links, with no manual curation required!
by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works

This template shows how to calculate a workflow evaluation metric: whether an output matches an expected output (i.e. has the same meaning). The workflow takes questions about the causes of historical events and compares the answers with the reference answers in the dataset.

- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular chat trigger so that the workflow can be started from either one. More info
- If we're evaluating (i.e. the execution started from the evaluation trigger), we calculate the correctness metric using AI (see the sketch below)
- We pass this information back to n8n as a metric
- If we're not evaluating, we avoid calculating the metric, to reduce cost
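One way to implement an AI-based correctness check of this kind is an LLM-as-judge prompt. The sketch below is plain JavaScript with a hypothetical `callLLM` helper standing in for your chat-model node, and the field names are assumptions:

```js
// Sketch of an LLM-as-judge correctness check.
// callLLM is a hypothetical helper standing in for your chat-model node;
// question / answer / expectedAnswer are assumed dataset field names.
const question = $json.question;
const actual = $json.answer;            // workflow output
const expected = $json.expectedAnswer;  // reference answer from the dataset

const prompt = `Question: ${question}
Reference answer: ${expected}
Candidate answer: ${actual}
Does the candidate answer have the same meaning as the reference answer?
Reply with a single digit: 1 for yes, 0 for no.`;

const verdict = await callLLM(prompt);
return [{ json: { correctness: Number(verdict.trim()) } }];
```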
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Summarization", which in this scenario measures the LLM's accuracy and faithfulness in producing summaries based on an incoming YouTube transcript.

The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_summarization_quality

How it works

This evaluation works best for AI summarization workflows. For our scoring, we simply compare the generated response to the original transcript. A key factor is to look out for information in the response which is not mentioned in the documents. A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination.

Requirements

- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
by Trey
This workflow will archive your Spotify Discover Weekly playlist to an archive playlist named "Discover Weekly Archive", which you must create yourself. If you want to change the name of the archive playlist, you can edit value2 in the "Find Archive Playlist" node.

It is configured to run at 8am on Mondays, a conservative value in case you forgot to set your GENERIC_TIMEZONE environment variable (see the docs here).

Special thanks to erin2722 for creating the Spotify node and harshil1712 for help with the workflow logic.

To use this workflow, you'll need to:

- Create then select your credentials in each Spotify node
- Create the archive playlist yourself

Optionally, you may choose to:

- Edit the archive playlist name in the "Find Archive Playlist" node
- Adjust the Cron node to an earlier time if you know GENERIC_TIMEZONE is set
- Set up an error workflow like this one to be notified if anything goes wrong
by Emmanuel Bernard
🎉 Do you want to master AI automation, so you can save time and build cool stuff? I've created a welcoming Skool community for non-technical yet resourceful learners. 👉🏻 Join the AI Atelier 👈🏻

Monitor Zalando product pricing and get notified if a Zalando product price falls under a limit you have defined.

This n8n workflow lets you follow the evolution of the price of products you select. For each product, you define a minimal price. The workflow automatically scrapes the price for you on a daily basis. If the price falls under your minimal price setting, you receive a notification. This workflow is very easy to use: from a simple form, just paste the URL of the Zalando product you want to monitor and fill in the minimal price.

Features

- Monitor Zalando product prices: follow the price evolution of your favorite Zalando products.
- Email notification: set a minimal price; if the product price falls below this limit, you get notified by email.
- Visual price evolution: get a graphical overview of product pricing evolution.
- Automated daily check-up: this workflow automatically checks the price of your selected Zalando products on a daily basis.

Set up

- Copy this workflow to your n8n interface.
- Create a new Google Spreadsheet; copy this template.
- Set up your workflow with your Google credential, your email, and your copy of the spreadsheet.
- Activate the workflow and start pasting Zalando product URLs.

I hope you will enjoy this workflow, which is probably one of the simplest ways to monitor the pricing evolution of your favorite Zalando products. Feel free to contact me should you have any questions or suggestions.

Created by the n8n.inja ✨ follow on X 📺 follow on YT
by Sherlockes
What does this template help with?

Save the data of activities recorded and stored in Strava to a Google Sheets document.

How it works

We have a Google Sheets spreadsheet where each row represents a Strava activity, with the date, reference, distance, time, and elevation. Periodically, the workflow checks the latest activities in our Strava account to see if any are missing from the spreadsheet and adds them to the list. All fields must be properly formatted according to how they are stored in the Google Sheets spreadsheet (see the formatting sketch below).

Set up instructions

- Complete the 'Set up credentials' step when you first open the workflow. You'll need a Google Sheets and a Strava account.
- In the 'activities' node, enter the name of the file and the sheet where you want to save the imported data.
- In the 'Strava' node, select the corresponding credential.
- You can adjust the format of dates, times, and distances to your needs in the 'strava_last' node.

The rest of the information is available at sherblog.es. This template was created in n8n v1.72.1.
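To illustrate the kind of formatting the 'strava_last' node does, here is a minimal Code-node sketch. The input field names (`distance` in meters, `moving_time` in seconds, `start_date`) follow Strava's activity API; the output formats are assumptions to adapt to your own sheet:

```js
// Format one Strava activity for the spreadsheet.
// distance is meters and moving_time is seconds per Strava's activity API;
// the output formats below are assumptions, adjust to match your sheet.
const a = $json;

const km = (a.distance / 1000).toFixed(2);                       // "42.20"
const h = Math.floor(a.moving_time / 3600);
const m = String(Math.floor((a.moving_time % 3600) / 60)).padStart(2, '0');
const s = String(a.moving_time % 60).padStart(2, '0');

return [{
  json: {
    date: a.start_date.slice(0, 10),          // "2025-01-31"
    reference: a.id,
    distance: km,
    time: `${h}:${m}:${s}`,
    elevation: Math.round(a.total_elevation_gain),
  },
}];
```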
by Jimleuk
> Note: This template requires a self-hosted community edition of n8n. It does not work on cloud.

Try It Out

This n8n template shows how to validate API requests with Auth0 authorization tokens. Auth0 doesn't work with the standard JWT auth option because:

1) Auth0 tokens use the RS256 algorithm.
2) RS256 JWT credentials in n8n require the user to use private and public keys, not a secret phrase.
3) Auth0 does not give you access to your Auth0 instance's private keys.

The solution is to handle JWT validation after the webhook is received, using the Code node.

How it works

There are 2 approaches to validate Auth0 tokens: using your application's JWKS file or using your signing cert. Both solutions use the Code node to access Node.js libraries to verify the token.

- **JWKS**: the jwks-rsa library is used to validate the token against the application's JWKS URI hosted on Auth0 (see the sketch at the end of this template).
- **Signing cert**: the application's signing cert is imported into the workflow and used to verify the token.

In both cases, when the token is found to be invalid, an error is thrown. However, since we can use error outputs for the Code node, the error does not stop the workflow and is instead redirected to a 401 Unauthorized webhook response. When the token is validated, the webhook response is forwarded on the success branch with the token's decoded payload attached.

How to use

- Follow the instructions as stated in each scenario's sticky notes.
- Modify the Auth0 details with those of your application and Auth0 instance.

Requirements

- Self-hosted community edition of n8n
- Ability to install npm packages
- Auth0 application and some way to get either the JWKS URL or the signing cert
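For reference, the JWKS approach in a Code node typically looks something like the following, using the jwks-rsa and jsonwebtoken npm packages (which must be permitted via NODE_FUNCTION_ALLOW_EXTERNAL on your instance). The tenant domain and the way the header is read are assumptions; adapt them to your webhook's actual output:

```js
// Verify an Auth0 RS256 token against the tenant's JWKS.
// Requires jwks-rsa and jsonwebtoken, allowed via NODE_FUNCTION_ALLOW_EXTERNAL.
const jwksClient = require('jwks-rsa');
const jwt = require('jsonwebtoken');

// Assumption: the webhook node passes request headers through on the item.
const authHeader = $json.headers?.authorization ?? '';
const token = authHeader.replace(/^Bearer\s+/i, '');

// YOUR_TENANT is a placeholder for your Auth0 domain.
const client = jwksClient({
  jwksUri: 'https://YOUR_TENANT.auth0.com/.well-known/jwks.json',
});

// Resolve the signing key matching the token's key ID (kid).
function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
}

// Throws on an invalid token; the Code node's error output then
// redirects execution to the 401 Unauthorized webhook response.
const payload = await new Promise((resolve, reject) => {
  jwt.verify(token, getKey, { algorithms: ['RS256'] }, (err, decoded) =>
    err ? reject(err) : resolve(decoded),
  );
});

return [{ json: { payload } }];
```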