by Julian Ivanov
How it works
This workflow automates the transformation of standard product images into professional product photography featuring human models. It uses AI to analyze product images, create tailored photography prompts, and generate high-quality enhanced versions.

Set up steps
- You'll need an OpenAI API key and access to gpt-image-1 (verify your organization)
- Set up a Google Sheets spreadsheet with the columns: Image-URL, Prompt, Output
- Create a Google Drive folder to store the generated images

Requirements:
- OpenAI API access (for image generation and analysis)
- Google Sheets and Google Drive accounts
- Basic product images (URLs) as input
- The spreadsheet must contain a column named "Image-URL" with links to the product images

This workflow automatically:
- Reads product image URLs from your Google Sheet
- Downloads the images for processing
- Analyzes each image to understand what product it contains
- Creates specialized photography prompts ensuring each product is shown with a human model
- Generates professional product photography using OpenAI's image generation capabilities (a minimal sketch of this call follows below)
- Uploads results to Google Drive and updates your spreadsheet with links

Extra: You can also use the included simple image generation workflow to create images directly from a prompt, without a product image as input. This option lets you quickly generate images through the OpenAI API using just text prompts.
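For reference, here is a minimal sketch of the image-generation call the workflow makes under the hood, using OpenAI's Python SDK. The prompt text is an illustrative stand-in for what the analysis step produces; in n8n the same call happens through the OpenAI/HTTP Request nodes.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt standing in for what the analysis step produces.
prompt = (
    "Professional studio product photo of a leather backpack worn by a "
    "human model, soft natural lighting, neutral background"
)

result = client.images.generate(
    model="gpt-image-1",
    prompt=prompt,
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data rather than a URL.
with open("enhanced-product.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```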
by darrell_tw
How it works
- Receive a chat input as an image prompt.
- Call OpenAI's gpt-image-1 API to generate an image.
- Split the returned images and process them one by one.
- Upload each generated image to Google Drive.
- Save image links and thumbnails to a Google Sheets document.
- Record token usage and estimated cost into a separate sheet (see the cost sketch below).

Set up steps
- Connect your OpenAI API credentials for image generation.
- Connect your Google Drive and Google Sheets accounts.
- Set the destination folder in Google Drive.
- Set the target Google Sheet and specify the correct sheet tabs.

The setup usually takes around 5-10 minutes. Detailed field mappings are already pre-configured inside the workflow. Additional tips and instructions are included as sticky notes inside the workflow.

Google Sheet copy URL: Copy Sheet Link
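A hedged sketch of the cost-logging step: gpt-image-1 responses include a `usage` object with input/output token counts, but the per-token rates below are placeholders only, so check OpenAI's current pricing page before relying on the estimate.

```python
# Placeholder USD rates -- verify against OpenAI's current pricing page.
PRICE_PER_INPUT_TOKEN = 5.00 / 1_000_000
PRICE_PER_OUTPUT_TOKEN = 40.00 / 1_000_000

def usage_row(usage: dict) -> dict:
    """Build the row appended to the separate cost-tracking sheet."""
    input_tokens = usage.get("input_tokens", 0)
    output_tokens = usage.get("output_tokens", 0)
    cost = (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "estimated_cost_usd": round(cost, 6),
    }

# Example: usage_row({"input_tokens": 120, "output_tokens": 4160})
```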
by Jah coozi
Universal Digital Device Support Assistant

Transform any device manual into an intelligent AI assistant that provides 24/7 support for your users. This template works with ANY household appliance, electronic device, or technical equipment.

🎯 Use Cases
- **Manufacturers**: Provide instant support for your products
- **Support Teams**: Reduce ticket volume with AI-powered answers
- **Smart Homes**: Centralized help for all devices
- **Personal Use**: Never lose a manual again

✨ Features
- **Universal Compatibility**: Works with any device type
- **Multi-Language Support**: Serve global customers
- **Intelligent Search**: Semantic understanding of user queries
- **Context Awareness**: Remembers conversation history
- **Easy Setup**: Just upload your manual and go

🛠️ What's Included
- Webhook Endpoint: Receive user queries via API
- AI Agent: Processes questions intelligently
- Vector Database: Stores and searches manuals
- Memory System: Maintains conversation context
- Upload Pipeline: Easy manual ingestion (a minimal ingestion sketch follows below)

📋 Setup Instructions
1. Add Your Credentials: OpenAI API key (or alternative LLM), Pinecone API key (or alternative vector DB)
2. Upload Device Manuals: Use the manual upload trigger, paste manual text or upload a PDF; the system automatically indexes content
3. Configure Webhook: Set your preferred endpoint path, enable CORS if needed, deploy and share the URL
4. Optional Customization: Adjust chunk size for your content, modify system prompts for your brand, add additional tools or integrations

🔧 Supported Devices (Examples)
- Kitchen Appliances (ovens, dishwashers, coffee machines)
- Home Entertainment (TVs, sound systems, gaming consoles)
- Smart Home Devices (thermostats, cameras, lights)
- Computer Equipment (printers, routers, monitors)
- Power Tools & Garden Equipment
- Medical Devices
- And many more!

🌐 Integration Options
- Embed in your website
- Connect to chat platforms
- Mobile app integration
- Voice assistant compatibility
- Email support automation

📈 Benefits
- Reduce support costs by 70%
- Available 24/7 in multiple languages
- Consistent, accurate responses
- Scales infinitely
- Improves with usage

🔐 Privacy & Security
- Your data stays in your control
- Can be deployed on-premise
- GDPR compliant architecture
- No data sharing between devices

💡 Pro Tips
- Upload manuals in sections for better accuracy
- Include troubleshooting guides and FAQs
- Add model numbers and specifications
- Regular updates keep content fresh

Start providing world-class device support today!
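A minimal sketch of the upload pipeline under stated assumptions: OpenAI embeddings and a Pinecone index named "device-manuals" (the index name is a placeholder, and the index dimension must match the embedding model, 1536 for text-embedding-3-small). The workflow's LangChain nodes do the equivalent work; this just shows the shape of the ingestion.

```python
from openai import OpenAI      # pip install openai
from pinecone import Pinecone  # pip install pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("device-manuals")

def ingest_manual(device_id: str, manual_text: str, chunk_size: int = 800) -> None:
    # Naive fixed-size chunking; the workflow exposes chunk size as a setting.
    chunks = [manual_text[i:i + chunk_size]
              for i in range(0, len(manual_text), chunk_size)]
    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    )
    # Store each chunk with metadata so answers can cite the device.
    index.upsert(vectors=[
        (f"{device_id}-{i}", item.embedding, {"device": device_id, "text": chunk})
        for i, (item, chunk) in enumerate(zip(embeddings.data, chunks))
    ])
```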
by Brian Money
Overview
This template is designed for Amazon sellers and advertisers who want to automate their campaign performance analysis and bidding strategy. It solves the common challenge of manually reviewing Sponsored Products reports and guessing how to adjust keywords, placements, and budgets. By combining Amazon Advertising reports with OpenAI's GPT-4o, this workflow delivers real-time, personalized optimization instructions — automatically.

Features
- 📥 Automatically downloads Sponsored Products reports from Google Drive
- 🧠 Uses AI to analyze campaign, keyword, placement, targeting, and budget performance
- 📊 Supports both .csv and .xlsx report formats
- 🔁 Handles multiple ASINs and scales easily across ad accounts
- 📧 Sends structured optimization recommendations to your inbox via Gmail
- 🗂 Built-in logic to normalize filenames and correctly map reports
- 🧹 Includes error handling and formatting cleanup for AI-ready input

Requirements
To use this workflow, you'll need:
- An Amazon Ads account with access to Sponsored Products reports
- A Google Drive folder where Amazon Ads reports are delivered (manually or via Gmail automation)
- A Gmail account (for sending summaries)
- An OpenAI API key with access to GPT-4o
- Optional: a developer account for the Amazon Ads API to fully automate report generation in the future

Setup Instructions
1. 📂 Connect your Amazon Ads reports folder in the Google Drive node
2. 🔐 Add your credentials to the OpenAI and Gmail nodes
3. 📝 Schedule five reports in the Amazon Ads Console:
   - Search Term Report → Detailed
   - Targeting Report → Detailed
   - Campaign Report → Summary
   - Placement Report → Summary
   - Budget Report → Summary
   Use "Last 30 Days", "Daily", and .xlsx or .csv format
4. 🔁 (Optional) Automate report ingestion using Gmail + Drive workflows
5. 🧪 Test with one account, then replicate across additional ad accounts as needed (a sketch of the analysis step appears below)

⏱️ Setup time: 15–30 minutes
📌 All field-specific guidance is included in workflow notes
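For orientation, here is a sketch of what the analysis step amounts to: load one downloaded report (.csv or .xlsx) and hand a slice of it to GPT-4o for recommendations. The file name and prompt wording are illustrative, not the template's exact text.

```python
from pathlib import Path

import pandas as pd        # pip install pandas openpyxl
from openai import OpenAI  # pip install openai

def load_report(path: str) -> pd.DataFrame:
    """Both report formats the workflow supports."""
    return (pd.read_excel(path) if Path(path).suffix == ".xlsx"
            else pd.read_csv(path))

df = load_report("Sponsored Products Search term report.xlsx")  # illustrative name

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Review this Sponsored Products data and recommend bid, "
                   "keyword, and budget changes:\n"
                   + df.head(50).to_csv(index=False),
    }],
)
print(response.choices[0].message.content)
```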
by James Carter
This n8n workflow automatically fetches trending news articles based on your chosen country, category, and keyword — then enriches the data with AI-powered business insights before posting a concise summary to Slack. Ideal for sales teams, executives, marketers, or anyone who wants fast, actionable news briefings directly in their Slack workspace.

⸻

Who it's for
Executives, analysts, sales teams, or marketing professionals who want curated, AI-enhanced news summaries tailored to business opportunities, risks, and trends — delivered automatically to Slack.

⸻

How it works / What it does
1. A Schedule Trigger runs on a daily, weekly, or custom frequency.
2. It queries the NewsAPI to retrieve top headlines by country, category, or keyword (see the sketch below).
3. Headlines are formatted and enriched with your configured query context.
4. The AI model (GPT-4) analyzes articles and summarizes key insights, categorizing them as Opportunities, Risks, or Trends.
5. Finally, the summarized insights are posted directly into a Slack channel of your choice.

⸻

How to set up
1. Set your schedule frequency in the Schedule Trigger node.
2. Configure your preferred country, category, and keyword in the Inject Config node.
3. Add your NewsAPI key inside the Fetch News Articles node.
4. Connect your Slack credentials in the Post to Slack node.
5. Optional: Adjust the AI prompt for more tailored analysis.

⸻

Requirements
- A NewsAPI account to fetch headlines.
- An OpenAI API key for GPT-4 summarization.
- A Slack workspace and connected credentials via n8n.

⸻

How to customize the workflow
- Change the country, category, or keyword in the Inject Config to focus on specific markets or sectors.
- Adjust the AI prompt in the GPT node to prioritize certain insights like ESG factors, M&A activity, or market sentiment.
- Extend the workflow to log results to Google Sheets, email summaries, or send SMS alerts.
- Replace the Schedule Trigger with a Webhook if you want to trigger summaries on demand.

This template is designed to be modular, making it easy to adapt for competitive intelligence, investment tracking, or industry news curation.
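A sketch of the Fetch News Articles step using NewsAPI's documented /v2/top-headlines endpoint; the country, category, and keyword values stand in for whatever you set in the Inject Config node.

```python
import requests

params = {
    "country": "us",
    "category": "business",
    "q": "acquisition",            # optional keyword
    "apiKey": "YOUR_NEWSAPI_KEY",
}
resp = requests.get("https://newsapi.org/v2/top-headlines",
                    params=params, timeout=30)
resp.raise_for_status()
articles = resp.json().get("articles", [])

# Compact digest handed to the GPT-4 summarization prompt.
digest = "\n".join(f"- {a['title']} ({a['source']['name']})"
                   for a in articles[:20])
print(digest)
```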
by Solomon
Learn how to build an MCP Server and Client in n8n with official nodes.

> ⚠ Requires n8n version 1.88.0 or higher.

In this example, we use Google Calendar and custom functions as two separate MCP Servers, demonstrating how to integrate both native and custom tools.

How it works
- The AI Agent connects to two MCP Servers.
- Each MCP Trigger (Server) generates a URL exposing its tools. This URL is used by an MCP Client linked to the AI Agent.
- Whenever you make changes to the tools, there's no need to modify the MCP Client. It automatically keeps the AI Agent informed on how to use each tool, even if you change them over time. That's the power of MCP 🙌

Who is this template for
Anyone looking to use MCP with their AI Agents.

How to set up
Instructions are included within the workflow itself.

Check out my other templates 👉 https://n8n.io/creators/solomon/
by Adrian Bent
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings on Indeed via Apify, automatically retrieves the resulting dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that info for outreach. All you need to do is run the Apify actor; the database will then update with the processed data.

Benefits:
- Complete Job Search Automation - A webhook monitors the Apify actor, which triggers the integration and starts the process
- AI-Powered Filter - Uses ChatGPT to analyze content/context, identify company goals, and filter based on the job description
- Smart Duplicate Prevention - Automatically tracks processed job listings in a database to avoid redundancy
- Multi-Platform Intelligence - Combines Indeed scraping with web research via Tavily and enriches each listing
- Niche Focus - Processes content from multiple niches, currently 6 (hardcoded), which can be changed to fit other niches (just adjust the prompt in the "job filter" node)

How It Works:
1. Indeed Job Discovery:
   - Search and apply filters for relevant job listings, then copy and use the URL in Apify
   - Uses Apify's Indeed job scraper to scrape job listings from the URL of interest
   - Automatically scrapes the information, stores it in a dataset, and initiates an integration call to your webhook
2. Incoming Data Processing:
   - Loops over 500 items (can be changed) with a batch size of 55 items (can be changed) to avoid running into API timeouts
   - Multiple filters ensure all fields are scraped and meet our required metrics (website must exist and number of employees < 250)
   - Duplicate job listings are removed from the incoming batch before processing
3. Job Analysis & Filter:
   - An additional filter removes any job listing from the incoming batch if it already exists in the Google Sheets database
   - All new job listings are then passed to ChatGPT, which uses information about the job post/description to determine whether it is relevant to us
   - Each relevant job gets a new field, "verdict", which is either true or false, and we keep the ones where verdict is true
4. Enrich & Update Database:
   - Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in Google Sheets with information about the job listing, the company, and a decision maker at that company
   - Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch to continue filtering until all job listings are processed

Required Google Sheets Database Setup:
Before running this workflow, create a Google Sheets database with these exact column headers:

Essential Columns:
- jobUrl - Unique identifier for job listings
- title - Position title
- descriptionText - Description of the job listing
- hiringDemand/isHighVolumeHiring - Are they hiring at high volume?
- hiringDemand/isUrgentHire - Are they hiring with high urgency?
- isRemote - Is this job remote?
- jobType/0 - Job type: In person, Remote, Part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - Column for holding custom icebreakers for each job listing (not completed in this workflow; I will upload another that does this, called "Personalized IJSFE")
- scrapedCeo - CEO name collected from the Apify scraper
- email - Email listed on the job listing
- companyName - Name of the company that posted the job
- companyDescription - Description of the company that posted the job
- companyLinks/corporateWebsite - Website of the company that posted the job
- companyNumEmployees - Number of employees the company listed that they have
- location/country - Location of where the job is to take place
- salary/salaryText - Salary on the job listing

Setup Instructions:
1. Create a new Google Sheet with these column headers in the first row
2. Name the sheet whatever you please
3. Connect your Google Sheets OAuth credentials in n8n
4. Update the document ID in the workflow nodes

The merge logic relies on the unique identifier column (jobUrl) to prevent duplicate processing, so this structure is essential for the workflow to function correctly.

Feel free to reach out for additional help or clarification at my Gmail: terflix45@gmail.com and I'll get back to you as soon as I can.

Set Up Steps:
1. Configure Apify Integration:
   - Sign up for an Apify account and obtain an API key
   - Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL)
   - Use the Apify node with Resource: Dataset, Operation: Get items, and use your API key as your credentials (see the sketch below)
2. Set Up AI Services:
   - Add OpenAI API credentials for job filtering
   - Add Tavily API credentials for company research
   - Set up appropriate rate limiting for cost control
3. Database Configuration:
   - Create the Google Sheets database with the provided column structure
   - Connect Google Sheets OAuth credentials
   - Configure the merge logic for duplicate detection
4. Content Filtering Setup:
   - Customize the AI prompts for your specific niche, requirements, or interests
   - Adjust the filtering criteria to fit your needs
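A sketch of the ingestion and filtering steps under stated assumptions: Apify's documented dataset-items endpoint, and field names taken from the column list above (the dataset nests them, e.g. companyLinks.corporateWebsite). The thresholds mirror the workflow defaults.

```python
import requests

def fetch_dataset_items(dataset_id: str, token: str, limit: int = 500) -> list[dict]:
    """Pull items from an Apify dataset (GET /v2/datasets/{id}/items)."""
    url = f"https://api.apify.com/v2/datasets/{dataset_id}/items"
    resp = requests.get(url, params={"token": token, "limit": limit,
                                     "format": "json"}, timeout=60)
    resp.raise_for_status()
    return resp.json()

def employee_count(job: dict) -> int:
    # Assumed numeric; adjust parsing if the scraper returns a range like "51-200".
    try:
        return int(job.get("companyNumEmployees") or 0)
    except (TypeError, ValueError):
        return 0

def filter_new_jobs(items: list[dict], seen_job_urls: set[str]) -> list[dict]:
    """Keep listings with a website, under 250 employees, not yet processed."""
    kept = []
    for job in items:
        website = (job.get("companyLinks") or {}).get("corporateWebsite")
        if (website and employee_count(job) < 250
                and job.get("jobUrl") not in seen_job_urls):
            kept.append(job)
    return kept
```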
by Rizqi Pratama Ramadhani
Automated Financial Tracker: Telegram Invoices to Notion with AI Summaries & Reports

Tired of manually logging every expense? Streamline your financial tracking with this powerful n8n workflow! Snap a photo of your invoice in Telegram, and let AI (powered by Google Gemini) automatically extract the details, record them in your Notion database, and even send you a quick summary. Plus, get scheduled weekly reports with charts to visualize your spending. Automate your finances, save time, and gain better insights with this easy-to-use template! Transform your expense tracking from a chore into an automated breeze. Try it out!

Overview:
This workflow revolutionizes how you track your finances by automating the entire process from invoice capture to reporting. Simply send a photo of an invoice or receipt to a designated Telegram chat, and this workflow will:
- Extract Data with AI: Utilize Google Gemini's capabilities to perform OCR on the image, understand the content, and extract key details like item name, quantity, price, total, date, and even attempt to categorize the expense.
- Store in Notion: Automatically log each extracted transaction into a structured Notion database.
- Instant Feedback: Send a summary of the processed transaction back to your Telegram chat.
- Scheduled Reporting: Generate and send a visual summary of your expenses (e.g., weekly spending by category) as a chart to your preferred Telegram chat or group.

This workflow is perfect for individuals, freelancers, or small teams looking to effortlessly manage their expenses without manual data entry.

Key Features & Benefits:
- **Effortless Expense Logging**: Just send a picture – no more typing!
- **AI-Powered Data Extraction**: Leverages Google Gemini for intelligent invoice processing.
- **Centralized Data in Notion**: Keep all your financial records neatly organized in a Notion database.
- **Automated Categorization**: AI helps in categorizing your expenses (e.g., Food & Beverage, Transportation).
- **Instant Summaries**: Get immediate confirmation and a summary of what was recorded.
- **Visual Reporting**: Receive scheduled charts (e.g., bar charts of spending by category) directly in Telegram.
- **Customizable**: Easily adapt the workflow to your specific needs, categories, and reporting preferences.
- **Time-Saving**: Drastically reduces the time spent on manual financial administration.

How It Works (Workflow Breakdown):
The workflow is divided into two main parts:

Part 1: Real-time Invoice Processing & Logging (## Auto Notes Transaction with Telegram and Notion database)
1. Telegram Trigger (Telegram Trigger | When recive photo): Activates when a new photo is sent to the configured Telegram chat.
2. Get Photo Info (Get Info Photo from telegram chat): Retrieves the details of the received photo.
3. Get Image Info (Get Image Info): Prepares the image data.
4. AI Data Extraction (Google Gemini Chat Model & Basic LLM Chain): The image data is sent to the Google Gemini Chat Model. A specific prompt instructs the AI to extract details (date, ID, name, quantity, price, total, category, tax) in a JSON array format and provide a summary message. The categories include Food & Beverage, Transportation, Utilities, Shopping, Healthcare, Entertainment, Housing, and Education.
5. Parse AI Output (Parse To your object | Table): Structures the AI's JSON output for easier handling.
6. Split Transactions (Split Out | data transaction): If an invoice contains multiple items, this node splits them into individual records.
7. Record to Notion (Record To Notion Database): Each transaction item is added as a new page/entry in your specified Notion database, mapping fields like Name, Quantity, Price, Total, Category, Date, and Tax.
8. Send Telegram Summary (Sendback to chat and give summarize text): The summary message generated by the AI is sent back to the original Telegram chat.

Part 2: Scheduled Financial Reporting (## Schedule report to send on chanel or private message)
1. Schedule Trigger (Schedule Trigger | for send chart report): Runs at a predefined interval (e.g., every week) to generate reports.
2. Get Recent Data from Notion (Get Recent Data from Notions): Fetches transaction data from the Notion database for a specific period (e.g., the past week).
3. Summarize Data (Summarize Transaction Data): Aggregates the data, for example, by summing up the 'total' amount for each 'category'.
4. Prepare Chart Data (Convert Data to JSON chart payload): Transforms the summarized data into a JSON format suitable for generating a chart (e.g., labels for categories, data for spending amounts). A sketch of this payload appears below the setup notes.
5. Generate Chart (Generate Chart): Uses the QuickChart node to create a visual chart (e.g., a bar chart) from the prepared data.
6. Send Chart to Telegram (Send Chart Image to Group or Private Chat): Sends the generated chart image to a specified Telegram chat ID or group.

Nodes Used (Key Nodes):
- **Telegram Trigger & Telegram Node**: For receiving images and sending messages/images.
- **Google Gemini Chat Model (Langchain)**: For AI-powered OCR and data extraction from invoices.
- **Basic LLM Chain (Langchain)**: To interact with the language model using specific prompts.
- **Output Parser Structured (Langchain)**: To structure the output from the language model.
- **Notion Node**: For reading from and writing to your Notion databases.
- **Schedule Trigger**: To automate the reporting process.
- **Summarize Node**: To aggregate data for reports.
- **Code Node**: Used here to format data for the chart.
- **QuickChart Node**: For generating charts.
- **SplitOut Node**: To process multiple items from a single invoice.

Setup Instructions:
1. Credentials:
   - Telegram: Create a Telegram bot and get its API token. You'll also need the Chat ID where you'll send invoices and where reports should be sent.
   - Google Gemini (PaLM) API: You'll need an API key for Google Gemini.
   - Notion: Create a Notion integration and get the API key. Create a Notion database with properties corresponding to the data you want to save (e.g., Name (Title), Quantity (Number), Price (Number), Total (Number), Category (Select), Date (Text or Date), Tax (Number)). Share this database with your Notion integration.
2. Configure Telegram Trigger: Add your Telegram Bot API token. When you first activate the workflow or test the trigger, send /start to your bot in the chat you want to use for sending invoices. n8n will then capture the Chat ID.
3. Configure Google Gemini Node (Google Gemini Chat Model): Select or add your Google Gemini API credentials. Review the prompt in the Basic LLM Chain node and adjust if necessary (e.g., date format, categories).
4. Configure Notion Nodes:
   - Record To Notion Database: Select or add your Notion API credentials, select your target Notion Database ID, and map the properties from the workflow (e.g., ={{ $json.name }}) to your Notion database columns.
   - Get Recent Data from Notions: Select or add your Notion API credentials, select your target Notion Database ID, and adjust the filter if needed (default is "past_week").
5. Configure Telegram Node for Reports (Send Chart Image to Group or Private Chat): Select or add your Telegram Bot API token. Enter the Chat ID for the group or private chat where you want to receive the reports.
6. Configure Schedule Trigger (Schedule Trigger | for send chart report): Set your desired schedule (e.g., every Monday at 9 AM).
7. Test: Send an image of an invoice to your Telegram bot and check if the data appears in Notion and if you receive a summary message. Wait for the scheduled report or manually trigger it to test the reporting functionality.

Sticky Note Text for Your n8n Template:
(These are suggestions. You would place these directly into the sticky notes within your n8n workflow editor.)

Existing High-Level Sticky Notes:
- ## Auto Notes Transaction with Telegram and Notion database
- ## Schedule report to send on chanel or private message

Specific Sticky Notes to Add:
- **On Telegram Trigger | When recive photo**: 📸 INVOICE INPUT 📸 Bot listens here for photos of your receipts/invoices. Ensure your Telegram Bot API token is set in credentials.
- **Near Google Gemini Chat Model & Basic LLM Chain**: 🤖 AI MAGIC HAPPENS HERE 🧠 Image is sent to Google Gemini for data extraction. Check 'Basic LLM Chain' to customize the AI prompt (e.g., categories, output format). Requires Google Gemini API credentials.
- **On Parse To your object | Table**: ✨ STRUCTURING AI DATA ✨ Converts the AI's text output into a usable JSON object. Check the schema if you modify the AI prompt significantly.
- **On Record To Notion Database**: 📝 SAVING TO NOTION 📝 Extracted transaction data is saved here. Configure with your Notion API key & Database ID. Map fields correctly to your database columns!
- **On Sendback to chat and give summarize text**: 💬 TRANSACTION SUMMARY 💬 Sends a confirmation message back to the user in Telegram with a summary of the recorded expense.
- **On Schedule Trigger | for send chart report**: 🗓️ REPORTING SCHEDULE 🗓️ Set how often you want to receive your spending report (e.g., weekly, monthly).
- **On Get Recent Data from Notions**: 📊 FETCHING DATA FOR REPORT 📊 Retrieves transactions from Notion for the report period. Default: "Past Week". Adjust filter as needed. Requires Notion API credentials & Database ID.
- **On Summarize Transaction Data**: ➕ SUMMARIZING SPENDING ➕ Aggregates your expenses, usually by category, to prepare for the chart.
- **On Convert Data to JSON chart payload (Code Node)**: 🎨 PREPARING CHART DATA 🎨 This Code node formats the summarized data into the JSON structure needed by QuickChart.
- **On Generate Chart (QuickChart Node)**: 📈 GENERATING VISUAL REPORT 📈 Creates the actual chart image based on your spending data. You can customize chart type (bar, pie, etc.) here.
- **On Send Chart Image to Group or Private Chat**: 📤 SENDING REPORT TO TELEGRAM 📤 Delivers the generated chart to your chosen Telegram chat/group. Set the correct Chat ID and Bot API token.
- **General Sticky Note (place where relevant)**: 🔑 CREDENTIALS NEEDED 🔑 Remember to set up API keys/tokens for: Telegram, Google Gemini, Notion.
- **General Sticky Note (place where relevant)**: 💡 CUSTOMIZE ME! 💡 Adjust AI prompts for better accuracy. Change Notion database structure. Modify report frequency and content.
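For reference, a sketch of what the "Convert Data to JSON chart payload" Code node produces: a standard Chart.js config that the QuickChart node renders. The category totals here are illustrative.

```python
import json
import urllib.parse

# Illustrative weekly totals, as the Summarize node might produce them.
totals = {"Food & Beverage": 125.50, "Transportation": 42.00, "Shopping": 89.90}

chart_config = {
    "type": "bar",
    "data": {
        "labels": list(totals.keys()),
        "datasets": [{"label": "Spending (past week)",
                      "data": list(totals.values())}],
    },
}

# QuickChart also works as a plain GET URL if you ever bypass the node:
url = "https://quickchart.io/chart?c=" + urllib.parse.quote(json.dumps(chart_config))
print(url)
```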
by Davide
This workflow lets you generate AI videos using the cheaper Google Veo3 Fast model, save them to Google Drive, generate optimized titles with GPT-4o, and automatically upload them to YouTube and TikTok with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3 Fast, TikTok, and YouTube.

Benefits of this Workflow
- 💡 **No Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
- ⚙️ **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
- 🧠 **AI-Powered Creativity**: Generates engaging YouTube and TikTok titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
- 📁 **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
- 📈 **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
- 📈 **TikTok Ready**: Automatically uploads to TikTok with correct metadata, saving time and boosting visibility.
- 🧪 **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
- 🔒 **API-First**: Utilizes secure API-based communication for all services.

How It Works
1. Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
2. Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty).
3. Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet. The "Create Video" node sends a request to the Fal.run API (Google Veo3 Fast) to generate a video based on the prompt.
4. Status Check: The "Wait 60 sec." node pauses execution for 60 seconds. The "Get status" node checks the video generation status. If the status is "COMPLETED," the workflow proceeds; otherwise, it waits again. (A sketch of this submit-and-poll pattern appears below.)
5. Video Processing: The "Get Url Video" node fetches the video URL. The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube and TikTok title. The "Get File Video" node downloads the video file.
6. Upload & Update: The "Upload Video" node saves the video to Google Drive. One "HTTP Request" node uploads the video to YouTube via the Upload-Post API, and another uploads it to TikTok via the Upload-Post API. The "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link.

Set Up Steps
1. Google Sheet Setup: Create a Google Sheet with the columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node.
2. API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (Header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key and configure the YouTube and TikTok "HTTP Request" nodes (Header: Authorization: Apikey YOUR_API_KEY).
3. YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
4. Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
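A sketch of the Create Video / Wait / Get status loop, assuming Fal's queue API; the exact model path ("fal-ai/veo3/fast") and the output field names should be checked against your Fal dashboard, as they are assumptions here.

```python
import time

import requests

HEADERS = {"Authorization": "Key YOUR_FAL_API_KEY"}
BASE = "https://queue.fal.run/fal-ai/veo3/fast"  # verify the model path

# Submit the generation request (the "Create Video" step).
job = requests.post(BASE, headers=HEADERS,
                    json={"prompt": "A drone shot over a misty forest"},
                    timeout=60).json()
request_id = job["request_id"]

# Mirrors the workflow's "Wait 60 sec." -> "Get status" loop.
while True:
    status = requests.get(f"{BASE}/requests/{request_id}/status",
                          headers=HEADERS, timeout=30).json()
    if status.get("status") == "COMPLETED":
        break
    time.sleep(60)

# Fetch the finished result (the "Get Url Video" step).
result = requests.get(f"{BASE}/requests/{request_id}",
                      headers=HEADERS, timeout=30).json()
print(result["video"]["url"])  # field name assumed; check the model's output schema
```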
by Muhammad Ashar
How It Works – Your AI Marketing Team in Action

This automation acts as your AI-powered content and image marketing assistant inside Telegram. With just a voice note or text message, it can:
- 🧠 Understand your request – Whether you send a message or speak into Telegram, it transcribes and processes your input using GPT-4.
- 🎨 Create and edit content – Based on what you say, it can generate: ✍️ blog posts, 💼 LinkedIn posts, 🎬 faceless videos, 🖼️ AI-generated images, 🪄 edits to existing images, 🔎 searches through your image database.
- 💬 Reply directly in Telegram – It sends you back the result—whether that's a post, image, or video link—without leaving the app.
- 🧩 Built using LangChain agent logic – It intelligently chooses the right tool from a suite of sub-workflows like "Create Image", "Blog Post", or "Video" using agent reasoning.

🛠️ Setup Steps – Get Started in Minutes!
⌛ Time Estimate: ~15–30 minutes (faster if you're familiar with n8n)
1. 🔗 Import the Template Pack: 📥 Download and install these workflows into your n8n: Create Image, Edit Image, Search Images, Blog Post, LinkedIn Post, Video
2. 🔐 Add Required Credentials: Telegram Bot 🤖, OpenRouter AI 🧠, Tavily API 📚 (for smart research), ElevenLabs 🎙️ (for voice in videos), PiAPI & Runway 🎞️ (for faceless videos)
3. 🧩 Link the Tools to the Agent Node – Make sure the "Marketing Team Agent" is connected to each of the content creation tools as shown in the workflow.
4. 📎 Download Templates & Logs: 🧾 Google Sheets Log Template (to track output), 🖼️ Creatomate Template (optional for enhanced image control – shared in Skool group)

📌 Pro Tip: All detailed step-by-step setup instructions are included as sticky notes inside the n8n canvas. Just follow along!
by Don Jayamaha Jr
📊 This AI sub-agent aggregates Tesla (TSLA) trading signals across multiple timeframes using real-time technical indicators and candlestick behavior. It is a core component of the Tesla Quant Trading AI system. Powered by GPT-4.1, it consolidates 15-minute, 1-hour, and 1-day indicators, adds candlestick pattern data, and produces a unified JSON signal for downstream use by the master agent.

⚠️ This agent is not standalone. It is triggered by the Tesla Quant Trading AI Agent via Execute Workflow.
🧠 Requires: 4 connected sub-agents and an Alpha Vantage Premium API Key

🔌 Required Sub-Workflows
To use this workflow, you must install:
- Tesla 15min Indicators Tool
- Tesla 1hour Indicators Tool
- Tesla 1day Indicators Tool
- Tesla 1hour and 1day Klines Tool
- Tesla Quant Technical Indicators Webhooks Tool (provides Alpha Vantage data)

🧠 What This Agent Does
- Fetches pre-cleaned 20-point JSON outputs from the 4 sub-agents listed above
- Analyzes each timeframe individually:
  - 15m: momentum and short-term setups
  - 1h: confirmation of emerging trends
  - 1d: macro positioning and trend alignment
  - Klines: candlestick reversal patterns and volume divergence
- Generates a structured final signal in JSON with:
  - Trading stance: Buy, Sell, Hold, or Cautious
  - Confidence score (0.0–1.0)
  - Multi-timeframe indicator breakdown
  - Candlestick and volume divergence annotations

📋 Sample Output

    {
      "summary": "TSLA momentum is weakening short-term. 1h MACD shows bearish crossover, RSI declining. 1d candles confirm potential reversal setup.",
      "signal": "Cautious Sell",
      "confidence": 0.81,
      "multiTimeframeInsights": {
        "15m": { "RSI": 68.3, "MACD": { "macd": 0.53, "signal": 0.61 }, ... },
        "1h": { "RSI": 65.0, "MACD": { "macd": -0.32, "signal": 0.11 }, ... },
        "1d": { "BBANDS": { ... }, ... },
        "candlestickPatterns": { "1h": "Doji", "1d": "Bearish Engulfing" },
        "volumeDivergence": { "1h": "Bearish", "1d": "Neutral" }
      }
    }

🛠️ Setup Instructions
1. Import this workflow into n8n and name it: Tesla_Financial_Market_Data_Analyst_Tool
2. Add Required API Credentials: Alpha Vantage Premium (via HTTP Query Auth), OpenAI GPT-4.1 for reasoning and synthesis
3. Link Required Sub-Agents: Connect the 4 tool workflows listed above to their respective Tool Workflow nodes, and connect the webhook provider for data fetches
4. Set Up as Sub-Agent: This workflow must be triggered using Execute Workflow from the parent agent. Pass in: message (optional context), sessionId (used for memory continuity)

🧾 Sticky Notes Provided
- 📘 Tesla Financial Market Data Analyst — Core logic overview
- 📈 15m / 1h / 1d Tool Notes — Indicator lists + use cases
- 🕯️ Klines Tool Note — Candlestick and volume divergence patterns
- 🧠 GPT Reasoning Note — GPT-4.1 handles final synthesis
- 🧩 Sub-Workflow Trigger — Proper integration with parent agent
- 🧠 Memory Buffer — Maintains session context across evaluations

🔒 Licensing & Support
© 2025 Treasurium Capital Limited Company. The logic, prompt design, and multi-agent architecture are proprietary and IP-protected.
For support or collaboration inquiries:
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Unify your Tesla trading logic across timeframes—automated, AI-powered, and built for scalers and swing traders.
by Einar César Santos
🧠 Long-Term Memory System for AI Agents with Vector Database

Transform your AI assistants into intelligent agents with persistent memory capabilities. This production-ready workflow implements a sophisticated long-term memory system using vector databases, enabling AI agents to remember conversations, user preferences, and contextual information across unlimited sessions.

🎯 What This Template Does
This workflow creates an AI assistant that never forgets. Unlike traditional chatbots that lose context after each session, this implementation uses vector database technology to store and retrieve conversation history semantically, providing truly persistent memory for your AI agents.

🔑 Key Features
- **Persistent Context Storage**: Automatically stores all conversations in a vector database for permanent retrieval
- **Semantic Memory Search**: Uses advanced embedding models to find relevant past interactions based on meaning, not just keywords
- **Intelligent Reranking**: Employs Cohere's reranking model to ensure the most relevant memories are used for context
- **Structured Data Management**: Formats and stores conversations with metadata for optimal retrieval
- **Scalable Architecture**: Handles unlimited conversations and users with consistent performance
- **No Context Window Limitations**: Effectively bypasses LLM token limits through intelligent retrieval

💡 Use Cases
- **Customer Support Bots**: Remember customer history, preferences, and previous issues
- **Personal AI Assistants**: Maintain user preferences and conversation continuity over months or years
- **Knowledge Management Systems**: Build accumulated knowledge bases from user interactions
- **Educational Tutors**: Track student progress and adapt teaching based on history
- **Enterprise Chatbots**: Maintain context across departments and long-term projects

🛠️ How It Works
1. User Input: Receives messages through n8n's chat interface
2. Memory Retrieval: Searches the vector database for relevant past conversations
3. Context Integration: AI agent uses retrieved memories to generate contextual responses
4. Response Generation: Creates informed responses based on historical context
5. Memory Storage: Stores new conversation data for future retrieval

📋 Requirements
- **OpenAI API Key**: For embeddings and chat completions
- **Qdrant Instance**: Cloud or self-hosted vector database
- **Cohere API Key**: Optional, for enhanced retrieval accuracy
- **n8n Instance**: Version 1.0+ with LangChain nodes

🚀 Quick Setup
1. Import this workflow into your n8n instance
2. Configure credentials for OpenAI, Qdrant, and Cohere
3. Create a Qdrant collection named 'ltm' with 1024 dimensions (a setup sketch follows below)
4. Activate the workflow and start chatting!
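A one-time setup sketch for the 'ltm' collection from Quick Setup step 3, using the official qdrant-client. The 1024-dimension size matches the template's requirement; your embedding model must produce vectors of the same size (Cohere embeddings, for example, are 1024-dimensional), and the distance metric here is an assumption.

```python
from qdrant_client import QdrantClient  # pip install qdrant-client
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(url="https://YOUR-QDRANT-HOST:6333",
                      api_key="YOUR_QDRANT_KEY")

# Collection name and dimension per the Quick Setup instructions.
client.create_collection(
    collection_name="ltm",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)
```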
📊 Performance Metrics
- **Response Time**: 2-3 seconds average
- **Memory Recall Accuracy**: 95%+
- **Token Usage**: 50-70% reduction compared to full context inclusion
- **Scalability**: Tested with 100k+ stored conversations

💰 Cost Optimization
- Uses GPT-4o-mini for optimal cost/performance balance
- Implements efficient chunking strategies to minimize embedding costs
- Reranking can be disabled to save on Cohere API costs
- Average cost: ~$0.01 per conversation

📖 Learn More
For a detailed explanation of the architecture and implementation details, check out the comprehensive guide: Long-Term Memory for LLMs using Vector Store - A Practical Approach with n8n and Qdrant

🤝 Support
- **Documentation**: Full setup guide in the article above
- **Community**: Share your experiences and get help in n8n community forums
- **Issues**: Report bugs or request features on the workflow page

Tags: #AI #LangChain #VectorDatabase #LongTermMemory #RAG #OpenAI #Qdrant #ChatBot #MemorySystem #ArtificialIntelligence