by Don Jayamaha Jr
This AI sub-agent aggregates Tesla (TSLA) trading signals across multiple timeframes using real-time technical indicators and candlestick behavior. It is a core component of the Tesla Quant Trading AI system. Powered by GPT-4.1, it consolidates 15-minute, 1-hour, and 1-day indicators, adds candlestick pattern data, and produces a unified JSON signal for downstream use by the master agent.

⚠️ This agent is not standalone. It is triggered by the Tesla Quant Trading AI Agent via Execute Workflow.

Requires: 4 connected sub-agents and an Alpha Vantage Premium API key.

## Required Sub-Workflows

To use this workflow, you must install:

- Tesla 15min Indicators Tool
- Tesla 1hour Indicators Tool
- Tesla 1day Indicators Tool
- Tesla 1hour and 1day Klines Tool
- Tesla Quant Technical Indicators Webhooks Tool (provides Alpha Vantage data)

## What This Agent Does

- Fetches pre-cleaned 20-point JSON outputs from the 4 sub-agents listed above
- Analyzes each timeframe individually:
  - 15m: momentum and short-term setups
  - 1h: confirmation of emerging trends
  - 1d: macro positioning and trend alignment
  - Klines: candlestick reversal patterns and volume divergence
- Generates a structured final signal in JSON with:
  - Trading stance: Buy, Sell, Hold, or Cautious
  - Confidence score (0.0–1.0)
  - Multi-timeframe indicator breakdown
  - Candlestick and volume divergence annotations

## Sample Output

    {
      "summary": "TSLA momentum is weakening short-term. 1h MACD shows bearish crossover, RSI declining. 1d candles confirm potential reversal setup.",
      "signal": "Cautious Sell",
      "confidence": 0.81,
      "multiTimeframeInsights": {
        "15m": { "RSI": 68.3, "MACD": { "macd": 0.53, "signal": 0.61 }, ... },
        "1h": { "RSI": 65.0, "MACD": { "macd": -0.32, "signal": 0.11 }, ... },
        "1d": { "BBANDS": { ... }, ... },
        "candlestickPatterns": { "1h": "Doji", "1d": "Bearish Engulfing" },
        "volumeDivergence": { "1h": "Bearish", "1d": "Neutral" }
      }
    }

## Setup Instructions

1. Import this workflow into n8n and name it Tesla_Financial_Market_Data_Analyst_Tool.
2. Add the required API credentials:
   - Alpha Vantage Premium (via HTTP Query Auth)
   - OpenAI GPT-4.1 for reasoning and synthesis
3. Link the required sub-agents:
   - Connect the 4 tool workflows listed above to their respective Tool Workflow nodes
   - Connect the webhook provider for data fetches
4. Set up as a sub-agent:
   - This workflow must be triggered using Execute Workflow from the parent agent
   - Pass in message (optional context) and sessionId (used for memory continuity)

## Sticky Notes Provided

- Tesla Financial Market Data Analyst: core logic overview
- 15m / 1h / 1d Tool Notes: indicator lists and use cases
- Klines Tool Note: candlestick and volume divergence patterns
- GPT Reasoning Note: GPT-4.1 handles final synthesis
- Sub-Workflow Trigger: proper integration with the parent agent
- Memory Buffer: maintains session context across evaluations

## Licensing & Support

© 2025 Treasurium Capital Limited Company. The logic, prompt design, and multi-agent architecture are proprietary and IP-protected.

For support or collaboration inquiries:

- Don Jayamaha on LinkedIn
- n8n Creator Profile

Unify your Tesla trading logic across timeframes: automated, AI-powered, and built for scalpers and swing traders.
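As a rough illustration of how the parent agent (or any downstream consumer) might read the unified JSON signal, here is a minimal JavaScript sketch based on the field names in the sample output above. The `interpretSignal` helper and the 0.7 confidence threshold are illustrative assumptions, not part of the template.

```javascript
// Hypothetical consumer of the unified signal produced by this sub-agent.
// Field names follow the sample output above; the 0.7 threshold is illustrative.
function interpretSignal(signal) {
  const actionable = signal.confidence >= 0.7; // assumed cutoff, not the template's
  return {
    stance: signal.signal, // e.g. "Cautious Sell"
    act: actionable,       // only act on high-confidence output
    headline: `${signal.signal} (${Math.round(signal.confidence * 100)}% confidence): ${signal.summary}`,
  };
}

const example = {
  summary: "TSLA momentum is weakening short-term.",
  signal: "Cautious Sell",
  confidence: 0.81,
};
console.log(interpretSignal(example));
```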
by Saswat Saubhagya Rout
## Use Case

This n8n workflow automates the creation and publication of technical blog posts based on a list of topics stored in Google Sheets. It fetches context using Tavily and Wikipedia, generates Markdown-formatted content with Gemini AI, commits it to a GitHub repository, and updates a Jekyll-powered blog, all without manual intervention.

Ideal for developers, bloggers, or content teams who want to streamline technical content creation and publishing.

## Setup Instructions

### Prerequisites

- n8n (cloud or self-hosted)
- Tavily API key
- Google Sheets with blog topics
- Gemini (Google Palm) API key
- GitHub repository (Jekyll enabled)
- GitHub OAuth2 credentials
- Google OAuth2 credentials

### Setup Steps

1. Import the workflow JSON into your n8n instance.
2. Set up the following credentials in n8n:
   - Tavily API
   - Google Sheets OAuth2
   - Google Palm/Gemini AI
   - GitHub OAuth2
3. Prepare your Google Sheet:
   - Columns: Title, status, row_number
   - Set status to blank for topics to be picked up.
4. Configure:
   - GitHub repo and _posts/ path
   - Jekyll setup (front matter, _config.yml, GitHub Pages)
5. Adjust prompt/custom parameters if needed.
6. Enable and deploy the workflow. Schedule it daily or trigger it manually.

## Workflow Details

| Node | Function |
|------|----------|
| Schedule Trigger | Triggers the flow at a set interval |
| Google Sheets (Get Topic) | Fetches the next incomplete blog topic |
| Extract Topic | Parses topic text from the sheet |
| Tavily Search | Gathers up-to-date content related to the topic |
| Wikipedia Tool | Optionally adds more context or images |
| Summarize Results | Formats the context for the AI |
| Gemini AI Agent (LangChain) | Generates a Markdown blog post with YAML front matter |
| Set File Parameters | Prepares the filename, content, and commit message |
| GitHub Commit | Uploads the .md file to the _posts/ directory |
| Update Google Sheet | Marks topic as done after successful commit |

## Customization Options

- Change the LLM prompt (e.g. tone, depth, format).
- Use OpenAI instead of Gemini by switching nodes.
- Modify the filename pattern or GitHub repo path.
- Add Slack/Discord notifications after publish.
- Extend the flow to upload images or embed YouTube links.

## Community Nodes Used

This workflow uses the following community nodes:

- @tavily/n8n-nodes-tavily.tavily (for deep search)

> ⚠️ Ensure these are installed and enabled in your n8n instance.

## Pro Tips

- Use GitHub Actions to trigger an automatic Jekyll build post-commit.
- Structure blog posts with front matter, headings, and a table of contents for SEO.
- Set the Schedule Trigger to daily at a fixed time to keep content flowing.
- Enhance formatting in AI output using code blocks, images, and lists.

## Example Output

    title: "How LLMs Are Changing Web Development"
    date: "2025-07-25"
    categories: [webdev, AI]
    tags: [LLM, Gemini, n8n, automation]
    excerpt: "Learn how LLMs like Gemini are transforming how we generate and deploy developer content."
    author: "Saswat Saubhagya"

Table of Contents: Introduction, Understanding LLMs, Use Cases in Web Development, Challenges, Conclusion ...
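To make the Set File Parameters step more concrete, here is a minimal JavaScript sketch of how a Jekyll filename and commit message could be derived from the topic title. The `buildPostParams` helper and its slug rules are assumptions for illustration; the actual node may use different field names.

```javascript
// A minimal sketch of what the "Set File Parameters" step could compute:
// a Jekyll-style filename (_posts/YYYY-MM-DD-slug.md) and a commit message.
function buildPostParams(title, markdownBody, date = new Date()) {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // non-alphanumerics become hyphens
    .replace(/(^-|-$)/g, "");    // trim leading/trailing hyphens
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return {
    path: `_posts/${day}-${slug}.md`,
    content: markdownBody,
    commitMessage: `Add post: ${title} (${day})`,
  };
}

console.log(
  buildPostParams("How LLMs Are Changing Web Development", "---\ntitle: ...\n---\n")
);
```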
by Joseph
## YouTube Transcript Extraction Workflow

This n8n workflow extracts and processes transcripts from YouTube videos using the YouTube Transcript API on RapidAPI. It allows users to retrieve subtitles from YouTube videos, clean them up, and return structured transcript data for further processing.

### Table of Contents

- Problem Statement & Target Audience
- Pre-conditions & API Requirements
- Step-by-Step Workflow Explanation
- Customization Guide
- How to Set Up This Workflow

### Problem Statement & Target Audience

Who is this for? This workflow is ideal for content creators, researchers, and developers who need to:

- **Extract** subtitles from YouTube videos automatically.
- **Format and clean** transcript data for readability.
- Use transcripts for summarization, content repurposing, or language analysis.

### Pre-conditions & API Requirements

API required:

- **YouTube Transcript API** (RapidAPI)

n8n setup prerequisites:

- A running n8n instance (Installation Guide)
- A RapidAPI account to access the YouTube Transcript API
- An API key from RapidAPI to authenticate requests

### Step-by-Step Workflow Explanation

1. **Input YouTube Video URL (Trigger)**: provides a simple input form where users enter a YouTube video URL.
2. **HTTP Request Node (Retrieve Transcript Data)**:
   - Makes a POST request to the YouTube Transcript API via RapidAPI.
   - Passes the video URL received from the input form.
   - Uses an environment variable to store the API key securely.
3. **Function Node (Process Transcript)**:
   - Receives the API response containing the raw transcript.
   - Processes and cleans the transcript: removes unwanted characters and formats the text for readability.
   - Handles errors when no transcript is available.
   - Outputs both the raw and cleaned transcript for further use.
4. **Set Field Node (Response Formatting)**:
   - Structures the processed transcript data into a user-friendly format.
   - Returns the final transcript data to the client.

### Customization Guide

1. Modify transcript cleaning rules: update the Function Node to apply custom text processing, such as removing timestamps or changing the output format (e.g., JSON, plain text).
2. Store transcripts in a database: add a Database Node (e.g., MySQL, PostgreSQL, or Firebase) to save transcripts.
3. Generate summaries from transcripts: integrate AI services (e.g., OpenAI, Google Gemini) to summarize transcripts.
4. Convert transcripts into speech: use the ElevenLabs API to generate an AI-powered voiceover from transcripts.

### How to Set Up This Workflow

Step 1: Import the workflow into n8n
- Download or copy the workflow JSON file.
- Import it into your n8n instance.

Step 2: Set up the API key
- Sign up for the YouTube Transcript API on RapidAPI and subscribe to it.
- Copy your API key and paste it where "your_api_key" appears.

Step 3: Activate the workflow
- Start the workflow in n8n.
- Enter a YouTube video URL in the input form.
- The workflow will return a cleaned transcript.

This workflow ensures seamless YouTube transcript extraction and processing with minimal manual effort.
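As a reference point for the Function Node described in step 3, the sketch below shows one plausible cleaning pass in JavaScript. The input shape (an array of caption segments with a `text` field) is an assumption; adjust it to match the actual RapidAPI response.

```javascript
// A simple cleaning pass in the spirit of the Function Node described above.
// The segment shape is assumed; adapt it to the real API payload.
function cleanTranscript(segments) {
  const raw = segments.map((s) => s.text).join(" ");
  const cleaned = raw
    .replace(/\[.*?\]/g, "")  // drop bracketed cues like [Music]
    .replace(/&amp;/g, "&")   // unescape a common HTML entity
    .replace(/\s+/g, " ")     // collapse whitespace
    .trim();
  return { raw, cleaned };
}

console.log(
  cleanTranscript([{ text: "hello  [Music]" }, { text: "world &amp; friends" }])
);
```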
by Ferenc Erb
## Overview

An automation workflow that creates a complete REST API for digitally signing PDF documents using n8n webhooks. This service demonstrates how to implement secure document signing functionality through standardized API endpoints with file upload and download capabilities.

## Use Case

This workflow is designed for developers and automation specialists who need to implement digital document signing. It's particularly useful for:

- Integrating PDF signing capabilities into existing document workflows
- API-based automation of signature processes
- Creating proof-of-concept implementations for document verification systems
- Learning n8n's webhook capabilities and file handling techniques
- Testing PDF signing in development environments before production implementation

## What This Workflow Does

### API-Based Document Management

- Exposes RESTful webhook endpoints for all document operations
- Handles multipart/form-data uploads for PDF documents
- Processes JSON payloads for signing configuration
- Provides download functionality for completed documents

### Digital Certificate Handling

- Uploads existing PFX/PKCS#12 digital certificates
- Generates new certificates with customizable attributes
- Securely manages certificate storage and access
- Associates certificates with signing operations

### Cryptographic PDF Signing

- Applies digital signatures using industry-standard cryptographic methods
- Embeds signature information within the PDF document structure
- Validates document integrity through cryptographic verification
- Preserves the original document while adding signature elements

### Webhook Integration System

- Routes different API methods to appropriate handlers
- Validates request payloads and file content
- Manages authentication through webhook paths
- Returns structured responses for integration with other systems

## Technical Architecture

### Components

- **API Gateway**: n8n webhook nodes that receive external requests
- **Request Router**: Switch nodes that direct operations based on method parameters
- **Document Processor**: Function nodes for PDF manipulation and verification
- **Certificate Manager**: Specialized nodes for cryptographic key operations
- **Storage Interface**: File operation nodes for document persistence
- **Response Formatter**: Nodes that structure API responses

### Integration Flow

Client Request → Webhook Endpoint → Method Router → Processing Engine → Digital Signing → Storage → Response Generation → Client Response

## Setup Instructions

### Prerequisites

- n8n installation (minimum version 0.214.0)
- Node.js 14 or higher
- Required environment variable: NODE_FUNCTION_ALLOW_EXTERNAL: "node-forge,@signpdf/signpdf,@signpdf/signer-p12,@signpdf/placeholder-plain"

### Configuration Steps

1. Import the workflow
   - Import the workflow JSON into your n8n instance
   - Activate the workflow to enable the webhooks
2. Configure storage
   - Set the storage path variables in the workflow
   - Ensure proper permissions on the storage directories
3. Test the API endpoints
   - Use the included test scripts to verify functionality
   - Test PDF upload, certificate generation, and signing
4. Integration
   - Document the webhook URLs for integration with other systems
   - Configure error handling according to your requirements

## Testing Methods

Test the workflow functionality using various HTTP requests and JSON data:

- Upload PDF documents to the document processing endpoint
- Upload or generate digital certificates
- Execute PDF signing operations
- Download signed documents from the download endpoint

## Webhook Endpoints

The workflow exposes two primary webhook endpoints that form a complete API for PDF digital signing operations.
### 1. Document Processing Endpoint (/webhook/docu-digi-sign)

This endpoint handles all document and certificate operations.

- Method: Upload PDF
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- Method: Upload Certificate
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- Method: Generate Certificate
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, subjectCN, issuerCN, serialNumber, validFrom, validTo, password
- Method: Sign PDF
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, inputPdf, pfxFile, pfxPassword

### 2. Document Download Endpoint (/webhook/docu-download)

This endpoint handles the retrieval of processed documents.

- Method: Download Signed PDF
  - HTTP: GET
  - Content-Type: application/json
  - Parameters: method, fileType, fileName

## Key Workflow Sections

The workflow is organized into logical sections with clear responsibilities:

- **Request Processing**: Parses incoming webhook data
- **Method Routing**: Directs requests to appropriate handlers
- **Document Management**: Handles file operations and storage
- **Cryptographic Operations**: Manages signing and certificate functions
- **Response Formatting**: Structures and returns results
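For orientation, here is a hedged example of calling the Sign PDF method from Node.js (18+, which ships a global fetch). The base URL, file names, and the literal "signPdf" method value are placeholders; match them to your instance and to the values your method router actually expects.

```javascript
// Hedged example call to the document processing endpoint's Sign PDF method.
// Parameter names come from the endpoint description above; the values are placeholders.
const baseUrl = "https://your-n8n-host/webhook/docu-digi-sign"; // placeholder host

async function signPdf() {
  const res = await fetch(baseUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      method: "signPdf",     // assumed value; use whatever your Switch node routes on
      inputPdf: "contract.pdf",
      pfxFile: "signer.pfx",
      pfxPassword: "changeit",
    }),
  });
  console.log(await res.json());
}

signPdf().catch(console.error);
```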
by Akash Kankariya
Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics, all in one streamlined process.

Perfect for:

- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

## How It Works

1. Input your search term: simply enter any keyword or topic you want to research.
2. Select the video format: choose between short, medium, or long videos.
3. Choose the number of videos: define how many videos to analyze in bulk.
4. Automatic data fetch: the workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API.
5. Performance scoring: videos are scored based on engagement rates with easy-to-understand labels, from HOLY HELL (viral) down to Dead (see the scoring sketch below).
6. Export to Google Sheets: all data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

## Setup Instructions

1. Google API key
   - Get your YouTube Data API key from the Google Developers Console.
   - Add it securely in the n8n credentials manager (do not hardcode it).
2. Google Sheets setup
   - Create a Google Sheet to store your results (a template link is provided).
   - Share the sheet with the Google account used in n8n.
   - Update the workflow with your sheet's Document ID and Sheet Name if needed.
3. Run the workflow
   - Trigger the form webhook via browser or POST call.
   - Enter the search term, format, and number of videos.
   - Let it process and check your Google Sheet for insights!

## Features

- Bulk fetches the latest and top-viewed YouTube videos.
- Intelligent video performance scoring with emoji labels for quick insights.
- Organizes data into Google Sheets with thumbnail previews.
- Easy to customize search parameters via an intuitive form.
- Fully automated, no manual API calls needed.

Get started today! Boost your YouTube content strategy and stay ahead with this powerful viral video research automation. Try it now on your n8n instance and tap into the world of viral content like a pro.
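The exact scoring formula lives inside the workflow; the JavaScript sketch below is only an illustrative take on engagement-based scoring with label thresholds. The weights and cutoffs are assumptions, not the template's actual numbers.

```javascript
// Illustrative engagement score: blends likes/comments relative to views with
// reach relative to subscriber count. Weights and thresholds are assumptions.
function scoreVideo({ views, likes, comments, subscribers }) {
  const engagement = views > 0 ? (likes + comments * 2) / views : 0;
  const reach = subscribers > 0 ? views / subscribers : 0;
  const score = engagement * 50 + Math.min(reach, 10) * 5;

  let label = "Dead";
  if (score > 40) label = "HOLY HELL";
  else if (score > 20) label = "Great";
  else if (score > 10) label = "Decent";

  return { score: Number(score.toFixed(2)), label };
}

console.log(scoreVideo({ views: 120000, likes: 9500, comments: 800, subscribers: 15000 }));
```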
by Hichul
This workflow automatically drafts replies to your emails using an OpenAI Assistant, streamlining your inbox management. It's designed for support teams, sales professionals, or anyone looking to accelerate their email response process by leveraging AI to create context-aware draft replies in Gmail.

## How it works

1. The workflow runs on a schedule (every minute) to check for emails with a specific label in your Gmail account.
2. It takes the content of the newest email in a thread and sends it to your designated OpenAI Assistant for processing.
3. A draft reply is generated by the AI assistant.
4. The AI-generated reply is added as a draft to the original email thread in Gmail.
5. Finally, the initial trigger label is removed from the email thread to prevent it from being processed again.

## Set up steps

1. Connect your accounts: connect your Gmail and OpenAI accounts in the respective nodes.
2. Configure the trigger: in the "Get threads with specific labels" Gmail node, specify the label you want to use to trigger the workflow (e.g., generate-reply). Any email you apply this label to will be processed.
3. Select your OpenAI Assistant: in the "Ask OpenAI Assistant" node, choose the pre-configured Assistant you want to use for generating replies.
4. Configure label removal: in the "Remove AI label from email" Gmail node, ensure the same trigger label is selected to be removed after the draft has been successfully created.
5. Activate the workflow: save and activate the workflow to begin automating your email replies.
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This workflow automatically monitors competitor prices, analyzes market demand, and optimizes product pricing in real-time for maximum profitability using advanced AI algorithms.

### Key Steps

1. Hourly Trigger: runs automatically every hour for real-time price optimization and competitive response.
2. Multi-Platform Competitor Monitoring: uses AI-powered scrapers to track prices from Amazon, Best Buy, Walmart, and Target.
3. Market Demand Analysis: analyzes Google Trends data to understand search volume trends and seasonal patterns.
4. Customer Sentiment Analysis: reviews customer feedback to assess price sensitivity and value perception.
5. AI Pricing Optimization: calculates optimal prices using weighted factors including competitor positioning, demand indicators, and inventory levels (see the sketch below).
6. Automated Price Updates: directly updates e-commerce platform prices when significant opportunities are identified.
7. Comprehensive Analytics: logs all pricing decisions and revenue projections to Google Sheets for performance tracking.

## Set up steps

Setup time: 15-20 minutes

1. Configure ScrapeGraphAI credentials: add your ScrapeGraphAI API key for AI-powered competitor and market analysis.
2. Set up the e-commerce API connection: connect your e-commerce platform API for automated price updates.
3. Configure Google Sheets: set up Google Sheets connections for pricing history and revenue analytics logging.
4. Set up Slack notifications: connect your Slack workspace for real-time pricing alerts and team updates.
5. Customize the product catalog: modify the product configuration with your actual products, costs, and pricing constraints.
6. Adjust the monitoring frequency: change the trigger timing based on your business needs (hourly, daily, etc.).
7. Configure competitor platforms: update competitor URLs and selectors for your target market.

## What you get

- **Real-time price optimization** with 15-25% potential revenue increase through intelligent pricing
- **Competitive intelligence** with automated monitoring of major e-commerce platforms
- **Market demand insights** with seasonal and trend-based pricing adjustments
- **Customer sentiment analysis** to understand price sensitivity and value perception
- **Automated price updates** when significant opportunities are identified (>2% change, >70% confidence)
- **Comprehensive analytics** with pricing history, revenue projections, and performance tracking
- **Team notifications** with detailed market analysis and pricing recommendations
- **Margin protection** with intelligent constraints to maintain profitability
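As a simplified view of the AI pricing optimization step, the JavaScript sketch below blends competitor positioning, demand, and sentiment into a target price, applies a margin floor, and gates updates on the >2% change and >70% confidence rules mentioned above. The weights and field names are assumptions for illustration, not the workflow's actual algorithm.

```javascript
// Simplified weighted pricing decision. Weights, the 2% change gate, the 70%
// confidence gate, and the margin floor are illustrative assumptions.
function proposePrice(product, market) {
  const { currentPrice, cost, minMarginPct } = product;
  const { competitorMedian, demandIndex, sentimentIndex, confidence } = market;

  // Weighted blend of competitor positioning, demand, and sentiment.
  const target =
    competitorMedian * 0.6 +
    currentPrice * (1 + 0.05 * demandIndex) * 0.3 +
    currentPrice * (1 + 0.03 * sentimentIndex) * 0.1;

  // Margin protection: never price below cost plus the minimum margin.
  const floor = cost * (1 + minMarginPct);
  const optimal = Math.max(target, floor);

  const changePct = Math.abs(optimal - currentPrice) / currentPrice;
  const shouldUpdate = changePct > 0.02 && confidence > 0.7;
  return { optimal: Number(optimal.toFixed(2)), changePct, shouldUpdate };
}

console.log(
  proposePrice(
    { currentPrice: 99, cost: 60, minMarginPct: 0.2 },
    { competitorMedian: 94, demandIndex: 0.4, sentimentIndex: -0.1, confidence: 0.82 }
  )
);
```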
by Kanaka Kishore Kandregula
## Daily Magento 2 Stock Check Automation

This workflow identifies SKUs with low inventory per source and sends daily alerts via:

- Gmail (HTML email)
- Slack (formatted text message)

This automation empowers store owners and operations teams to stay ahead of inventory issues by proactively monitoring stock levels across all Magento 2 sources. By receiving early alerts for low-stock products, businesses can restock before items sell out, ensuring continuous product availability, reducing missed sales opportunities, and maintaining customer trust. Avoiding stockouts not only protects your brand reputation but also keeps your store competitive by preventing customers from turning to competitors due to unavailable items. Timely restocking leads to higher fulfillment rates, improved customer satisfaction, and ultimately, stronger revenue and long-term loyalty.

### Features

- Filters out configurable, virtual, and downloadable products
- Uses Magento 2 MSI stock per source
- Customizable thresholds (default: ≤10 overall or ≤5 per source; see the sketch below)
- HTML-formatted email report
- Slack notification with a code-formatted text message
- Runs daily via Cron (08:50 AM)
- No third-party modules needed
- One-time setup

### Credentials Used

- HTTP Request (Magento 2 REST API using a Bearer Token)
- Gmail (OAuth2)
- Slack (OAuth2 or Webhook)

### Tags

Magento, Inventory, MSI, Stock Alert, Ecommerce, Slack, Gmail, Automation

### Category

E-commerce: Magento 2 (Adobe Commerce)

### Author

Kanaka Kishore Kandregula, Certified Magento 2 Developer
https://gravatar.com/kmyprojects
https://www.linkedin.com/in/kanakakishore
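A minimal JavaScript sketch of the low-stock filter, assuming the input mirrors Magento's MSI source items (sku, source_code, quantity). The thresholds match the defaults above; the helper itself is illustrative, not the workflow's exact code.

```javascript
// Flag SKUs at or below the per-source threshold (5) or whose total across
// sources is at or below the overall threshold (10). Input shape is assumed.
function findLowStock(sourceItems, perSourceMax = 5, overallMax = 10) {
  const totals = {};
  const perSourceHits = [];
  for (const item of sourceItems) {
    totals[item.sku] = (totals[item.sku] || 0) + item.quantity;
    if (item.quantity <= perSourceMax) perSourceHits.push(item);
  }
  const overallHits = Object.entries(totals)
    .filter(([, qty]) => qty <= overallMax)
    .map(([sku, qty]) => ({ sku, total: qty }));
  return { perSourceHits, overallHits };
}

console.log(
  findLowStock([
    { sku: "TSH-001", source_code: "warehouse_1", quantity: 3 },
    { sku: "TSH-001", source_code: "warehouse_2", quantity: 4 },
    { sku: "MUG-010", source_code: "warehouse_1", quantity: 40 },
  ])
);
```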
by Chad McGreanor
## Overview

This workflow automates LinkedIn posts using OpenAI. The prompts are stored in the workflow and can be customized to fit your needs. The workflow uses a combination of a Schedule Trigger, a code step that determines what day of the week it is (no posting Friday through Sunday), a prompts node that sets your OpenAI prompts, and a random selection of a prompt so that you are not generating content that looks repetitive. We send that all to the OpenAI API, select a random time, have the final LinkedIn post sent to your Telegram for approval, wait for the correct time slot once approved, and then post to your LinkedIn account using the LinkedIn node.

## How it works

1. Run or schedule the workflow in n8n. The automation can be triggered manually or on a custom schedule (excluding weekends if needed). Customize the prompts in the Prompt node to suit your needs (a sketch of the weekday and prompt-selection logic follows below).
2. A random LinkedIn post prompt is selected. Pre-written prompts are rotated to keep content fresh and non-repetitive.
3. OpenAI generates the LinkedIn post. The prompt is sent to OpenAI via API, and the result is returned in clean, ready-to-use form.
4. You receive the draft via Telegram. The post is sent to Telegram for quick approval or review.
5. The post is scheduled or published via the LinkedIn connector. Once approved, the workflow delays until the target time, then sends the content to LinkedIn.

## What's needed

An OpenAI API key, a LinkedIn account, and a Telegram account. For Telegram you will need to configure the bot service.

## Step-by-Step: Telegram Approval for Your Workflow

A. Set up a Telegram bot
1. Open Telegram and search for @BotFather.
2. Start a chat and type /newbot to create a bot.
3. Give your bot a name and a unique username (e.g., YourApprovalBot).
4. Copy the API token that BotFather gives you.

B. Add your bot to a private chat (with you)
1. Find your bot in Telegram and click "Start" to activate it.
2. Send a test message (like "hello") so the chat is created.

C. Get your user ID
1. Search for "userinfobot" or use sites like userinfobot in Telegram.
2. Type /start and it will reply with your Telegram user ID.

## OpenAI powers the LinkedIn post creation

Add your OpenAI API key:
1. Log in to your OpenAI Platform account: https://platform.openai.com/.
2. Go to API keys and create a new secret key.
3. In n8n, create a new "OpenAI API" credential, paste your API key, and give it a name.
4. Apply the credential to the OpenAI Message node.

## Connect your LinkedIn account to the LinkedIn node

Select your account from the LinkedIn dropdown box.
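The weekday gate and random prompt selection could look roughly like the JavaScript sketch below. The prompt strings are placeholders; the logic simply skips Friday through Sunday as described above.

```javascript
// Sketch of the weekday gate plus random prompt selection described above.
// Prompts are placeholders; days 5 (Fri), 6 (Sat), and 0 (Sun) are skipped.
function pickPrompt(prompts, now = new Date()) {
  const day = now.getDay(); // 0 = Sunday ... 6 = Saturday
  if (day === 5 || day === 6 || day === 0) {
    return { post: false, reason: "No posting Friday through Sunday" };
  }
  const prompt = prompts[Math.floor(Math.random() * prompts.length)];
  return { post: true, prompt };
}

const prompts = [
  "Share a practical automation lesson learned this week.",
  "Write a short post about a common n8n mistake and its fix.",
];
console.log(pickPrompt(prompts));
```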
by Einar César Santos
## Long-Term Memory System for AI Agents with Vector Database

Transform your AI assistants into intelligent agents with persistent memory capabilities. This production-ready workflow implements a sophisticated long-term memory system using vector databases, enabling AI agents to remember conversations, user preferences, and contextual information across unlimited sessions.

### What This Template Does

This workflow creates an AI assistant that never forgets. Unlike traditional chatbots that lose context after each session, this implementation uses vector database technology to store and retrieve conversation history semantically, providing truly persistent memory for your AI agents.

### Key Features

- **Persistent Context Storage**: Automatically stores all conversations in a vector database for permanent retrieval
- **Semantic Memory Search**: Uses advanced embedding models to find relevant past interactions based on meaning, not just keywords
- **Intelligent Reranking**: Employs Cohere's reranking model to ensure the most relevant memories are used for context
- **Structured Data Management**: Formats and stores conversations with metadata for optimal retrieval
- **Scalable Architecture**: Handles unlimited conversations and users with consistent performance
- **No Context Window Limitations**: Effectively bypasses LLM token limits through intelligent retrieval

### Use Cases

- **Customer Support Bots**: Remember customer history, preferences, and previous issues
- **Personal AI Assistants**: Maintain user preferences and conversation continuity over months or years
- **Knowledge Management Systems**: Build accumulated knowledge bases from user interactions
- **Educational Tutors**: Track student progress and adapt teaching based on history
- **Enterprise Chatbots**: Maintain context across departments and long-term projects

### How It Works

1. User input: receives messages through n8n's chat interface
2. Memory retrieval: searches the vector database for relevant past conversations
3. Context integration: the AI agent uses retrieved memories to generate contextual responses
4. Response generation: creates informed responses based on historical context
5. Memory storage: stores new conversation data for future retrieval

### Requirements

- **OpenAI API Key**: for embeddings and chat completions
- **Qdrant Instance**: cloud or self-hosted vector database
- **Cohere API Key**: optional, for enhanced retrieval accuracy
- **n8n Instance**: version 1.0+ with LangChain nodes

### Quick Setup

1. Import this workflow into your n8n instance
2. Configure credentials for OpenAI, Qdrant, and Cohere
3. Create a Qdrant collection named 'ltm' with 1024 dimensions (see the sketch below)
4. Activate the workflow and start chatting!
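Step 3 of the Quick Setup can be done through the Qdrant UI or its REST API. Below is a hedged JavaScript sketch of the REST call that creates the 'ltm' collection with 1024-dimensional vectors; the host, API key, and Cosine distance metric are assumptions, so check the Qdrant documentation for your version if the request shape differs.

```javascript
// Hedged sketch: create the 'ltm' collection (1024-dim vectors) via Qdrant's REST API.
const QDRANT_URL = "https://your-qdrant-host:6333"; // placeholder
const QDRANT_API_KEY = "your-api-key";              // placeholder

async function createLtmCollection() {
  const res = await fetch(`${QDRANT_URL}/collections/ltm`, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      // Qdrant Cloud typically expects an api-key header; omit for unsecured local instances.
      "api-key": QDRANT_API_KEY,
    },
    body: JSON.stringify({
      vectors: { size: 1024, distance: "Cosine" }, // Cosine is an assumed default
    }),
  });
  console.log(await res.json());
}

createLtmCollection().catch(console.error);
```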
### Performance Metrics

- **Response Time**: 2-3 seconds average
- **Memory Recall Accuracy**: 95%+
- **Token Usage**: 50-70% reduction compared to full context inclusion
- **Scalability**: tested with 100k+ stored conversations

### Cost Optimization

- Uses GPT-4o-mini for an optimal cost/performance balance
- Implements efficient chunking strategies to minimize embedding costs
- Reranking can be disabled to save on Cohere API costs
- Average cost: ~$0.01 per conversation

### Learn More

For a detailed explanation of the architecture and implementation details, check out the comprehensive guide: Long-Term Memory for LLMs using Vector Store - A Practical Approach with n8n and Qdrant

### Support

- **Documentation**: full setup guide in the article above
- **Community**: share your experiences and get help in the n8n community forums
- **Issues**: report bugs or request features on the workflow page

Tags: #AI #LangChain #VectorDatabase #LongTermMemory #RAG #OpenAI #Qdrant #ChatBot #MemorySystem #ArtificialIntelligence
by Ranjan Dailata
## Notice

Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

The Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations.

This automated workflow is designed for:

- Food Bloggers & Culinary Creators: who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- Nutritionists & Health Coaches: who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- AI/ML Engineers & Data Scientists: building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- Grocery & Meal Kit Platforms: who aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- Recipe Aggregator Startups: looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- Developers Integrating Cooking Features: into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

## What problem is this workflow solving?

This workflow provides:

- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

## What this workflow does

1. Set Recipe Extract URL
   - Configure the recipe website URL in the input node
   - Set your Bright Data zone name and authentication
2. Paginated Data Extract
   - Triggers a paginated extraction across multiple pages (recipe listing, index, or search pages)
   - Returns a list of recipe links for processing
3. Loop Over Items
   - Loops through the array of recipe links
   - Each link is passed individually to the scraping engine
4. Bright Data MCP Client (per recipe)
   - Scrapes each individual recipe page using scrape_as_html
   - Smartly bypasses common anti-bot protections via Bright Data Web Unlocker
5. Structured Recipe Data Extract (via OpenAI GPT-4o mini)
   - Converts raw HTML to clean text using an LLM preprocessing node
   - Uses OpenAI GPT-4o mini to extract structured data
6. Webhook Notification
   - Pushes the structured recipe data to your configured webhook endpoint (see the sketch below)
   - Format: JSON payload, ideal for Slack, internal APIs, or dashboards
7. Save Response to Disk
   - Saves the structured recipe JSON information to the local file system

## Pre-conditions

- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the OpenAI account credentials.
5. Make sure to set the fields as part of Set the Recipe Extract URL.
6. Remember to set the webhook_url to receive a webhook notification with the recipe response.
7. Set the desired local path in the Write the structured content to disk node to save the recipe response.
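For illustration, here is a hedged JavaScript sketch of the webhook notification step: posting the structured recipe JSON to the configured webhook_url. The recipe fields shown are assumptions; the actual fields depend on the extraction prompt you configure.

```javascript
// Illustrative push of structured recipe data to the configured webhook endpoint.
// The URL and recipe fields are placeholders, not the template's fixed schema.
const WEBHOOK_URL = "https://example.com/recipe-webhook"; // placeholder

async function notifyWebhook(recipe) {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(recipe),
  });
  console.log("Webhook responded with status", res.status);
}

notifyWebhook({
  title: "Paneer Tikka Masala",
  cuisine: "Indian",
  servings: 4,
  ingredients: ["400 g paneer", "2 onions", "1 cup yogurt"],
  steps: ["Marinate the paneer", "Grill", "Simmer in sauce"],
  sourceUrl: "https://example.com/recipes/paneer-tikka-masala",
}).catch(console.error);
```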
## How to customize this workflow to your needs

You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:

1. Input Fields Node
   - Update the Recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. LLM Configuration
   - Swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer.
   - Modify the structured data prompt to extract the custom fields you wish.
3. Webhook Notification
   - Configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. Storage Destination
   - Change the Save to Disk node to store the structured recipe data in:
     - A cloud bucket (S3, GCS, Azure Blob, etc.)
     - A database (MongoDB, PostgreSQL, Firestore)
     - Google Sheets or Airtable for spreadsheet-style access.
by Trung Tran
## Smart Vendor Contract Renewal & Reminder Workflow with GPT-4.1 mini

Never miss a vendor renewal again! This smart workflow automatically tracks expiring contracts, reminds your finance team via Slack, and helps initiate renewal with vendors through email, all with built-in approval and logging. Perfect for managing both auto-renew and manual contracts.

### Who's it for

This workflow is designed for Finance and Procurement teams responsible for managing vendor/service contracts. It ensures timely notifications for expiring contracts and automates the initiation of renewal conversations with vendors.

### How it works / What it does

1. Daily Trigger: runs every day at 6:00 AM using a scheduler.
2. Retrieve Contract List:
   - Reads vendor contract data from a Google Sheet (or any data source).
   - Filters for contracts nearing their end date, using a Notice Period (days) field.
3. Branch Based on Renewal Type:
   - Auto-renew contracts: compose a Slack message summarizing the auto-renewal and notify the finance contact via Slack.
   - Manual renewal contracts: use an OpenAI-powered agent to generate a meaningful Slack message, send it, and wait for approval from the finance contact (e.g., within 8 hours). Upon approval, generate a formal HTML email and send it to the vendor to initiate the contract extension process.
4. (Optional) Logging: can be extended to log all actions (Slack messages, emails, approvals) to Google Sheets or other databases.

### How to set up

1. Prepare your Google Sheet. Include the following fields: Vendor Name, Vendor Email, Service Type, Contract Start Date, Contract End Date, Notice Period (days), Renewal Type, Finance Contact, Contact Email, Slack ID, Contract Value, Notes.
   Sample: https://docs.google.com/spreadsheets/d/1zdDgKyL0sY54By57Yz4dNokQC_oIbVxcCKeWJ6PADBM/edit?usp=sharing
2. Configure integrations:
   - Google Sheets API: to read contract data.
   - Slack API: to notify and wait for approval.
   - OpenAI API (GPT-4): to generate personalized reminders.
   - Email (SMTP/Gmail): to send emails to vendors.
3. Set the daily scheduler. Use a Cron node to trigger the workflow at 6:00 AM daily.

### Requirements

| Component | Required |
|-----------|----------|
| Google Sheets API | Yes |
| Slack API | Yes |
| OpenAI API (GPT-4) | Yes |
| Email (SMTP/Gmail) | Yes |
| n8n (self-hosted or Cloud) | Yes |
| Contract sheet with proper schema | Yes |

### How to customize the workflow

- **Adjust the reminder period**: modify the logic in the Find Expiring Vendors node (based on Contract End Date and Notice Period; see the sketch below).
- **Change the message tone or format**: customize the OpenAI agent's prompt or switch from plain text to branded HTML email.
- **Add logging or tracking**: add a node to append logs to a Google Sheet, Notion, or a database.
- **Replace the data source**: swap out Google Sheets for Airtable, PostgreSQL, or other CRM/database systems.
- **Adjust the wait/approval duration**: modify the sendAndWait Slack node timeout (e.g., from 8 hours to 2 hours).

### Optional Extensions

- Add a PDF contract preview via a Drive link
- Use GPT to summarize renewal terms
- Auto-create a Jira task for contract review
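A minimal JavaScript sketch of the expiring-contract filter, assuming the column names from the sample sheet above. The date math (today falling inside the notice window before the end date) is illustrative and may differ from the node's exact expression.

```javascript
// A contract is due for action when today is within its notice window
// before the end date. Column names match the sample sheet listed above.
function findExpiring(contracts, today = new Date()) {
  return contracts.filter((c) => {
    const end = new Date(c["Contract End Date"]);
    const noticeDays = Number(c["Notice Period (days)"]);
    const windowStart = new Date(end);
    windowStart.setDate(windowStart.getDate() - noticeDays);
    return today >= windowStart && today <= end;
  });
}

console.log(
  findExpiring([
    {
      "Vendor Name": "Acme Hosting",
      "Contract End Date": "2025-08-15",
      "Notice Period (days)": "30",
      "Renewal Type": "Manual",
    },
  ])
);
```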