by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector - fully automated through n8n.

**What It Does**
This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI's text-embedding-3-small model. The embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents. After successful processing, files are moved to a separate "vectorized" folder to avoid duplication.

**Use Cases**
- Powering Retrieval-Augmented Generation (RAG) AI agents
- Semantic search across private documents
- AI assistant knowledge ingestion
- Automated document pipelines for indexing or classification

**Workflow Highlights**
- **Trigger Options:** Manual or Scheduled (3 AM daily by default)
- **Supported File Types:** PDF, TXT, JSON
- **Embedding Stack:** LangChain Text Splitter, OpenAI Embeddings, PGVector
- **Deduplication:** Files are moved after processing
- **License:** CC BY-NC-SA 4.0
- **Author:** AlexK1919

**What You'll Need**
- **Google Drive OAuth2** credentials (connected to the Search Folder, Download File, and Move File nodes)
- **OpenAI API Key** (used in the Embeddings OpenAI node)
- **Postgres + PGVector** database (connected in the Postgres PGVector Store node)

**Step-by-Step Setup Instructions**
1. Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
2. Set your source folder ID in the Search Folder node - this is where incoming files are placed.
3. Set your processed folder ID in the Move File node - files will be moved here after vectorization.
4. Ensure you have a PGVector-enabled Postgres instance and enter the table name and collection in the Postgres PGVector Store node.
5. Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
6. Optional: Activate the Schedule Trigger node to run daily, or configure your own schedule.
7. Run on demand by clicking "Test workflow" for manual ingestion.

**Customization Tips**
Want to support more file types or enhance the pipeline?
- **Add new extractors:** Use Extract from File with other formats like DOCX, Markdown, or HTML.
- **Refine logic by file type:** The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json).
- **Pre-process with OCR:** Add an OCR step before extraction to handle scanned PDFs or images.
- **Add filters:** Enhance the Search Folder or Switch node logic to skip specific files or folders.

**License**
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
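As a rough illustration of the embedding stack described above (LangChain text splitter, OpenAI embeddings, PGVector), here is a minimal standalone sketch. The package paths, connection details, and variable names are assumptions for illustration only; the workflow itself configures all of this through n8n nodes rather than code.

```javascript
// Minimal sketch of the splitter -> embeddings -> PGVector pipeline (assumed package layout).
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";

const extractedText = "Text produced by the Extract from File step (placeholder).";

const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 100 });
const docs = await splitter.createDocuments([extractedText]);

const store = await PGVectorStore.initialize(
  new OpenAIEmbeddings({ model: "text-embedding-3-small" }),
  {
    postgresConnectionOptions: { connectionString: process.env.PG_URL }, // hypothetical env var
    tableName: "document_embeddings", // match the table configured in the PGVector Store node
  }
);
await store.addDocuments(docs);
```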
by Ranjan Dailata
**Who this is for**
Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it with Google Gemini's LLM, and forwards the transformed response with a summary to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers and needing summarized company insights.
- Market researchers and analysts monitoring competitors or industry players.

**What problem is this workflow solving?**
Searching and evaluating company profiles on Indeed manually is time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.

This workflow solves this by:
- Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

**What this workflow does**
This automated pipeline:
1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker.
2. Transforms the scraped content into structured JSON using n8n's built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized and formatted response to a specified webhook or app for real-time access, storage, or analysis.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
This workflow is built to be flexible - whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:
- **Changing the data source:** Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- **Refining the LLM prompt:** Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- **Routing the output to different destinations:** Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
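For orientation, here is a minimal sketch of the kind of Web Unlocker call the scraping step makes. The endpoint shape and field names follow Bright Data's Web Unlocker API but should be checked against your account; the zone name and company URL are placeholders.

```javascript
// Hypothetical standalone version of the Web Unlocker request made by the HTTP Request node.
const response = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`, // the Header Auth credential in n8n
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                      // your Web Unlocker zone name
    url: "https://www.indeed.com/cmp/Example",  // placeholder company profile URL
    format: "raw",                              // return the raw page HTML
  }),
});
const html = await response.text(); // handed to the transformation and Gemini steps
```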
by Mario
**Dynamically switch between LLMs for AI Agents using LangChain Code**

**Purpose**
This example workflow demonstrates a way to connect multiple LLMs to a single AI Agent/LangChain node and programmatically select one - or, in this case, loop through them.

**What it does**
This AI workflow takes in customer complaints and generates a response that is validated before being returned. If the answer was not satisfactory, the response is generated again with a more capable model.

**How it works**
- A LangChain Code node allows multiple LLMs to be connected to a single Basic LLM Chain (a code sketch of the selection logic follows this description).
- On every call only one LLM is actually connected to the Basic LLM Chain, determined by the index defined in a previous node.
- The AI output is later validated by a Sentiment Analysis node.
- If the result was not satisfactory, the workflow loops back to the beginning and executes the same query with the next available LLM.
- The loop ends either when the result passes the requirements or when all LLMs have been used.

**Setup**
Clone the workflow and select the corresponding credentials. You'll need an OpenAI account; alternatively, you can swap the LLM nodes with ones from a different provider, such as Anthropic, after the import.

**How to use**
Beware that the order of the LLMs is determined by the order in which they were added to the workflow, not by their position on the canvas.

After cloning this workflow into your environment, open the chat and send this example message:

> I really love waiting two weeks just to get a keyboard that doesn't even work. Great job. Any chance I could actually use the thing I paid for sometime this month?

Most likely you will see that the first validation fails, causing the workflow to loop back to the generation node and try again with the next available LLM. Since AI responses are unpredictable, the results and number of tries will differ for each run.

**Disclaimer**
Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code node.
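The core trick - exposing only one of the connected models to the chain on each run - lives in the LangChain Code node. The sketch below shows the general idea only; the helper name, its arguments, and the referenced node name are assumptions that vary between n8n versions, so treat it as an illustration rather than the node's literal contents.

```javascript
// Inside the LangChain Code node: pick one of the connected LLMs by index.
// Helper name and signature are assumptions - check the LangChain Code node docs for your n8n version.
const modelIndex = $('Set Model Index').first().json.modelIndex; // hypothetical node that sets the index

// All language models wired into this node; only the selected one is handed to the Basic LLM Chain.
const connected = await this.getInputConnectionData('ai_languageModel', 0);
const models = Array.isArray(connected) ? connected : [connected];

return models[Math.min(modelIndex, models.length - 1)];
```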
by Sean Lon
**Target Audience**
This workflow is a great fit for internal talent acquisition teams, recruitment agencies, HR professionals, and hiring managers who want to bulk-automate the initial screening of CVs and resumes - for example, automatically getting each candidate shortlisted or rejected along with a rationale and score. By eliminating manual evaluation and screening, you get a smart AI agent that provides a standardized, efficient, and scalable solution for handling large volumes of applications. With bulk automation you can focus on strategic decision-making rather than tedious screening tasks, ensuring a faster, more accurate, and fairer hiring process.

**Key focus**
This workflow focuses on organized file and folder management, a trackable candidate CV pipeline, a maintainable job description, and an autonomous AI agent:
- **Organized Folder-File Structure** - CVs are automatically categorized based on their status, ensuring a structured workflow and easy retrieval.
- **Candidate Tracker** - A real-time tracking system records the state of each CV, allowing recruiters to monitor shortlisted, rejected, or KIV (Keep in View) candidates.
- **AI Agent for Decision Automation** - The AI autonomously orchestrates screening decisions, replacing manual LLM configuration with dynamic AI-driven evaluations for scalability and accuracy.
- **Maintainable Job Description Management** - A structured job description file ensures continuous updates, keeping hiring criteria flexible and aligned with recruitment needs.
- **Email Notifications** - The system automatically sends receipt confirmations upon processing completion, providing timely updates to recruiters.

**Features - Automated Resume Screening Workflow**
This workflow leverages Groq Llama 4 for intelligent resume analysis, speeding up the screening process by generating a matching score, a result (shortlisted/rejected/KIV), and key insights into each candidate's suitability for the provided job description.

Step-by-step process:
1. **Monitors Google Drive:** Listens for new CV resumes in Google Drive.
2. **Retrieve Resume:** Downloads the CV resumes from Google Drive.
3. **Extract Resume Data:** Extracts text content from the CV resume PDF files.
4. **Extract Job Description Data:** Extracts text content from the job description.
5. **Analyze with Groq:** Generates a matching score based on the job requirements [SCORE: 1-10], provides a decision on job suitability [SHORTLISTED/REJECTED/KIV], and provides actionable insights into the candidate's suitability [REASON].

This ensures a fast, efficient, and accurate screening process, eliminating manual evaluation. A sketch of the expected AI output follows this description.

**Setup Guide**

Step-by-step instructions:
- Ensure all credentials are ready and set up (Groq, Google Drive, Gmail, Google Sheets, Google Docs). See the official n8n documentation on node setup, as well as the setup notes in the workflow.

Folder & file setup:
1. Create a Google Drive folder like this (view directory example).
2. Create a job description like this (view file example).
3. Configure a tracker like this (Candidate Name, AI Score, AI Verdict, AI Reason) (view file example).
4. Configure the email conversation reports as you like.

You are ready to go!
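The exact output schema depends on how the agent's prompt is written; as an illustration only, a result for one candidate written to the tracker sheet could be shaped like this (field names are hypothetical):

```javascript
// Hypothetical shape of one screening result written to the candidate tracker.
const screeningResult = {
  candidateName: "Jane Doe",
  aiScore: 8,                // SCORE: 1-10
  aiVerdict: "SHORTLISTED",  // SHORTLISTED / REJECTED / KIV
  aiReason: "7 years of relevant experience; strong match on the core skills in the job description.",
};
```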
by Ranjan Dailata
**Who this is for?**
The Automate Etsy Data Mining with Bright Data Scrape & Google Gemini workflow is designed for eCommerce analysts, product researchers, and AI developers seeking to extract actionable insights from Etsy listings at scale. It is ideal for:
- **eCommerce Entrepreneurs** - Researching product demand and competition.
- **Market Analysts** - Tracking pricing, reviews, and trends across Etsy categories.
- **Product Managers** - Identifying niche opportunities and design inspirations.
- **Data Scientists & AI Engineers** - Automating product intelligence pipelines.
- **Growth Hackers** - Leveraging Etsy insights to refine product-market fit.

**What problem is this workflow solving?**
Manually browsing Etsy to analyze product listings, pricing, reviews, and seller activity is slow, inconsistent, and unscalable. Scraping Etsy requires unlocking JavaScript-heavy content and structuring noisy data for analysis. This workflow solves that with:
- Automated and scalable scraping of Etsy product listings using Bright Data's infrastructure.
- Fully paginated, structured extraction of Etsy product data via the Google Gemini LLM.
- Faster decision-making for product research and competitive analysis through fully automated, paginated data extraction.

**What this workflow does**
1. Receives input: sets the Etsy URL for data extraction and analysis.
2. Uses Bright Data's Web Unlocker to extract content from the relevant pages.
3. Cleans and preprocesses the scraped content for readability.
4. Sends the content to Google Gemini for structured extraction and enrichment.
5. Persists the enriched results to disk (see the sketch after this description).
6. Sends the response to a target system via a webhook notification.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set Etsy Search Query node with the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
- **Input Sources:** Replace the static URL with dynamic input from Google Sheets, a Webhook, or Airtable to research multiple niches.
- **Prompt Customization:** Adjust the Gemini prompts to extract specific insights, for example: list key features of the product, or summarize review themes.
- **Data Output Options:** Update the webhook notification to save data to Google Sheets, Notion or Airtable, SQL/NoSQL databases, or Slack/Email.
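The "persist to disk" step can be handled by a small Code node. Whether built-in modules such as fs are available depends on your instance's NODE_FUNCTION_ALLOW_BUILTIN setting, and the output path below is purely illustrative, so treat this as a sketch:

```javascript
// Sketch of a Code node that writes the Gemini-extracted product data to disk.
// Requires fs to be allowed via NODE_FUNCTION_ALLOW_BUILTIN on self-hosted n8n.
const fs = require('fs');

const products = $input.all().map(item => item.json); // enriched Etsy records from the Gemini step
const path = `/tmp/etsy_products_${Date.now()}.json`;  // illustrative output location

fs.writeFileSync(path, JSON.stringify(products, null, 2));
return [{ json: { savedTo: path, count: products.length } }];
```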
by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant designed to harness the symbolic strength of quantum computing's promise alongside cutting-edge AI capabilities. This sophisticated agent empowers SOC analysts, red teamers, and security researchers with rapid threat investigation, operational automation, and intelligent command execution - all driven by GPT-4 and integrated tools, accessible through Telegram or any other medium.

**Key Features**
- **Expert-Level Cybersecurity Research & Analysis:** Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- **Command & Control:** Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- **Real-Time Web Intelligence:** Uses the integrated Langsearch API to provide timely internet research with contextual relevance.
- **Calendar & Scheduling Automation:** Manages Google Calendar events (create, update, delete, retrieve) or any similar application dynamically from chat.
- **Multi-Tool Orchestration:** Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- **Telegram-native Chatbot:** Delivers an adaptive, memory-informed, and interactive conversational experience with immediate typing indicators and high responsiveness.
- **Conversation & Session Management:** Maintains context-aware, session-based memory to enable smooth, multi-turn dialogues with individual users. Sends "typing..." indicators during processing to ensure an interactive, user-friendly chat experience (see the sketch at the end of this description). Operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- **Execution Intelligence & Safety:** Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly. Operates within a secure temporary folder environment to contain all command executions safely and avoid persistent or harmful side effects. Enforces strict safety protocols to avoid running malicious or destructive commands, maintaining ethical standards and compliance.

**Use Cases**
- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment their alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

**Setup Requirements**
- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with a proper webhook setup to receive and respond to messages.
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

**Customization & Extensibility**
The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
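The "typing..." indicator mentioned above maps to Telegram's sendChatAction method. A minimal standalone sketch, with the bot token and chat ID as placeholders (n8n's Telegram node handles this for you):

```javascript
// Send a "typing..." chat action before the agent replies (Telegram Bot API sendChatAction).
const token = process.env.TELEGRAM_BOT_TOKEN; // placeholder credential
const chatId = 123456789;                     // placeholder chat ID from the incoming update

await fetch(`https://api.telegram.org/bot${token}/sendChatAction`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ chat_id: chatId, action: "typing" }),
});
```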
by Davi Saranszky Mesquita
**Log errors and avoid sending too many emails**

**Use case**
Most of the time, it's necessary to log all errors that occur. However, in some cases a scheduled task or a service consuming excessive resources might trigger a surge of errors. To address this, we can log all errors but limit alerts to a maximum of one notification every 5 minutes.

**What this workflow does**
This workflow can be configured to receive error events, or you can integrate it before your own error-handling logic. If used as the primary error handler, note that this flow will only add a database log entry and take no further action. You'll need to add your own alerts (e.g., email or push notifications). Below is an example of a notification setup I prefer to use. At the end, there's an error cleanup option. This feature is particularly useful in development environments.

If you already have an error-handling workflow, you can call this one as a sub-workflow. Its final steps include cleanup logic to reset the execution state and terminate the workflow.

**Setup**
Verify all Postgres nodes and credentials when using the 'Error Handling Sample'.

**How to adjust it to your needs**
1. You can set this workflow as a sub-workflow within your existing error-handling setup.
2. Alternatively, you can add the "Error Handling Sample" at the end of this workflow, which sends email and push notifications.

**Configuration Requirements**
You must create a database table for this to work! DDL of this sample:

create table p1gq6ljdsam3x1m."N8Err"
(
    id         serial primary key,
    created_at timestamp,
    updated_at timestamp,
    created_by varchar,
    updated_by varchar,
    nc_order   numeric,
    title      text,
    "URL"      text,
    "Stack"    text,
    json       json,
    "Message"  text,
    "LastNode" text
);

alter table p1gq6ljdsam3x1m."N8Err" owner to postgres;

create index "N8Err_order_idx" on p1gq6ljdsam3x1m."N8Err" (nc_order);

by Davi Saranszky Mesquita
https://www.linkedin.com/in/mesquitadavi/
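One way to implement the "at most one alert every 5 minutes" rule is to check the log table for recent entries before branching to the notification step. The sketch below assumes a Postgres node runs the count query against the N8Err table created above (schema and column names follow the sample DDL), followed by a Code/IF node; it is an illustration, not the workflow's literal nodes.

```javascript
// Sketch: gate notifications so at most one alert goes out per 5 minutes.
// This query would run in a Postgres node against the table from the DDL above.
const recentAlertQuery = `
  SELECT count(*) AS recent
  FROM p1gq6ljdsam3x1m."N8Err"
  WHERE created_at > now() - interval '5 minutes'`;

// In a Code node after the Postgres node: only continue to the email/push branch
// when no other error was logged in the last 5 minutes.
const recent = Number($input.first().json.recent);
return [{ json: { shouldNotify: recent <= 1 } }]; // 1 = the error we just inserted
```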
by Krishna Kumar Eswaran
**Problem This Solves**
For developers and creators, consistently posting quality content on LinkedIn can be time-consuming. This workflow automates the process by:
- Fetching the latest Dev.to articles
- Posting them to LinkedIn twice daily
- Preventing duplicates using Airtable
- Sending success alerts to Telegram

This ensures you're always active on LinkedIn, with zero manual effort.

**Who This Template Is For**
- Developers who want to build their presence on LinkedIn
- Tech creators or solo founders looking to grow an audience
- Community/page managers who want regular, curated content
- Busy professionals aiming for consistent LinkedIn engagement without doing it manually

**Workflow Breakdown**
This automation runs twice a day (9:00 AM and 7:00 PM) and performs the following steps:
1. Fetches Dev.to articles based on a tag
2. Checks Airtable to avoid reposting the same article
3. Posts to LinkedIn if it's new
4. Sends a Telegram message after posting successfully

**Step-by-Step Setup Instructions**

1. Airtable Configuration
Create a new base in Airtable with just one table and one column:
- Table Name: PostedArticles
- Column: ArticleID (Single line text - stores the unique ID of each Dev.to article posted)
This column is used to track posted articles and prevent duplicates.

2. Dev.to API Setup
Use the following endpoint in the HTTP Request node:
https://dev.to/api/articles?tag=YOUR_TAG_HERE&per_page=10
Replace YOUR_TAG_HERE with a tag like android, webdev, ai, etc.

3. Telegram Bot Setup
- Open @BotFather in Telegram and create a new bot
- Save the bot token
- Get your chat ID using @userinfobot or via the Telegram API
- Add a Telegram node in n8n using this token and chat ID
This will notify you when a post is successfully published.

4. LinkedIn Setup
- Create a LinkedIn Developer App
- Use OAuth2 to connect it in n8n
- Choose to post on either a user profile or a company page

5. n8n Workflow Structure
Here's the basic structure of the workflow:
- Cron Node - Triggers at 9:00 AM and 7:00 PM daily
- HTTP Request - Fetches latest articles from Dev.to
- Airtable Search - Checks if the ArticleID already exists
- IF Node - Filters new vs. already-posted articles
- LinkedIn Post - Publishes the new article
- Airtable Create - Saves the new ArticleID
- Telegram Message - Sends success confirmation

**Customization Tips**
- Change the Dev.to tag in the API URL
- Modify the LinkedIn post format (add hashtags, emojis, personal notes)
- Adjust posting times in the Cron node
- Use additional filters (e.g., only post articles with a cover image or a certain word count)
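As an illustration of the dedupe step in the workflow structure above, the check boils down to comparing article IDs from the Dev.to response against the ArticleID column. Field names like id, title, and url come from the Dev.to articles API; the node names and the shape of the Airtable lookup here are simplified assumptions:

```javascript
// Simplified sketch of the dedupe step between the HTTP Request and LinkedIn Post nodes.
const articles = $('HTTP Request').first().json;          // array returned by dev.to/api/articles
const postedIds = $('Airtable Search').all()
  .map(item => String(item.json.ArticleID));               // IDs already stored in Airtable

// Keep only articles that have not been posted yet.
const fresh = articles.filter(a => !postedIds.includes(String(a.id)));

return fresh.map(a => ({
  json: {
    ArticleID: String(a.id),
    postText: `${a.title}\n\n${a.url}`,                    // simple LinkedIn post body
  },
}));
```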
by JPres
**Who Is This For?**
Content creators, marketing teams, and channel managers who want a simple, hands-off solution to upload videos and automatically generate optimized metadata from video transcripts.

**What Problem Does This Solve?**
Manual video uploads with proper metadata creation are time-consuming and repetitive. This workflow fully automates:
- Monitoring a specific Google Drive folder for new video uploads
- Seamless YouTube upload processing
- Transcript extraction for context understanding
- AI-powered generation of titles, descriptions, and tags
- Metadata application to uploaded videos without manual intervention

**Node-by-Node Breakdown**

| Step | Node Purpose |
|------|--------------|
| 1 | New Video? (Trigger) - Monitors the specified Google Drive folder |
| 2 | Download New Video - Retrieves the video file from Google Drive |
| 3 | Upload to YouTube - Uploads the video to YouTube with initial settings |
| 4 | Get Transcript - Extracts the transcript from the uploaded video |
| 5 | Adjust Transcript Format - Formats the raw transcript for processing |
| 6 | Create Description - Generates an SEO-optimized description |
| 7 | YT Tags (Message Model) - Creates relevant tags based on content |
| 8 | YT Title (Message Model) - Generates a compelling title |
| 9 | Define File Path Upload Format (Optional) - Structures data paths |
| 10 | Update Video's Metadata - Applies the generated title, description, and tags |

**Pre-conditions / Requirements**
- n8n with Google Drive and YouTube API credentials configured (stored as n8n credentials/variables; no hard-coded IDs)
- Dedicated Google Drive folder for video uploads
- YouTube channel with proper upload permissions
- AI service access for transcript processing and metadata generation
- Sufficient storage for temporary video handling

**Setup Instructions**
1. Import this workflow into your n8n instance.
2. Configure Google Drive credentials; reference the folder ID via an n8n variable (do not hard-code it).
3. Set up YouTube API credentials with upload and edit permissions.
4. Specify the target Google Drive folder ID in the New Video? trigger node (via variable).
5. Configure AI service credentials for transcript and metadata generation.
6. Adjust the message templates for title, description, and tag creation.
7. Test with a small video file before production use.

**How to Customize**
- Modify AI prompts to match your channel's tone and style.
- Add conditional logic based on video categories or naming conventions.
- Implement notification systems to alert when uploads complete.
- Create custom metadata templates for different content types.
- Include timestamps or chapter markers based on transcript analysis.
- Add social media sharing nodes to announce new uploads.

**Important Notes**
- Video quality is preserved through the upload process.
- Consider YouTube API quotas when handling multiple uploads.
- Transcript quality affects metadata generation results.
- Videos are initially uploaded without visibility adjustments.
- Processing time depends on video length and transcript complexity.

**Security and Privacy**
- Store API credentials and folder IDs as n8n credentials/variables; remove any hard-coded tokens or IDs.
- Video files are processed temporarily and not stored permanently.
- Limit Google Drive folder access to authorized users only.
- Manage YouTube upload permissions carefully (use OAuth/service accounts).
- Ensure compliance with organizational data-handling policies.
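Step 10 corresponds to a YouTube Data API videos.update call with part=snippet. n8n's YouTube node handles this for you, but a raw sketch of the request can help when customizing. All values below are placeholders, and note that the API generally expects a categoryId whenever the snippet is replaced:

```javascript
// Rough shape of the videos.update request behind "Update Video's Metadata".
const uploadedVideoId = "VIDEO_ID";            // returned by the Upload to YouTube step
const accessToken = process.env.YT_OAUTH_TOKEN; // OAuth token normally managed by n8n credentials

const payload = {
  id: uploadedVideoId,
  snippet: {
    title: "AI-generated title",               // from the YT Title model
    description: "AI-generated description",   // from the Create Description step
    tags: ["tag1", "tag2"],                    // from the YT Tags model
    categoryId: "28",                          // e.g. Science & Technology
  },
};

await fetch("https://www.googleapis.com/youtube/v3/videos?part=snippet", {
  method: "PUT",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
});
```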
by Krishna Kumar Eswaran
**Problem This Solves**
Manually sharing Medium articles to LinkedIn daily can be repetitive and time-consuming. This automation:
- Fetches the latest Medium articles based on a tag (e.g., android)
- Posts them on LinkedIn twice daily
- Uses Airtable to prevent duplicates
- Sends a confirmation to Telegram once posted

Stay consistently active on LinkedIn without lifting a finger.

**Who This Template Is For**
- Developers who write or follow Medium content
- Tech creators or founders looking to grow an audience
- Community or page managers needing regular curated posts
- Busy professionals who want hands-free LinkedIn engagement

**Workflow Breakdown**
This automation runs at 9:00 AM and 7:00 PM daily and performs these steps:
1. Fetch articles from MediumAPI.com by tag
2. Check Airtable to prevent reposting the same article
3. Post on LinkedIn if it's new
4. Store the article ID in Airtable
5. Send a Telegram message after successful posting

**Step-by-Step Setup Instructions**

1. Airtable Configuration
Create a base with:
- Table Name: PostedArticles
- Column: ArticleID (Single line text - to track posted articles)

2. MediumAPI Setup
- Go to https://mediumapi.com
- Sign up and generate your API key from the dashboard
- Use this API endpoint in an HTTP Request node (a request sketch follows this description):
  GET https://mediumapi.com/api/tag/YOUR_TAG/latest
  Headers: Authorization: Bearer YOUR_API_KEY
- Replace YOUR_TAG with a topic like android, ai, webdev, etc.

3. Telegram Bot Setup
- Go to @BotFather and create a new bot
- Save the bot token
- Use @userinfobot to get your Telegram chat ID
- Add a Telegram node in n8n with the token + chat ID

4. LinkedIn Setup
- Create a LinkedIn Developer App
- Connect it via OAuth2 in n8n
- Choose to post on your profile or company page

5. n8n Workflow Structure

| Node Type | Description |
|-----------|-------------|
| Cron | Triggers the flow twice a day |
| HTTP Request | Fetches articles from MediumAPI.com |
| Airtable Search | Checks if the article ID already exists |
| IF Node | Skips duplicates |
| LinkedIn Post | Publishes to your LinkedIn profile/page |
| Airtable Create | Stores the posted article ID |
| Telegram Node | Sends a success notification |

**Customization Tips**
- Change the tag in the API URL to match your niche
- Add hashtags or personal comments to the LinkedIn message
- Schedule different posting times in the Cron node
- Filter Medium posts based on length or title keywords (optional)
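A minimal standalone version of the MediumAPI request configured in the HTTP Request node. The endpoint and header come from the setup above; the response handling is purely illustrative, since the exact response schema depends on MediumAPI.com:

```javascript
// Sketch of the tag-feed request made by the HTTP Request node.
const tag = "android"; // match the tag you chose in setup
const res = await fetch(`https://mediumapi.com/api/tag/${tag}/latest`, {
  headers: { Authorization: `Bearer ${process.env.MEDIUM_API_KEY}` }, // your MediumAPI key
});
const data = await res.json();
// Map the response to the fields you post to LinkedIn and store in Airtable;
// field names depend on the actual MediumAPI response.
console.log(Array.isArray(data) ? `${data.length} articles fetched` : data);
```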
by Don Jayamaha Jr
Detect key candlestick reversal patterns and volume divergence on Tesla (TSLA) using GPT-4.1 and real-time OHLCV data. This AI agent evaluates 1-hour and 1-day candles and is an essential part of the Tesla Financial Market Data Analyst Tool. It identifies signals like Doji, Engulfing, and Hammer patterns, plus volume anomalies, to support trade entry and exit logic.

Not a standalone template - it must be triggered by the Tesla Financial Market Data Analyst Tool.

Requires:
- Alpha Vantage Premium API key
- OpenAI GPT-4.1 access

**What This Agent Does**
Calls Alpha Vantage to fetch:
- 1-hour OHLCV data
- 1-day OHLCV data

GPT-4.1 evaluates:
- Candlestick patterns like Doji, Engulfing, Shooting Star
- Volume divergence (price/volume inconsistency)

Returns a structured JSON output like:

{
  "summary": "Bearish signs detected on 1-day chart. A shooting star formed on high volume while RSI is elevated. Volume divergence seen on 1h chart as price rises but volume weakens.",
  "candlestickPatterns": { "1h": "None", "1d": "Shooting Star" },
  "volumeDivergence": { "1h": "Bearish", "1d": "None" },
  "ohlcv": {
    "1h": { "close": 174.1, "volume": 1430000, "high": 175.0, "low": 173.8 },
    "1d": { "close": 188.3, "volume": 21234000, "high": 189.9, "low": 183.7 }
  }
}

**Setup Instructions**
1. Import the workflow and name it Tesla_1hour_and_1day_Klines_Tool.
2. Install dependencies: the Tesla Financial Market Data Analyst Tool (this is the trigger parent).
3. Add the required credentials: Alpha Vantage Premium (via HTTP Query Auth) and OpenAI GPT-4.1 (via OpenAI credentials).
4. Verify web access. This tool fetches data live from Alpha Vantage:
   - /query?function=TIME_SERIES_INTRADAY&interval=60min
   - /query?function=TIME_SERIES_DAILY
5. Run via the Execute Workflow trigger. This tool activates only when called by the Financial Analyst Agent. Inputs: message (optional) and sessionId (used for memory continuity).

**Agent Architecture**

| Component | Description |
|-----------|-------------|
| Candlestick Data Hour | Fetches 60min TSLA candles via Alpha Vantage |
| Candlestick Data Day | Fetches daily TSLA candles via Alpha Vantage |
| OpenAI Chat Model | GPT-4.1 reasoning engine for pattern detection |
| Simple Memory | Maintains short-term logic context |
| Tesla Klines Agent | LangChain AI agent analyzing both candles and volume |

**Sticky Notes Overview**
- Workflow purpose
- Short-term memory notes
- 1h/1d data fetch logic
- Candlestick pattern types detected
- Volume divergence definitions
- GPT-4.1 prompt configuration
- Licensing and support

**Licensing & Support**
© 2025 Treasurium Capital Limited Company. Logic, pattern reasoning, and prompt structure are proprietary IP.
Don Jayamaha - LinkedIn | n8n Creator Profile

Automate technical edge: detect TSLA candle reversals and volume anomalies with precision using GPT-4.1 and Alpha Vantage. Required by the Tesla Financial Market Data Analyst Tool.
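The two Alpha Vantage calls in step 4 expand to full query URLs like the following. The symbol and apikey parameters are standard Alpha Vantage query parameters; the key itself is a placeholder and would normally be injected by the HTTP Query Auth credential in n8n rather than written in code:

```javascript
// Full form of the two Alpha Vantage requests used by the candlestick data nodes.
const base = "https://www.alphavantage.co/query";
const key = process.env.ALPHA_VANTAGE_KEY; // premium API key, supplied via HTTP Query Auth in n8n

const hourlyUrl = `${base}?function=TIME_SERIES_INTRADAY&symbol=TSLA&interval=60min&apikey=${key}`;
const dailyUrl  = `${base}?function=TIME_SERIES_DAILY&symbol=TSLA&apikey=${key}`;

const hourly = await (await fetch(hourlyUrl)).json(); // response keyed by "Time Series (60min)"
const daily  = await (await fetch(dailyUrl)).json();  // response keyed by "Time Series (Daily)"
```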
by Naveen Choudhary
**Who is this template for?**
Growth teams, SDRs, recruiters, or anyone who routinely hunts for hard-to-find business emails and would rather spend time reaching out than guessing formats.

**What problem does this workflow solve?**
Manually piecing together email patterns, cross-checking them in a verifier, and updating a tracking sheet is slow and error-prone. This template automates the entire loop - research, guess, verify, and log - so you hit Start and watch rows fill up with ready-to-send addresses.

**What this workflow does**
1. Pull fresh leads - Grabs only the rows in your Google Sheet where Status = FALSE.
2. Find the company pattern - Queries Serper.dev for snippets and feeds them to Gemini Flash (via OpenRouter) to spot the dominant email format.
3. Build the address - Constructs a likely email for every first/last name (see the sketch at the end of this description).
4. Verify in real time - Pings Prospeo by default (API) or lets you bulk-clean in Sparkle.io.
5. Write it back - Updates the sheet with pattern, email, confidence, and verification status, and flips Status to TRUE.
6. Loop until done - Runs batch-by-batch so you never hit API limits.

**Work free-tier magic (up to ~2,500 contacts/month)**

| Service | Free allowance | How this template uses it |
|---------|----------------|---------------------------|
| Serper.dev | 2,500 searches/mo | Scrapes three public email snippets per domain to learn the pattern |
| Sparkle.io | 10,000 bulk verifications/day | Manual upload-download option - perfect to clean your first 2.5k emails at zero cost |
| Prospeo | 75 API calls/mo | Built-in if you prefer fully automated verification |

Quick Sparkle workflow:
1. Let the template generate emails.
2. Export the "Email" column to CSV and upload it to Sparkle.io.
3. Download the results and paste the "verification_status" back into the sheet (or add a small n8n import sub-flow).

**Setup (5 minutes)**
1. Copy the Google Sheet linked in the sticky note and paste its ID into the Get Rows and Update Rows nodes.
2. Add credentials for Google Sheets, Serper (X-API-KEY), OpenRouter, and optionally Prospeo.
3. Hit Execute Workflow - that's it.

**How to customise**
- **Prefer Sparkle for volume:** Skip the Prospeo node, export emails in one click, bulk-verify in Sparkle, and re-import the results.
- **Swap the search source:** Replace the Get Email Pattern HTTP node with Bing, Brave, etc.
- **Extend enrichment:** Add phone look-ups or LinkedIn scrapers before the Update Rows node.
- **Auto-run:** Replace the Manual Trigger with a Cron node so the sheet cleans itself every morning.

**Additional resources**

| Tool | Purpose | Link |
|------|---------|------|
| Prospeo - API-ready email verification. Special offer: 20% free credits for the first 3 months on any plan using this link! | Real-time, single-call mailbox validation | prospeo.io |
| Sparkle.io - high-volume bulk verifier (manual upload) | Free daily quota of 10,000 verifications | app.sparkle.io/sign-up |
| OpenRouter - API gateway for Gemini Flash & other LLMs | One key unlocks multiple frontier models | openrouter.ai |
| Serper.dev - Google Search API | 2,500 searches/month on the free tier | serper.dev |

Add the relevant keys or signup details from these links, drop them into the matching n8n credentials, and you're all set to enrich your first 2,500 contacts at zero cost.
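Step 3 above ("Build the address") is essentially string templating. A minimal sketch of how a detected pattern might be applied to a contact; the pattern labels and the helper function are illustrative assumptions, not the workflow's exact code:

```javascript
// Illustrative helper: turn a detected pattern plus a name into a candidate email address.
function buildEmail(pattern, first, last, domain) {
  const f = first.toLowerCase();
  const l = last.toLowerCase();
  const formats = {
    "first.last": `${f}.${l}`,
    "firstlast": `${f}${l}`,
    "f.last": `${f[0]}.${l}`,
    "first": f,
  };
  return `${formats[pattern] ?? `${f}.${l}`}@${domain}`;
}

// e.g. buildEmail("first.last", "Ada", "Lovelace", "example.com") -> "ada.lovelace@example.com"
```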
Happy building!