by Aitor | 1Node
## Template Description
This template creates a powerful Retrieval-Augmented Generation (RAG) AI agent workflow in n8n. It monitors a specified Google Drive folder for new PDF files, extracts their content, generates vector embeddings using Cohere, and stores the embeddings in a Milvus vector database. It then enables a RAG agent that retrieves relevant information from Milvus based on user queries and generates responses with OpenAI, enhanced by the retrieved context.

## Functionality
The workflow automates the process of ingesting documents into a vector database for use with a RAG system.

- **Watch New Files**: Triggers when a new file (specifically targeting PDFs) is added to a designated Google Drive folder.
- **Download New**: Downloads the newly added file from Google Drive.
- **Extract from File**: Extracts text content from the downloaded PDF file.
- **Default Data Loader / Set Chunks**: Processes the extracted text, splitting it into manageable chunks for embedding.
- **Embeddings Cohere**: Generates vector embeddings for each text chunk using the Cohere API.
- **Insert into Milvus**: Inserts the generated vector embeddings and associated metadata into a Milvus vector database.
- **When chat message received**: Adapt the trigger tool to fit your needs.
- **RAG Agent**: Orchestrates the RAG process.
- **Retrieve from Milvus**: Queries the Milvus database with the user's chat query to find the most relevant chunks.
- **Memory**: Manages conversation history for the RAG agent to optimize cost and response speed.
- **OpenAI / Cohere embeddings**: Uses GPT-4o for text generation.

## Requirements
To use this template, you will need:
- An n8n instance (cloud or self-hosted).
- Access to a Google Drive account to monitor a folder.
- A Milvus instance or access to a Milvus cloud service like Zilliz.
- A Cohere API key for generating embeddings.
- An OpenAI API key for the RAG agent's text generation.

## Usage
1. Set up the required credentials in n8n for Google Drive, Milvus, Cohere, and OpenAI.
2. Configure the "Watch New Files" node to point to the Google Drive folder you want to monitor for PDFs.
3. Ensure your Milvus instance is running and the target cluster is set up correctly.
4. Activate the workflow.
5. Add PDF files to the monitored Google Drive folder. The workflow will automatically process them and insert their embeddings into Milvus.
6. Interact with the RAG agent. The agent will use the data in Milvus to provide context-aware answers.

## Benefits
- Automates document ingestion for RAG applications.
- Leverages Milvus for high-performance vector storage and search.
- Uses Cohere for generating high-quality text embeddings.
- Enables building a context-aware AI agent using your own documents.

## Suggested improvements
- **Support for More File Types**: Extend the "Watch New Files" node and subsequent extraction steps to handle more document types (e.g., .docx, .txt, .csv, web pages) in addition to PDFs.
- **Error Handling and Notifications**: Implement robust error handling for each step of the workflow (e.g., failed downloads, extraction errors, Milvus insertion failures) and add notification mechanisms (e.g., email, Slack) to alert the user.

## Get in touch with us
Contact us at https://1node.ai
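The "Default Data Loader / Set Chunks" step described above splits extracted text into overlapping chunks before embedding. A minimal sketch of fixed-size chunking with overlap (the chunk size and overlap here are illustrative defaults, not the node's actual settings):

```javascript
// Split text into overlapping chunks for embedding.
// chunkSize and overlap are illustrative, not the node's actual configuration.
function splitIntoChunks(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step forward, keeping `overlap` chars of context
  }
  return chunks;
}

const sample = "a".repeat(1200);
const chunks = splitIntoChunks(sample);
// chunks cover 0-500, 450-950, 900-1200 → 3 chunks
```

The overlap keeps sentence fragments that straddle a chunk boundary retrievable from both neighboring chunks.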
by DevCode Journey
## Who is this for?
This n8n workflow is designed for investors, financial analysts, automated trading system developers, and finance enthusiasts who require daily, comprehensive, data-driven insights into specific stock symbols. It's perfect for users who need to automate the complex process of combining technical indicators, news sentiment, professional analyst ratings, and social media buzz into a single, actionable recommendation. This system provides a 24/7 automated "analyst" for portfolio monitoring.

## What this Workflow Does
This n8n workflow executes a daily, multi-faceted analysis of a target stock. It starts by gathering all relevant data (price history, news, ratings, social posts) and processes it through specialized Code nodes to calculate technical indicators (SMA, RSI), determine price predictions (linear regression), and perform sentiment analysis on news and social media. Finally, it uses a weighted model to synthesize all data into a single, comprehensive Buy/Sell/Hold recommendation and delivers a detailed report via Telegram.

## Key Features
- **Daily Scheduling**: Automatically triggers analysis every day at a specified time (e.g., 9:00 AM).
- **Multi-Factor Analysis**: Combines several domains for a holistic view: Technical, Prediction, News Sentiment, Analyst Ratings, and Social Sentiment.
- **Technical Indicator Calculation**: Calculates SMA (20, 50, 200) and RSI (14-day), and identifies support/resistance levels.
- **Price Prediction**: Uses simple linear regression to forecast a 7-day price trend and generate an initial recommendation.
- **Sentiment Analysis**: Custom Code nodes perform keyword-based sentiment analysis on news articles and social media posts.
- **Composite Recommendation**: A weighted model combines all analysis scores (35% Technical, 25% News, 25% Analyst, 15% Social) to generate a final recommendation, confidence score, and summary.
- **Automated Alerting**: Delivers a fully formatted, easily readable Markdown report via Telegram.
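As a hedged sketch of the indicator math the "Analyze Stock Trends" Code node is described as computing (my own minimal implementation, not the template's actual code; the RSI uses a simple-average variant):

```javascript
// Simple Moving Average over the last `period` closes.
function sma(closes, period) {
  const window = closes.slice(-period);
  return window.reduce((a, b) => a + b, 0) / window.length;
}

// RSI over `period` days (simple-average variant, not Wilder smoothing).
function rsi(closes, period = 14) {
  let gains = 0, losses = 0;
  for (let i = closes.length - period; i < closes.length; i++) {
    const change = closes[i] - closes[i - 1];
    if (change > 0) gains += change; else losses -= change;
  }
  if (losses === 0) return 100; // no down days in the window
  const rs = (gains / period) / (losses / period);
  return 100 - 100 / (1 + rs);
}

// sma([1, 2, 3, 4], 2) → 3.5
// rsi([10, 11, 10], 2) → 50 (one gain, one equal-sized loss)
```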
## Requirements
- **API Configuration Node**: A preliminary node (implied by the expression references) containing:
  - target stockSymbols (e.g., TSLA, AAPL);
  - telegramChatId for receiving the report;
  - API keys for data sources (e.g., a financial data API for price/news/ratings, a social media API).
- **Telegram Credentials**: For the Telegram node to send the final message.
- **Financial Data Source Workflow**: Requires preceding nodes (not fully visible) to fetch:
  - historical price data (required for SMA/RSI/regression);
  - recent news headlines and summaries;
  - recent analyst ratings;
  - social media data (e.g., from Twitter/StockTwits).
- **n8n Instance**: Self-hosted or cloud-based n8n installation.

## How to Use

### 1. Configure Scheduling
Open the "Daily Stock Check" node and set the interval rule to the hour you want the report to run (e.g., 9:00 AM).

### 2. Configure Stock Symbol and Telegram
In the (implied) "API Configuration" node, set the stockSymbols you wish to track and the target telegramChatId where the report will be delivered. Ensure your Telegram credentials are set up in n8n.

### 3. Verify Data Fetching Nodes
Ensure the nodes feeding data into "Analyze Stock Trends," "Analyze News Sentiment," "Process Analyst Ratings," and "Analyze Social Sentiment" are correctly configured to fetch the required historical price, news, ratings, and social data.

### 4. Adjust Analysis Weights (Advanced)
To change the importance of different factors, edit the WEIGHTS object inside the "Generate Comprehensive Recommendation" Code node. Default weights: Technical (0.35), News (0.25), Analyst (0.25), Social (0.15).

### 5. Test the Workflow
Manually execute the workflow to ensure all Code nodes process the incoming data correctly and the "Send Telegram Alert" node successfully delivers the final, formatted message.

## Workflow Components
The workflow is structured into three main phases: Data Processing, Recommendation Synthesis, and Reporting.
### 1. Data Processing and Indicator Calculation

| Node Name | Type | Key Functionality |
| :--- | :--- | :--- |
| Daily Stock Check | Schedule Trigger | Initiates the entire workflow daily at the set time. |
| Analyze Stock Trends | Code | Calculates technical indicators: SMA (20, 50, 200), RSI (14-day), volume trend, and support/resistance levels. |
| Predict Future Trends | Code | Performs simple linear regression on historical prices to determine the slope and predict the price 7 days ahead. |
| Analyze News Sentiment | Code | Performs keyword-based sentiment analysis on news headlines and summaries to categorize overall sentiment (positive/negative/neutral) and assign a score. |
| Process Analyst Ratings | Code | Aggregates analyst recommendations (Buy/Hold/Sell) to calculate a consensus rating and average price target. |
| Analyze Social Sentiment | Code | Performs keyword-based sentiment analysis on social media data to determine community mood and trending hashtags. |

### 2. Recommendation Synthesis

| Node Name | Type | Description |
| :--- | :--- | :--- |
| Combine All Analysis | Merge | Consolidates the outputs from the four analysis branches (Technical, News, Analyst, Social) into a single data item. |
| Generate Comprehensive Recommendation | Code | The core logic. Calculates a weighted composite score (from -100 to 100) based on all four inputs, generating the final STRONG BUY/BUY/HOLD/SELL/STRONG SELL recommendation and a numerical confidence score. |

### 3. Reporting and Alerting

| Node Name | Type | Description |
| :--- | :--- | :--- |
| Format Telegram Message | Set | Constructs the final detailed report message using Markdown formatting, pulling data from all preceding analysis nodes into a clear, structured report. |
| Send Telegram Alert | Telegram | Sends the fully formatted analysis report to the pre-configured Telegram chat ID. |
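The weighted model in "Generate Comprehensive Recommendation" can be sketched as follows. The WEIGHTS match the documented defaults and the -100..100 scale comes from the description; the recommendation thresholds are my assumption, not the node's actual cutoffs:

```javascript
// Default weights from the template; each input score assumed on a -100..100 scale.
const WEIGHTS = { technical: 0.35, news: 0.25, analyst: 0.25, social: 0.15 };

function compositeScore(scores) {
  return Object.keys(WEIGHTS)
    .reduce((sum, k) => sum + WEIGHTS[k] * (scores[k] ?? 0), 0);
}

// Illustrative thresholds; the node's actual cutoffs may differ.
function toRecommendation(score) {
  if (score >= 60) return "STRONG BUY";
  if (score >= 20) return "BUY";
  if (score > -20) return "HOLD";
  if (score > -60) return "SELL";
  return "STRONG SELL";
}

const score = compositeScore({ technical: 80, news: 40, analyst: 20, social: -10 });
// 0.35*80 + 0.25*40 + 0.25*20 + 0.15*(-10) ≈ 41.5 → "BUY"
```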
## 🙋 For Help & Community
- 👾 Discord: n8n channel
- 🌐 Website: devcodejourney.com
- 🔗 LinkedIn: Connect with Shakil
- 📱 WhatsApp Channel: Join Now
- 💬 Direct Chat: Message Now
by Antonio Trento
## 🤖 Auto-Publish SEO Blog Posts for Jekyll with AI + GitHub + Social Sharing
This workflow automates the entire process of publishing SEO-optimized blog posts (e.g., recipes) to a Jekyll site hosted on GitHub. It uses LangChain + OpenAI to write long-form Markdown articles and commits them directly to your repository. Optional steps include posting to X (Twitter) and LinkedIn.

### 🔧 Features
- 📅 **Scheduled Execution**: Runs daily or manually.
- 📥 **CSV Input**: Reads from a local CSV (`/data/recipes.csv`) with fields like title, description, keywords, and publish date.
- ✍️ **AI Copywriting**: Uses a GPT-4 model to generate a professional, structured blog post optimized for SEO in Markdown format.
- 🧪 **Custom Prompting**: Includes a detailed, structured prompt tailored for Italian food blogging and SEO rules.
- 🗂 **Markdown Generation**: Automatically builds the Jekyll front matter, generates a clean SEO-friendly slug, and saves to `_posts/YYYY-MM-DD-title.md`.
- ✅ **Commits to GitHub**: Auto-commits new posts using the GitHub node.
- 🧹 **Post-Processing**: Removes processed lines from the source CSV.
- 📣 **(Optional) Social media sharing**: Can post the title to X (Twitter) and LinkedIn.

### 📁 CSV Format Example
titolo;prompt_descrizione;keyword_principale;keyword_secondarie;data_pubblicazione
Pasta alla Norma;Classic Sicilian eggplant pasta...;pasta alla norma;melanzane, ricotta salata;2025-07-04T08:00:00
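The slug and front-matter generation described under Markdown Generation might look like this. This is a sketch with assumed front-matter fields; the template's actual Code node may differ:

```javascript
// Build an SEO-friendly slug: lowercase, accents stripped, non-alphanumerics → hyphens.
function slugify(title) {
  return title
    .toLowerCase()
    .normalize("NFD").replace(/[\u0300-\u036f]/g, "") // strip accents (e.g., è → e)
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Assemble Jekyll front matter and the target path under _posts/.
// The front-matter keys here are illustrative, not the template's exact set.
function buildPost({ title, description, date }) {
  const slug = slugify(title);
  const day = date.slice(0, 10); // YYYY-MM-DD from an ISO timestamp
  const frontMatter = [
    "---",
    `title: "${title}"`,
    `description: "${description}"`,
    `date: ${date}`,
    "---",
  ].join("\n");
  return { path: `_posts/${day}-${slug}.md`, frontMatter };
}

const post = buildPost({
  title: "Pasta alla Norma",
  description: "Classic Sicilian eggplant pasta",
  date: "2025-07-04T08:00:00",
});
// post.path → "_posts/2025-07-04-pasta-alla-norma.md"
```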
by Eric
This is a specific use case. The ElevenLabs guide for Cal.com bookings is comprehensive, but I was having trouble with the booking API request, so I built a simple workflow to validate the request and handle the booking creation.

## Who's this for?
You have an ElevenLabs voice agent (or other external service) booking meetings in your Cal.com account, and you want more control over the book_meeting tool called by the voice agent.

## How's it work?
1. A request is received by the webhook trigger node.
   - The request is sent from the ElevenLabs voice agent, or another source.
   - The request body contains contact info for the user with whom a meeting will be booked in Cal.com.
2. The workflow validates the input data for the fields required by Cal.com.
   - If validation fails, a 400 Bad Request response is returned.
   - If valid, the meeting is booked via the Cal.com API.

## How do I use this?
1. Create a custom tool in the ElevenLabs agent setup, and connect it to the webhook trigger in this workflow.
2. Add authorization for security.
3. Instruct your voice agent to call this tool after it has collected the required information from the user.

### Expected input structure
Note: Modify this according to your needs, but be sure to reflect your changes in all following nodes. Requirements here depend on the required fields in your Cal.com event type. If you have multiple event types in Cal.com with varying required fields, you'll need to handle that in this workflow and provide appropriate instructions in your *voice agent prompt*.

"body": {
  "attendee_name": "Some Guy",
  "start": "2025-07-07T13:30:00Z",
  "attendee_phone": "+12125551234",
  "attendee_timezone": "America/New_York",
  "eventTypeId": 123456,
  "attendee_email": "someguy@example.com",
  "attendee_company": "Example Inc",
  "notes": "Discovery call to find synergies."
}

## Modifications
Note: ElevenLabs doesn't handle webhook response headers or the body, and only recognizes the response code. In other words, if the workflow responds with 400 Bad Request, that's the only info the voice agent gets back; it doesn't receive any details, e.g. "User email still needed".

You can modify the structure of the expected webhook request body, and then you should reflect that structure change in all following nodes in the workflow. I.e., if you change attendee_name to attendeeFirstName and attendeeLastName, you need to make this change in the following nodes that use these properties.

You can also require, or make optional, other user data for the Cal.com event type, which would reduce or increase the data the voice agent must collect from the user.

You can modify the authorization of this webhook to meet your security needs. ElevenLabs has some limitations you should be mindful of, but it also offers a secret feature which proves useful.

An improvement to this workflow could include a GET request to a CRM or other database to get info on the user interacting with the voice agent. This could reduce some of the data collection needed from the voice agent, for example if you already have the user's email address. I believe you can also get the user's phone number if the voice agent is set up on a dial-in interface, so the agent wouldn't need to ask for it. This all depends on your use case. A savvy step might be prompting the voice agent to get an email, and using the email in this workflow to pull enrichment data from Apollo.io or similar ;-)
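The validation step can be sketched as a Code-node-style check over the webhook body. The field names follow the example input structure above; which fields are actually required depends on your Cal.com event type, and the 400 response itself would be sent by a separate respond node:

```javascript
// Required fields for the example event type; adjust to match your Cal.com setup.
const REQUIRED = ["attendee_name", "start", "attendee_timezone", "eventTypeId", "attendee_email"];

function validateBooking(body) {
  const missing = REQUIRED.filter(
    (f) => body[f] === undefined || body[f] === null || body[f] === ""
  );
  return { valid: missing.length === 0, missing };
}

const result = validateBooking({
  attendee_name: "Some Guy",
  start: "2025-07-07T13:30:00Z",
  attendee_timezone: "America/New_York",
  eventTypeId: 123456,
  attendee_email: "someguy@example.com",
});
// result.valid → true; an empty body would return all five fields in `missing`
```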
by Jonathan
This workflow checks a Google Calendar at 8am on the first of each month to get anything that has been marked as a Holiday or Illness. It then merges the count for each person and sends an email with the list. To use this workflow you will need to set the credentials to use for the Google Calendar node and Send Email node. You will also need to select the calendar ID and fill out the information in the send email node. This workflow searches for Events that contain "Holiday" or "Illness" in the summary. If you want to change this you can modify it in the Switch node.
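The merge step tallies Holiday and Illness events per person before the email is sent. A minimal sketch of that counting; the "Name - Holiday" summary format assumed here is illustrative, since the workflow only requires that the summary contain "Holiday" or "Illness":

```javascript
// Count Holiday/Illness events per person from calendar event summaries.
// The "Name - Holiday" summary format is an assumption for this sketch.
function tallyAbsences(events) {
  const counts = {};
  for (const ev of events) {
    const match = ev.summary.match(/^(.+?) - (Holiday|Illness)$/);
    if (!match) continue; // skip events that aren't absences
    const [, person, type] = match;
    counts[person] = counts[person] || { Holiday: 0, Illness: 0 };
    counts[person][type] += 1;
  }
  return counts;
}

const counts = tallyAbsences([
  { summary: "Jane - Holiday" },
  { summary: "Jane - Illness" },
  { summary: "Bob - Holiday" },
]);
// counts.Jane → { Holiday: 1, Illness: 1 }
```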
by Baptiste Fort
Still reminding people about their tasks manually every morning? Let's be honest: who wants to start the day chasing teammates about what they need to do? What if Slack could do it for you, automatically, at 9 a.m. every day, without missing anything, and without you lifting a finger?

In this tutorial, you'll build a simple automation with n8n that checks Airtable for active tasks and sends reminders in Slack, daily.

Here's the flow you'll build:
Schedule Trigger → Search Records (Airtable) → Send Message (Slack)

## STEP 1: Set up your Airtable base
1. Create a new base called Tasks.
2. Add a table (for example: Projects, To-Do, or anything relevant).
3. Add the following fields:

| Field | Type | Example |
| -------- | ----------------- | ------------------------------------------- |
| Title | Text | Finalize quote for Client A |
| Assignee | Text | Baptiste Fort |
| Email | Email | claire@email.com |
| Status | Single select | In Progress / Done |
| Due Date | Date (dd/mm/yyyy) | 05/07/2025 |

4. Add a few sample tasks with the status In Progress so you can test your workflow later.

## STEP 2: Create the trigger in n8n
In n8n, add a Schedule Trigger node and set it to run every day at 9:00 a.m.:
- Trigger interval: Days
- Days Between Triggers: 1
- Trigger at hour: 9
- Trigger at minute: 0

This is the node that kicks off the workflow every morning.

## STEP 3: Search for active tasks in Airtable
This step is all about connecting n8n to your Airtable base and pulling the tasks that are still marked as "In Progress".

### 1. Add the Airtable node
In your n8n workflow, add a node called Airtable → Search Records. You can find it by typing "airtable" in the node search.

### 2. Create your Airtable Personal Access Token
If you haven't already created your Airtable token, here's how:
🔗 Go to: https://airtable.com/create/tokens
Then:
- Name your token something like TACHES.
- Under Scopes, check: ✅ data.records:read
- Under Access, select only the base you want to use (e.g. "Tâches").
- Click "Save token" and copy the personal token.

### 3. Set up the Airtable credentials in n8n
In the Airtable node:
- Click on the Credentials field.
- Select: Airtable Personal Access Token.
- Click Create New and paste your token.
- Give it a name like "My Airtable Token" and click Save.

### 4. Configure the node
Now fill in the parameters:
- Base: Tâches
- Table: Produits (or Tâches, depending on what you called it)
- Operation: Search
- Filter By Formula: {Statut} = "En cours"
- Return All: ✅ Yes (make sure it's enabled)
- Output Format: Simple

### 5. Test the node
Click "Execute Node". You should now see all tasks with Statut = "En cours" show up in the output (on the right-hand side of your screen).

## STEP 4: Send each task to Slack
Now that we've fetched all the active tasks from Airtable, let's send them to Slack, one by one, using a loop.

### Add the Slack node
Drag a new node into your n8n workflow and select Slack → Message. Name it something like "Send Slack Message". You can find it quickly by typing "Slack" into the node search bar.

### Connect your Slack account
If you haven't already connected your Slack credentials:
- Go to n8n → Credentials.
- Select Slack API and click Create new.
- Paste your Slack Bot Token (from your Slack App OAuth settings).
- Give it a clear name like "Slack Bot n8n", choose the workspace, and save.

Then, in the Slack node, choose this credential from the dropdown.

### Configure the message
Set these parameters:
- Operation: Send
- Send Message To: Channel
- Channel: your Slack channel (e.g. #tous-n8n)
- Message Type: Simple Text Message

### Message template
Paste the following inside the Message Text field:

New task for {{ $json.name }}: {{ $json["Titre"] }} 👉 Deadline: {{ $json["Date limite"] }}

Example output:
New task for Jeremy: Relancer fournisseur X 👉 Deadline: 2025-07-04

### Test it
Click Execute Node to verify the message is correctly sent in Slack. If the formatting works, you're ready to run it on schedule 🚀
by Jonathan
This workflow checks a mailbox for new emails, and if the subject contains "Expenses" or "Receipt", it sends the attachment to Mindee for processing and then updates a Google Sheet with the extracted values. To use this workflow, you will need to set the Email Read node to use your mailbox's credentials and configure the Mindee and Google Sheets nodes to use your credentials.
by David w/ SimpleGrow
This n8n workflow tracks user engagement in a specific WhatsApp group by capturing incoming messages via a Whapi webhook. It first filters messages to ensure they come from the correct group, then identifies the message type—text, emoji reaction, voice, or image. The workflow searches for the user in an Airtable database using their WhatsApp ID and increments their message count by one. It updates the Airtable record with the new count and the date of the last interaction. This automated process helps measure user activity and supports engagement initiatives like weekly raffles or rewards. The system is flexible and can be expanded to include more message types or additional actions. Overall, it provides a seamless way to encourage and track user participation in your WhatsApp community.
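The classify-and-increment logic can be sketched as follows. The message types mirror the description; the Airtable field names ("Message Count", "Last Interaction") are illustrative, not the actual base schema:

```javascript
// Classify the incoming Whapi message by type.
function classifyMessage(msg) {
  if (msg.type === "text") return "text";
  if (msg.type === "voice" || msg.type === "audio") return "voice";
  if (msg.type === "image") return "image";
  if (msg.type === "reaction") return "reaction";
  return "other";
}

// Bump the matched Airtable record's count and stamp the interaction date.
// Field names are illustrative, not the actual Airtable schema.
function incrementRecord(record, today) {
  return {
    ...record,
    "Message Count": (record["Message Count"] || 0) + 1,
    "Last Interaction": today,
  };
}

const updated = incrementRecord(
  { "WhatsApp ID": "123@s.whatsapp.net", "Message Count": 4 },
  "2025-07-04"
);
// updated["Message Count"] → 5
```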
by Agent Circle
This n8n template demonstrates how to crawl comments from a YouTube video and collect all the results in a linked Google Sheet.

Use cases are many: whether you're a YouTube creator trying to understand your audience, a marketer running sample analysis, a data analyst compiling engagement metrics, or part of a growth team tracking YouTube or social media campaign performance, this workflow helps you extract real, actionable insights from YouTube video comments at scale.

## How It Works
1. The workflow starts when you manually click Test Workflow or Execute Workflow in n8n.
2. It reads the list of YouTube video URLs from the Video URLs tab in the connected "YouTube – Get Video Comments" Google Sheet. Only the URLs marked with the Ready status will be processed.
3. The workflow loops through each video and sends an HTTP request to the YouTube API to fetch comment data, then checks whether the request succeeded before continuing.
4. If comments are found, they are split and processed. Each comment is inserted in the Results tab of the connected Google Sheet.
5. Once a URL has been finished, its status in the Video URLs tab is updated to Finished.

## How To Use
1. Download the workflow package.
2. Import the workflow package into your n8n interface.
3. Duplicate the "YouTube - Get Video Comments" Google Sheet template into your Google Sheets account.
4. Set up Google Cloud Console credentials in the following nodes in n8n, ensuring access and suitable rights to the Google Sheets and YouTube services:
   - For Google Sheets access, ensure each node is properly connected to the correct tab in your connected Google Sheet template:
     - Node "Google Sheets - Get Video URLs" → connected to the Video URLs tab;
     - Node "Google Sheets - Insert/Update Comment" → connected to the Results tab;
     - Node "Google Sheets - Update Status" → connected to the Video URLs tab.
   - For YouTube access: set up a GET method in Node "HTTP Request - Get Comments".
5. Open the template in your Google Sheets account. In the Video URLs tab, fill in the video URLs you want to crawl in Column B and update the status for each row in Column A to Ready.
6. Return to the n8n interface and click Execute Workflow.
7. Check the results in the Results tab of the template; the collected comments will appear there.

## Requirements
Basic setup in Google Cloud Console (OAuth or API key method enabled) with access to the YouTube and Google Sheets services.

## How To Customize
By default, the workflow is manually triggered in n8n. However, you can automate the process by adding a Google Sheets trigger that monitors new entries in your connected "YouTube – Get Video Comments" template and starts the workflow automatically.

## Need Help?
Join our community on different platforms for support, inspiration and tips from others.
- Website: https://www.agentcircle.ai/
- Etsy: https://www.etsy.com/shop/AgentCircle
- Gumroad: http://agentcircle.gumroad.com/
- Discord Global: https://discord.gg/d8SkCzKwnP
- FB Page Global: https://www.facebook.com/agentcircle/
- FB Group Global: https://www.facebook.com/groups/aiagentcircle/
- X: https://x.com/agent_circle
- YouTube: https://www.youtube.com/@agentcircle
- LinkedIn: https://www.linkedin.com/company/agentcircle
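The GET request made by "HTTP Request - Get Comments" corresponds to the YouTube Data API v3 `commentThreads` endpoint. A sketch of building that request from a sheet URL (the API key is a placeholder; pagination via `pageToken` is optional):

```javascript
// Build the YouTube Data API v3 commentThreads request URL.
// YOUR_API_KEY would come from Google Cloud Console credentials.
function buildCommentsUrl(videoId, apiKey, pageToken) {
  const params = new URLSearchParams({
    part: "snippet",
    videoId,
    maxResults: "100",
    key: apiKey,
  });
  if (pageToken) params.set("pageToken", pageToken); // follow nextPageToken for more pages
  return `https://www.googleapis.com/youtube/v3/commentThreads?${params}`;
}

// Extract an 11-character video ID from common YouTube URL forms.
function extractVideoId(url) {
  const m = url.match(/(?:v=|youtu\.be\/)([\w-]{11})/);
  return m ? m[1] : null;
}

const id = extractVideoId("https://www.youtube.com/watch?v=dQw4w9WgXcQ");
// id → "dQw4w9WgXcQ"
```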
by Dataki
This workflow serves as a solid foundation when you need an AI Agent to return output in a specific JSON schema, without relying on the often-unreliable Structured Output Parser.

## What It Does
The example workflow takes a simple input (like a food item) and expects a JSON-formatted output containing its nutritional values.

## Why Use This Instead of the Structured Output Parser?
The built-in Structured Output Parser node is known to be unreliable when working with AI Agents. While the n8n documentation recommends using a "Basic LLM Chain" followed by a Structured Output Parser, this alternative workflow avoids the Structured Output Parser node entirely. Instead, it implements a custom loop that manually validates the AI Agent's output.

This method has proven especially reliable with OpenAI's gpt-4.1 series (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano), which tends to produce correctly structured JSON on the first try, as long as the System Prompt is well defined. In this template, gpt-4.1-nano is set by default.

## How It Works
Instead of using the Structured Output Parser, this workflow loops the AI Agent through a manual schema validation process:
- A custom schema check is performed on the AI Agent's response.
- A runIndex counter tracks the number of retries.
- A Switch node routes the result:
  - If the output does not match the expected schema, it routes back to the AI Agent with an updated prompt asking it to return the correct format. The process allows up to 4 retries to avoid infinite loops.
  - If the output does match the schema, it continues to a Set node that serves as the chat response (you can customize this part to fit your use case).

This approach ensures schema consistency, offers flexibility, and avoids the brittleness of the default parser.
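The manual schema check plus Switch routing can be sketched as follows. The expected field list follows the nutrition example and is illustrative; the retry cap of 4 matches the description:

```javascript
// Expected nutrition fields; illustrative, based on the template's example.
const EXPECTED_KEYS = ["calories", "protein", "carbs", "fat"];

// True if the agent output parses as JSON and has all expected numeric fields.
function matchesSchema(output) {
  let parsed;
  try {
    parsed = typeof output === "string" ? JSON.parse(output) : output;
  } catch {
    return false; // not even valid JSON
  }
  return EXPECTED_KEYS.every((k) => typeof parsed[k] === "number");
}

// Decide the Switch node's route from the check and the runIndex retry counter.
function route(output, runIndex, maxRetries = 4) {
  if (matchesSchema(output)) return "respond";
  return runIndex < maxRetries ? "retry" : "fail";
}

const decision = route('{"calories": 95, "protein": 0.5, "carbs": 25, "fat": 0.3}', 0);
// decision → "respond"
```

On a "retry" route the agent is re-prompted with the format instructions; after the fourth failed attempt the loop exits rather than running forever.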
by Jesse White
## Automate High-Quality Voice with Google Text-to-Speech & n8n
Effortlessly convert any text into stunningly realistic, high-quality audio with this powerful n8n workflow. Leveraging Google's advanced Text-to-Speech (TTS) AI, this template provides a complete, end-to-end solution for generating, storing, and tracking voiceovers automatically.

Whether you're a content creator, marketer, or developer, this workflow saves you countless hours by transforming your text-based scripts into ready-to-use audio files. The entire process is initiated from a simple form, making it accessible for users of all technical levels.

### Features & Benefits
- 🗣️ **Studio-Quality Voices**: Leverage Google's cutting-edge AI to produce natural and expressive speech in a wide variety of voices and languages.
- 🚀 **Fully Automated Pipeline**: From text submission to final file storage, every step is handled automatically. Simply input your script and let the workflow do the rest.
- ☁️ **Seamless Cloud Integration**: Automatically uploads generated audio files to Google Drive for easy access and sharing.
- 📊 **Organized Asset Management**: Logs every generated audio file in an Airtable base, complete with the original script, a direct link to the file, and its duration.
- ⚙️ **Simple & Customizable**: The workflow is ready to use out of the box but can be easily customized. Change the trigger, add notification steps, or integrate it with other services in your stack.

### Perfect For a Variety of Use Cases
- 🎬 **Content Creators**: Generate consistent voiceovers for YouTube videos, podcasts, and social media content without needing a microphone.
- 📈 **Marketers**: Create professional-sounding audio for advertisements, product demos, and corporate presentations quickly and efficiently.
- 🎓 **Educators**: Develop accessible e-learning materials, audiobooks, and language lessons with clear, high-quality narration.
- 💻 **Developers**: Integrate dynamic voice generation into applications, build interactive voice response (IVR) systems, or provide audio feedback for user actions.

### How The Workflow Operates
1. **Initiate with a Form**: The process begins when you submit a script, a desired voice, and a language through a simple n8n Form Trigger.
2. **Synthesize Speech**: The workflow sends the text to Google's Text-to-Speech API, which generates the audio and returns it as a base64-encoded file.
3. **Process and Upload**: The data is converted into a binary audio file and uploaded directly to a specified folder in your Google Drive.
4. **Enrich Metadata**: The workflow then retrieves the audio file's duration using the fal.ai ffmpeg API, adding valuable metadata.
5. **Log Everything**: Finally, it creates a new record in your Airtable base, storing the asset name, description (your script), content type, file URLs from Google Drive, and the audio duration for perfect organization.

### What You'll Need
To use this workflow, you will need active accounts for the following services:
- **Google Cloud OAuth2 Client Credentials**: with the Text-to-Speech API enabled.
- **Google Drive**: for audio file storage.
- **Airtable**: for logging and asset management.
- **fal.ai**: for the ffmpeg API used to get audio duration.
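The "Process and Upload" step above decodes the base64 audio returned by the Text-to-Speech API into a binary buffer before the Drive upload. The API's synthesize response carries the audio in an `audioContent` field; a minimal sketch of the conversion:

```javascript
// Decode the base64 `audioContent` returned by Google's Text-to-Speech API
// into a binary buffer ready for upload to Google Drive.
function decodeAudio(response) {
  return Buffer.from(response.audioContent, "base64");
}

// Simulate a TTS response with known bytes to show the round trip.
const fakeResponse = { audioContent: Buffer.from("RIFFdata").toString("base64") };
const audio = decodeAudio(fakeResponse);
// audio.toString() → "RIFFdata"
```

In n8n this conversion is typically done with a Convert to File node or a Code node that sets binary data on the item.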
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically performs weekly keyword research and competitor analysis to discover trending keywords in your industry. It saves you time by eliminating the need to manually research keywords and provides a constantly updated database of trending search terms and opportunities.

## Overview
This workflow automatically researches trending keywords for any specified topic or industry using AI-powered search capabilities. It runs weekly to gather fresh keyword data, analyzes search trends, and saves the results to Google Sheets for easy access and analysis.

## Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For accessing search engines and keyword data sources
- **OpenAI**: AI agent for intelligent keyword research and analysis
- **Google Sheets**: For storing and organizing keyword research data

## How to Install
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI**: Configure your OpenAI API credentials
4. **Configure Google Sheets**: Connect your Google Sheets account and set up your keyword tracking spreadsheet
5. **Customize**: Define your target topics or competitors for keyword research

## Use Cases
- **SEO Teams**: Discover new keyword opportunities and track trending search terms
- **Content Marketing**: Find trending topics for content creation and strategy
- **PPC Teams**: Identify new keywords for paid advertising campaigns
- **Competitive Analysis**: Monitor competitor keyword strategies and market trends

## Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #keywordresearch #seo #brightdata
#webscraping #competitoranalysis #contentmarketing #n8nworkflow #workflow #nocode #seoresearch #keywordmonitoring #searchtrends #digitalmarketing #keywordtracking #contentautomation #marketresearch #trendingkeywords #keywordanalysis #seoautomation #keyworddiscovery #searchmarketing #keyworddata #contentplanning #seotools #keywordscraping #searchinsights #markettrends #keywordstrategy