by Tharwat Mohamed
**🚀 AI Resume Screener (n8n Workflow Template)**

An AI-powered resume screening system that automatically evaluates applicants from a simple web form and gives you clear, job-specific scoring — no manual filtering needed.

**⚡ What the workflow does**
- 📄 Accepts CV uploads via a web form (PDF)
- 🧠 Extracts key info using AI (education, skills, job history, city, birthdate, phone) — see the sample record at the end of this listing
- 🎯 Dynamically matches the candidate to job role criteria stored in Google Sheets
- 📝 Generates an HR-style evaluation and a numeric score (1–10)
- 📥 Saves the result in a Google Sheet and uploads the original CV to Google Drive

**💡 Why you'll love it**

| Feature | Benefit |
|---------|---------|
| AI scoring | Instantly ranks candidate fit without reading every CV |
| Google Sheet-driven | Easily update job profiles — no code changes |
| Fast setup | Connect your accounts and you're live in ~15 mins |
| Scalable | Works for any department, team, or organization |
| Developer-friendly | Extend with Slack alerts, translations, or automations |

**🧰 Requirements**
- 🔑 OpenAI or Google Gemini API key
- 📄 Google Sheet with 2 columns: Role, Profile Wanted
- ☁️ Google Drive account
- 🌐 n8n account (self-hosted or cloud)

**🛠 Setup in 5 Steps**
1. Import the workflow into n8n
2. Connect Google Sheets, Drive, and OpenAI or Gemini
3. Add your job roles and descriptions in Google Sheets
4. Publish the form and test with a sample CV
5. Watch candidate profiles and scores populate automatically

**🤝 Want help setting it up?**
Includes free setup guidance by the creator — available by email or WhatsApp after purchase. I'm happy to assist you in customizing or deploying this workflow for your team.
📧 Email: tharwat.elsayed2000@gmail.com
💬 WhatsApp: +20106 180 3236
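**📋 Sample extracted record**
To give a feel for the output, a single extracted-and-scored candidate record might look like the following. All field names and values here are illustrative — the actual columns depend on how you configure your sheet:

```json
{
  "name": "Jane Doe",
  "city": "Cairo",
  "birthdate": "1996-04-12",
  "phone": "+20 100 000 0000",
  "education": "BSc Computer Science, Cairo University",
  "skills": ["Python", "SQL", "Data Analysis"],
  "jobHistory": "3 years as a data analyst at a fintech startup",
  "matchedRole": "Data Analyst",
  "evaluation": "Strong technical match; limited leadership experience.",
  "score": 8
}
```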
by Juan Carlos Cavero Gracia
**Description**

This automation template is designed for content creators, social media managers, and influencers who want to streamline their video publishing workflow. It automatically detects new videos uploaded to a specific Google Drive folder, generates AI-powered descriptions based on the video's audio content, and simultaneously publishes them across Instagram, TikTok, and YouTube while tracking everything in Airtable.

*Note: This workflow uses the upload-post.com API (free trial, no credit card required) for multi-platform video distribution and requires API tokens for each service. The AI-generated descriptions are created using OpenAI's transcription and chat models to analyze video audio content.*

**Who Is This For?**
- **Content Creators & Influencers:** Automatically publish your videos across all major social platforms without manual work.
- **Social Media Managers:** Maintain consistent posting schedules across multiple platforms with AI-generated, platform-optimized descriptions.
- **Marketing Teams:** Scale video content distribution with automated workflows that include tracking and status monitoring.
- **Video Producers:** Focus on creating content while the system handles the tedious task of multi-platform publishing and description generation.

**What Problem Does This Workflow Solve?**

Publishing the same video content across Instagram, TikTok, and YouTube is time-consuming and repetitive. You need to manually upload each video, write unique descriptions, and track publication status. This workflow addresses these challenges by:
- **Automated Video Distribution:** Detects new videos in Google Drive and automatically uploads them to all three platforms simultaneously.
- **AI-Powered Content Generation:** Uses OpenAI to transcribe video audio and generate engaging, platform-appropriate descriptions automatically.
- **Centralized Tracking:** Maintains detailed records in Airtable, including upload status, URLs, and metadata for each platform.
- **Error Monitoring:** Provides real-time error notifications via Telegram to ensure you're always aware of any issues.

**How It Works**
1. **Video Upload Detection:** The workflow monitors a specific Google Drive folder for new video uploads using automated triggers.
2. **Content Analysis:** Downloads the video, extracts audio, and uses OpenAI to transcribe it and generate compelling descriptions.
3. **Airtable Integration:** Creates and updates records to track video metadata, descriptions, and publication status.
4. **Multi-Platform Publishing:** Simultaneously uploads the video to Instagram, TikTok, and YouTube using the upload-post.com API.
5. **Status Tracking:** Updates Airtable records with publication status and platform-specific URLs for each successful upload.
**Setup**
1. **Google Drive Configuration:** Set up the Google Drive trigger to monitor your specific folder and configure OAuth2 credentials for Google Drive access.
2. **OpenAI Integration:** Add your OpenAI API key to enable audio transcription and description generation.
3. **Airtable Setup:** Create an Airtable base with fields for Video Name, Description, Platform Status, URLs, and Upload Date. Add your Airtable API token and configure base/table IDs in the "Set Variables" node (a sketch of these values follows below).
4. **Upload-Post.com Account:** Create an account at upload-post.com to get your API token. Configure the token in the HTTP request nodes for each platform and set your user ID in the variables section.
5. **Platform Accounts:** Ensure your Instagram, TikTok, and YouTube accounts are connected to upload-post.com.
6. **Error Notifications (Optional):** Configure Telegram bot credentials for error notifications.

**Requirements**
- **Accounts:** Google Drive, OpenAI, Airtable, upload-post.com, Telegram (optional)
- **API Keys & Credentials:** Google Drive OAuth2, OpenAI API Key, Airtable API Token, upload-post.com API Token
- **Platform Setup:** Instagram, TikTok, and YouTube accounts connected to upload-post.com

Transform your video publishing workflow from hours of manual work into a fully automated system that handles everything from content analysis to multi-platform distribution and tracking.
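The exact variable names in the "Set Variables" node depend on the template, but the values collected in the setup steps above might be organized roughly like this — every name and ID below is a placeholder:

```json
{
  "uploadPostApiToken": "<your-upload-post.com-token>",
  "uploadPostUserId": "<your-upload-post-user-id>",
  "airtableBaseId": "appXXXXXXXXXXXXXX",
  "airtableTableId": "tblXXXXXXXXXXXXXX",
  "driveFolderId": "<google-drive-folder-id>"
}
```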
by ScrapeOps
**Amazon Product Price Tracker**

This workflow automatically monitors Amazon product prices, tracks price changes, and sends alerts when significant price fluctuations occur. Built with ScrapeOps' structured data API, it provides a reliable, maintenance-free solution for price tracking without worrying about anti-bot measures or complex selectors.

**What This Workflow Does**
- Monitors multiple Amazon products simultaneously using their ASINs
- Calculates both absolute and percentage price changes (see the Code-node sketch at the end of this section)
- Sends customizable email alerts when prices cross defined thresholds
- Maintains a historical record of all price data for trend analysis
- Updates a Google Sheet with the latest price information

**Prerequisites**
- A ScrapeOps API key (register at https://scrapeops.io)
- Google account for Google Sheets integration
- SMTP email configuration for alerts

**Setup Instructions**

*Spreadsheet Setup*
1. Make a copy of the template spreadsheet: https://docs.google.com/spreadsheets/d/1hRv-TBXrpN6rkIU65WorttNHt-IPWas_An0sF4Of39U
2. Add your Amazon product ASINs in the "Products to Monitor" sheet
3. Set your desired alert thresholds for price increases/decreases

*Workflow Configuration*
1. Add your ScrapeOps API key to the "Setup" node
2. Update the spreadsheet URL in the "Setup" node with YOUR copy
3. Configure your email settings for notifications
4. Adjust the schedule frequency as needed (default: hourly)

**How It Works**
The workflow reads product ASINs from your Google Sheet, fetches current pricing data via ScrapeOps' Amazon Product API, calculates price changes, updates your spreadsheet, and sends alerts when price movements exceed your defined thresholds. Unlike traditional web scrapers that break when websites change, this solution uses ScrapeOps' reliable API, which handles all the complexity of Amazon data extraction, ensuring consistent results without maintenance.

**Additional Notes**
- This workflow is ideal for deal hunters, price comparison services, and e-commerce analytics
- The alerting system can be extended to additional channels like Slack or Telegram
- ScrapeOps handles all anti-bot measures, proxy management, and parsing complexities
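For reference, the price-change math is simple enough to fit in a single n8n Code node. Here is a minimal sketch, assuming input items carry `currentPrice`, `previousPrice`, and `alertThresholdPct` fields — those names are assumptions, so map them to the actual columns in your sheet:

```javascript
// n8n Code node (Run Once for All Items): compute price deltas and flag
// items whose move exceeds the configured alert threshold.
return $input.all().map(item => {
  const { currentPrice, previousPrice, alertThresholdPct = 5 } = item.json;
  const absoluteChange = currentPrice - previousPrice;
  const percentChange = previousPrice
    ? (absoluteChange / previousPrice) * 100
    : 0;
  return {
    json: {
      ...item.json,
      absoluteChange: Number(absoluteChange.toFixed(2)),
      percentChange: Number(percentChange.toFixed(2)),
      // Alert on moves in either direction beyond the threshold
      sendAlert: Math.abs(percentChange) >= alertThresholdPct,
    },
  };
});
```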
by Muhammad Ashar
**How It Works – Your AI Marketing Team in Action**

This automation acts as your AI-powered content and image marketing assistant inside Telegram. With just a voice note or text message, it can:
- 🧠 **Understand your request** – Whether you send a message or speak into Telegram, it transcribes and processes your input using GPT-4.
- 🎨 **Create and edit content** – Based on what you say, it can generate:
  - ✍️ Blog posts
  - 💼 LinkedIn posts
  - 🎬 Faceless videos
  - 🖼️ AI-generated images
  - 🪄 Edits to existing images
  - 🔎 Searches through your image database
- 💬 **Replies directly in Telegram** – It sends you back the result — whether that's a post, image, or video link — without leaving the app.
- 🧩 **Built using LangChain agent logic** – It intelligently chooses the right tool from a suite of sub-workflows like "Create Image", "Blog Post", or "Video" using agent reasoning.

**🛠️ Setup Steps – Get Started in Minutes!**

⌛ Time estimate: ~15–30 minutes (faster if you're familiar with n8n)

1. 🔗 **Import the Template Pack** – 📥 Download and install these workflows into your n8n: Create Image, Edit Image, Search Images, Blog Post, LinkedIn Post, Video.
2. 🔐 **Add Required Credentials**
   - Telegram Bot 🤖
   - OpenRouter AI 🧠
   - Tavily API 📚 (for smart research)
   - ElevenLabs 🎙️ (for voice in videos)
   - PiAPI & Runway 🎞️ (for faceless videos)
3. 🧩 **Link the Tools to the Agent Node** – Make sure the "Marketing Team Agent" is connected to each of the content creation tools as shown in the workflow.
4. 📎 **Download Templates & Logs**
   - 🧾 Google Sheets log template (to track output)
   - 🖼️ Creatomate template (optional, for enhanced image control – shared in the Skool group)

📌 Pro Tip: All detailed step-by-step setup instructions are included as sticky notes inside the n8n canvas. Just follow along!
by Saswat Saubhagya Rout
**📝 Use Case**

This n8n workflow automates the creation and publication of technical blog posts based on a list of topics stored in Google Sheets. It fetches context using Tavily and Wikipedia, generates Markdown-formatted content with Gemini AI, commits it to a GitHub repository, and updates a Jekyll-powered blog — all without manual intervention. Ideal for developers, bloggers, or content teams who want to streamline technical content creation and publishing.

**⚙️ Setup Instructions**

*🔑 Prerequisites*
- n8n (cloud or self-hosted)
- Tavily API key
- Google Sheets with blog topics
- Gemini (Google PaLM) API key
- GitHub repository (Jekyll enabled)
- GitHub OAuth2 credentials
- Google OAuth2 credentials

*🧩 Setup Steps*
1. Import the workflow JSON into your n8n instance.
2. Set up the following credentials in n8n: Tavily API, Google Sheets OAuth2, Google PaLM/Gemini AI, and GitHub OAuth2.
3. Prepare your Google Sheet with columns Title, status, and row_number; leave status blank for topics to be picked up.
4. Configure the GitHub repo and _posts/ path, plus your Jekyll setup (front matter, _config.yml, GitHub Pages).
5. Adjust prompt/custom parameters if needed.
6. Enable and deploy the workflow. Schedule it daily or trigger it manually.

**🔄 Workflow Details**

| Node | Function |
|------|----------|
| Schedule Trigger | Triggers the flow at a set interval |
| Google Sheets (Get Topic) | Fetches the next incomplete blog topic |
| Extract Topic | Parses topic text from the sheet |
| Tavily Search | Gathers up-to-date content related to the topic |
| Wikipedia Tool | Optionally adds more context or images |
| Summarize Results | Formats the context for the AI |
| Gemini AI Agent (LangChain) | Generates a Markdown blog post with YAML front matter |
| Set File Parameters | Prepares the filename, content, and commit message (sketched at the end of this listing) |
| GitHub Commit | Uploads the .md file to the _posts/ directory |
| Update Google Sheet | Marks the topic as done after a successful commit |

**🛠️ Customization Options**
- Change the LLM prompt (e.g., tone, depth, format).
- Use OpenAI instead of Gemini by switching nodes.
- Modify the filename pattern or GitHub repo path.
- Add Slack/Discord notifications after publishing.
- Extend the flow to upload images or embed YouTube links.

**⚠️ Community Nodes Used**

This workflow uses the following community nodes:
- @tavily/n8n-nodes-tavily.tavily – for deep search

> ⚠️ Ensure these are installed and enabled in your n8n instance.

**💡 Pro Tips**
- Use GitHub Actions to trigger an automatic Jekyll build post-commit.
- Structure blog posts with front matter, headings, and a table of contents for SEO.
- Set the Schedule Trigger to daily at a fixed time to keep content flowing.
- Enhance formatting in AI output using code blocks, images, and lists.

**✅ Example Output**

```yaml
title: "How LLMs Are Changing Web Development"
date: "2025-07-25"
categories: [webdev, AI]
tags: [LLM, Gemini, n8n, automation]
excerpt: "Learn how LLMs like Gemini are transforming how we generate and deploy developer content."
author: "Saswat Saubhagya"
```

Table of Contents
- Introduction
- Understanding LLMs
- Use Cases in Web Development
- Challenges
- Conclusion
...
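As a rough illustration of the Set File Parameters step, a Code node along these lines would produce the Jekyll-compatible filename and commit metadata. The input field names (`title`, `markdownBody`) are assumptions — adapt them to your workflow's actual fields:

```javascript
// Sketch of a "Set File Parameters" step: build the Jekyll filename,
// commit message, and file content from the AI agent's output.
const title = $input.first().json.title;
const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')   // non-alphanumerics -> hyphens
  .replace(/(^-|-$)/g, '');      // trim leading/trailing hyphens

return [{
  json: {
    filePath: `_posts/${date}-${slug}.md`,  // Jekyll expects YYYY-MM-DD-slug.md
    commitMessage: `post: add "${title}"`,
    content: $input.first().json.markdownBody, // AI-generated Markdown
  },
}];
```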
by Don Jayamaha Jr
📊 This AI sub-agent aggregates Tesla (TSLA) trading signals across multiple timeframes using real-time technical indicators and candlestick behavior. It is a core component of the Tesla Quant Trading AI system. Powered by GPT-4.1, it consolidates 15-minute, 1-hour, and 1-day indicators, adds candlestick pattern data, and produces a unified JSON signal for downstream use by the master agent.

⚠️ This agent is not standalone. It is triggered by the Tesla Quant Trading AI Agent via Execute Workflow.
🧠 Requires: 4 connected sub-agents and an Alpha Vantage Premium API key.

**🔌 Required Sub-Workflows**
To use this workflow, you must install:
- Tesla 15min Indicators Tool
- Tesla 1hour Indicators Tool
- Tesla 1day Indicators Tool
- Tesla 1hour and 1day Klines Tool
- Tesla Quant Technical Indicators Webhooks Tool (provides Alpha Vantage data)

**🧠 What This Agent Does**
- Fetches pre-cleaned 20-point JSON outputs from the 4 sub-agents listed above
- Analyzes each timeframe individually:
  - 15m: momentum and short-term setups
  - 1h: confirmation of emerging trends
  - 1d: macro positioning and trend alignment
  - Klines: candlestick reversal patterns and volume divergence
- Generates a structured final signal in JSON with:
  - Trading stance: Buy, Sell, Hold, or Cautious
  - Confidence score (0.0–1.0)
  - Multi-timeframe indicator breakdown
  - Candlestick and volume divergence annotations

**📋 Sample Output**

```json
{
  "summary": "TSLA momentum is weakening short-term. 1h MACD shows bearish crossover, RSI declining. 1d candles confirm potential reversal setup.",
  "signal": "Cautious Sell",
  "confidence": 0.81,
  "multiTimeframeInsights": {
    "15m": { "RSI": 68.3, "MACD": { "macd": 0.53, "signal": 0.61 }, ... },
    "1h": { "RSI": 65.0, "MACD": { "macd": -0.32, "signal": 0.11 }, ... },
    "1d": { "BBANDS": { ... }, ... },
    "candlestickPatterns": { "1h": "Doji", "1d": "Bearish Engulfing" },
    "volumeDivergence": { "1h": "Bearish", "1d": "Neutral" }
  }
}
```

**🛠️ Setup Instructions**
1. Import this workflow into n8n and name it: Tesla_Financial_Market_Data_Analyst_Tool
2. Add required API credentials:
   - Alpha Vantage Premium (via HTTP Query Auth)
   - OpenAI GPT-4.1 for reasoning and synthesis
3. Link the required sub-agents:
   - Connect the 4 tool workflows listed above to their respective Tool Workflow nodes
   - Connect the webhook provider for data fetches
4. Set up as a sub-agent:
   - This workflow must be triggered using Execute Workflow from the parent agent
   - Pass in: message (optional context) and sessionId (used for memory continuity)

**🧾 Sticky Notes Provided**
- 📘 Tesla Financial Market Data Analyst — core logic overview
- 📈 15m / 1h / 1d Tool Notes — indicator lists + use cases
- 🕯️ Klines Tool Note — candlestick and volume divergence patterns
- 🧠 GPT Reasoning Note — GPT-4.1 handles final synthesis
- 🧩 Sub-Workflow Trigger — proper integration with the parent agent
- 🧠 Memory Buffer — maintains session context across evaluations

**🔒 Licensing & Support**
© 2025 Treasurium Capital Limited Company. The logic, prompt design, and multi-agent architecture are proprietary and IP-protected.
For support or collaboration inquiries:
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Unify your Tesla trading logic across timeframes — automated, AI-powered, and built for scalpers and swing traders.
by Ficky
**Build a Redis-Powered CRUD App with HTML Frontend**

This workflow demonstrates how to use n8n to build a complete, self-contained CRUD (Create, Read, Update, Delete) application without relying on any external server or hosting. It not only acts as the backend, handling all CRUD operations through webhook endpoints, but also serves a fully functional HTML Single Page Application (SPA) directly via a webhook response. Redis is used as a lightweight data store, providing fast and simple key-value storage with auto-incremented IDs.

Because both the frontend (HTML app) and backend (API endpoints) are managed entirely within a single n8n workflow, you can quickly prototype or deploy small tools without additional infrastructure. This approach is ideal for:
- Rapidly creating no-code or low-code applications
- Running fully browser-based tools served directly from n8n
- Teaching or demonstrating n8n + Redis integration in a single workflow

**Features**
- Add new items with auto-incremented IDs
- Edit existing items
- Delete specific items
- Reset all data (clear storage and reset the auto-increment ID)
- Single HTML frontend for demonstration (no framework required)

**Setup Instructions**

*1. Prerequisites*
Before importing and running the workflow, make sure you have:
- A running n8n instance (self-hosted or cloud)
- A running Redis server (local or remote)

*2. API Path Setup*
For the REST API, use a consistent path. For example, if you choose items as the path (a client-side usage sketch follows at the end of this section):
- 2a. **Get All Items** — Method: GET, Endpoint: items
- 2b. **Add Item** — Method: POST, Endpoint: items
- 2c. **Edit Item** — Method: PUT, Endpoint: items
- 2d. **Delete Item** — Method: DELETE, Endpoint: items
- 2e. **Reset Items** — Method: POST, Endpoint: items-reset

*3. Configure the API URL*
Set the API URL in the SET API URL node. Use your n8n webhook URL, for example: https://yourn8n.com/webhook/items

*4. Run the HTML App*
Once everything is set:
- Open the webhook URL for the HTML app in a browser.
- The CRUD interface will load and connect to the API endpoints automatically.
- You can now add, edit, delete, or reset items directly from the web interface.

**Workflows**

*1. Render the HTML CRUD App*
This webhook serves a self-contained HTML Single Page Application (SPA) for basic CRUD operations. The HTML content is returned directly in the webhook response. This setup is ideal for lightweight, browser-based tools without external hosting.

How to use:
- Open the webhook URL in a browser.
- The CRUD interface will load and connect to the data source via API calls.
- Before using, make sure to edit the api_url in the SET API URL node to match your webhook endpoint.

*2a. REST API: Get All Items*
This webhook retrieves all saved items from Redis. Each item is returned with its corresponding ID and associated data (e.g., name). This endpoint is used by the HTML CRUD App to display the full list of items.
- **Method:** GET
- **Function:** Fetches all items stored in Redis and returns them as a JSON array

*2b. REST API: Add Item*
This webhook handles the Add Item functionality. It is typically called by the HTML CRUD App when adding a new item.
- **Method:** POST
- **Request Body:** { "name": "item name" }
- **Function:** Generates an auto-incremented ID using Redis and saves the data under that ID

*2c. REST API: Edit Item*
This webhook updates an existing item in Redis.
- **Method:** PUT
- **Request Body:** { "id": 1, "name": "Updated Item Name" }
- **Function:** Finds the item by the given id and updates its data in Redis

*2d. REST API: Delete Item*
This webhook deletes a specific item from Redis.
- **Method:** DELETE
- **Request Body:** { "id": 1 }
- **Function:** Removes the item with the given id from Redis

*2e. REST API: Reset Items*
This webhook resets all data in the application.
- **Method:** POST
- **Function:** Deletes all stored items from Redis and resets the auto-increment ID by deleting its key in Redis
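To exercise the API outside the bundled frontend, client calls along these lines should work. The base URL is a placeholder — substitute your own webhook path (Node 18+ ESM assumed, for built-in fetch and top-level await):

```javascript
// Hypothetical client calls against the CRUD endpoints described above.
const API = 'https://yourn8n.com/webhook/items'; // placeholder base URL
const headers = { 'Content-Type': 'application/json' };

// Create a new item (Redis assigns the auto-incremented ID)
await fetch(API, {
  method: 'POST',
  headers,
  body: JSON.stringify({ name: 'My first item' }),
});

// Read all items
const items = await (await fetch(API)).json();
console.log(items);

// Update item 1
await fetch(API, {
  method: 'PUT',
  headers,
  body: JSON.stringify({ id: 1, name: 'Updated Item Name' }),
});

// Delete item 1
await fetch(API, {
  method: 'DELETE',
  headers,
  body: JSON.stringify({ id: 1 }),
});
```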
by Davide
This workflow combines OpenAI, Retrieval-Augmented Generation (RAG), and WooCommerce to create an intelligent personal shopping assistant. It handles two scenarios:
1. **Product Search:** Extracts user intent (keywords, price ranges, SKUs) and fetches matching products from WooCommerce.
2. **General Inquiries:** Answers store-related questions (e.g., opening hours, policies) using RAG and documents stored in Google Drive.

**How It Works**

*1. Chat Interaction & Intent Detection*
- **Chat Trigger:** Starts when a user sends a message ("When chat message received").
- **Information Extractor:** Uses OpenAI to analyze the message and determine whether the user is searching for a product or asking a general question. It extracts:
  - search (true/false)
  - keyword, priceRange, SKU, category (if product-related)

Example:

```json
{
  "search": true,
  "keyword": "red handbags",
  "priceRange": { "min": 50, "max": 100 },
  "SKU": "BAG123",
  "category": "women's accessories"
}
```

*2. Product Search (WooCommerce Integration)*
- **AI Agent:** If search: true, routes the request to the personal_shopper tool.
- **WooCommerce Node:** Queries the WooCommerce store using the extracted parameters (keyword, priceRange, SKU), filters for products in stock (stockStatus: "instock"), and returns matching products (e.g., "red handbags under €100"). A sketch of this query mapping appears at the end of this listing.

*3. General Inquiries (RAG System)*
- **RAG Tool:** If search: false, uses the Qdrant Vector Store to retrieve store information from documents.
- **Google Drive Integration:** Documents (e.g., store policies, FAQs) are stored in Google Drive, then downloaded, split into chunks, and embedded into Qdrant for semantic search.
- **OpenAI Chat Model:** Generates answers based on the retrieved documents (e.g., "Our store opens at 9 AM").

**Set Up Steps**

*1. Configure the RAG System*
- **Google Drive Setup:** Upload store documents and update the Google Drive2 node with your folder ID.
- **Qdrant Vector Database:** Clean the collection (update the Qdrant Vector Store node with your URL) and use Embeddings OpenAI to convert documents into vectors.

*2. Configure OpenAI & WooCommerce*
- **OpenAI Credentials:** Add your API key to all OpenAI nodes (OpenAI Chat Model, Embeddings OpenAI, etc.).
- **WooCommerce Integration:** Connect your WooCommerce store (credentials in the personal_shopper node) and ensure product data is synced and accessible.

*3. Customize the AI Agent*
- **Intent Detection:** Modify the Information Extractor's system prompt to align with your store's terminology.
- **RAG Responses:** Update the tool description to reflect your store's documents.

**Notes**
This template is ideal for e-commerce businesses needing a hybrid assistant for product discovery and customer support.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
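For those curious how the extracted intent maps onto a WooCommerce query, here is a minimal sketch. The n8n WooCommerce node does the equivalent internally; the parameter names follow WooCommerce's REST API v3 "list products" endpoint, but verify them against your store's API version:

```javascript
// n8n Code node sketch: translate the Information Extractor's output
// into WooCommerce REST API query parameters.
const intent = $input.first().json; // e.g. the example JSON shown above

const query = {
  search: intent.keyword,
  stock_status: 'instock', // only in-stock products
};
if (intent.SKU) query.sku = intent.SKU;
if (intent.priceRange?.min != null) query.min_price = String(intent.priceRange.min);
if (intent.priceRange?.max != null) query.max_price = String(intent.priceRange.max);

return [{ json: { endpoint: '/wp-json/wc/v3/products', query } }];
```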
by Edoardo Guzzi
**Simple Social: Instagram Single Image Post with Facebook API**

**Who is this workflow for?**
This workflow is designed for businesses, social media managers, content creators, and developers who need to automate the process of posting single images to Instagram using the Facebook API. It is ideal for anyone looking to streamline their social media posting process, saving time and ensuring consistent content delivery.

**Use Case / Problem Solved**
Manually posting images and captions on Instagram can be time-consuming, especially for businesses and content creators managing multiple accounts. This workflow automates the process from image preparation to publishing, reducing manual effort and increasing efficiency.

**What this workflow does**
1. **Trigger Initialization:** The workflow starts with a manual trigger that can be adapted to other triggers (e.g., HTTP webhook or schedule).
2. **Set Parameters:** A node sets essential parameters, such as the image URL, Instagram business account ID, and caption.
3. **Prepare Instagram Media:** A node prepares the media for upload using the Facebook API, sending the image and caption for pre-publication processing (the underlying Graph API calls are sketched at the end of this listing).
4. **Check Media Upload Status:** The workflow verifies whether the media preparation is complete.
5. **Conditional Check:** If the media preparation is successful, the workflow proceeds to publish; otherwise, it triggers an error-handling path.
6. **Publish Media:** The media is published on Instagram if the conditions are met.
7. **Post-Publish Check:** The workflow checks the status after publication.
8. **Conditional Check for Publication:** If the publication status is "PUBLISHED", it triggers a success path; otherwise, it triggers failure handling.
9. **Email Notifications:** The workflow sends email notifications to report successful or unsuccessful outcomes.

**Setup**
Here is a quick video in Italian with English subtitles: https://youtu.be/obWJFJvg_6g
1. **Add API Credentials:** Ensure that valid Facebook API credentials are added and configured for use.
2. **Permissions Required:** Ensure your app has the necessary permissions (ads_management, business_management, instagram_basic, instagram_content_publish, pages_read_engagement). App review may be required for external user access.
3. **Node Configuration:** Customize the Set Instagram Parameters node to specify the image URL, caption, and Instagram business account ID.
4. **Trigger Adaptation:** Adapt the initial trigger if needed to fit your workflow's requirements (e.g., schedule, webhook).

**How to customize this workflow**
- **Change the Image URL and Caption:** Modify the Set Instagram Parameters node to change the image and caption.
- **Trigger Customization:** Replace the manual trigger with other triggers, such as a webhook, to automate posting based on external events.
- **Notifications:** Adjust the email nodes to send customized messages or trigger other workflows based on the outcome.

**Limitations**
- **Image Format:** Only JPEG images are supported. Extended JPEG formats such as MPO and JPS are not compatible.
- **Unsupported Tags:** Shopping tags, branded content tags, and filters are not supported.
- **Instagram TV:** Publishing to Instagram TV is not supported.
- **Rate Limit:** Instagram accounts are limited to 50 API-published posts within a rolling 24-hour period. Carousels count as a single post. Check usage with GET /{ig-user-id}/content_publishing_limit.

**Example Usage**
Imagine managing a business account that needs consistent posts. You can schedule this workflow or trigger it manually to automatically post images with captions at the right time, ensuring that your audience stays engaged without manual posting efforts.
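Under the hood, the workflow wraps Instagram's two-step content publishing flow: create a media container, then publish it. A rough sketch of the equivalent raw calls follows — the Graph API version, token handling, and all IDs are assumptions to adapt to your setup:

```javascript
// Node 18+ ESM sketch of the container-then-publish flow.
const IG_USER_ID = '1784xxxxxxxxxxxx';   // placeholder Instagram business account ID
const TOKEN = process.env.FB_TOKEN;      // access token with the permissions listed above
const base = `https://graph.facebook.com/v19.0/${IG_USER_ID}`;

// 1. Create a media container from the image URL and caption
const container = await (await fetch(
  `${base}/media?image_url=${encodeURIComponent('https://example.com/photo.jpg')}` +
  `&caption=${encodeURIComponent('Hello from n8n!')}&access_token=${TOKEN}`,
  { method: 'POST' }
)).json();

// (The workflow's status check polls GET /{container-id}?fields=status_code here.)

// 2. Publish the container once media preparation is complete
const published = await (await fetch(
  `${base}/media_publish?creation_id=${container.id}&access_token=${TOKEN}`,
  { method: 'POST' }
)).json();

console.log('Published media ID:', published.id);
```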
by Yaron Been
**🔍 Scrape Glassdoor with Bright Data**

Designed for sales teams, recruiters, and marketers aiming to automate job discovery and prospecting. This workflow scrapes Glassdoor job listings using Bright Data and automatically generates targeted pitches using AI, streamlining lead identification and outreach.

**🧩 How It Works**
This automation leverages n8n, Bright Data, Google Sheets, and OpenAI:
1. **Trigger** — Starts with a custom form input (Location, Keyword, Country).
2. **Bright Data Job Scrape** (sketched in code at the end of this listing)
   - Triggers a Bright Data dataset snapshot via HTTP Request.
   - Polls snapshot progress using a Wait node, ensuring data readiness.
   - Retrieves the full job listings dataset once ready.
3. **Google Sheets Integration**
   - Writes detailed job data (company, role, location, overview, metrics) into a Google Sheet.
   - Uses a pre-built template for organized data storage.
4. **Automated Pitch Generation (AI)**
   - Splits listings into actionable parts: company name, title, and description.
   - Sends data to OpenAI (via LangChain) to generate relevant pitches or icebreakers.
   - Saves generated content back into the same sheet for easy access.

**✅ Requirements**
- **Google Sheets:** Google account; template sheet with columns for job details and AI-generated pitches
- **Bright Data:** Active account with Dataset API access; API key and dataset ID
- **OpenAI:** Valid OpenAI API key for GPT models
- **n8n Environment:**
  - Nodes: HTTP Request, Wait, If, Google Sheets, Split Out, LangChain (OpenAI)
  - Credentials: Google Sheets OAuth2, Bright Data API credentials, OpenAI API key

**⚙️ Setup Instructions**
1. **Prepare Google Sheets:** Copy the provided Google Sheets template; do not change the headers.
2. **Import & Configure Workflow in n8n:** Import the workflow JSON file, link the Google Sheets node to your copied sheet, and confirm the correct tab name.
3. **Configure Bright Data:** Replace <YOUR_BRIGHT_DATA_API_KEY> with your real key and set your dataset ID in all HTTP Request nodes.
4. **Configure OpenAI (LangChain):** Connect your OpenAI API key to the LangChain node and customize the prompt to match your tone and outreach style.
5. **Testing & Scheduling:** Test via the manual form trigger, then schedule runs or leave the form enabled for on-demand use.

**🧠 Tips & Best Practices**
- Use specific keywords and locations for better results.
- Adjust polling intervals based on dataset size.
- Refine AI prompts regularly to improve pitch quality.
- Clean unused columns from your sheet to boost performance.

**💬 Support & Feedback**
For help or customization:
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
📚 Bright Data Docs: docs.brightdata.com/introduction
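The trigger-then-poll pattern implemented by the HTTP Request and Wait nodes looks roughly like this. Endpoint shapes follow Bright Data's Dataset API v3, but the dataset ID, input fields, and polling interval below are assumptions — verify against the current Bright Data docs:

```javascript
// Node 18+ ESM sketch of the snapshot trigger/poll/download cycle.
const headers = {
  Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`,
  'Content-Type': 'application/json',
};

// 1. Trigger a snapshot for the form inputs
const { snapshot_id } = await (await fetch(
  'https://api.brightdata.com/datasets/v3/trigger?dataset_id=YOUR_DATASET_ID',
  {
    method: 'POST',
    headers,
    body: JSON.stringify([{ location: 'Berlin', keyword: 'Sales', country: 'DE' }]),
  }
)).json();

// 2. Poll until the snapshot is ready (the workflow uses a Wait node for this)
let status = 'running';
while (status === 'running') {
  await new Promise(r => setTimeout(r, 30_000)); // wait 30s between checks
  ({ status } = await (await fetch(
    `https://api.brightdata.com/datasets/v3/progress/${snapshot_id}`, { headers }
  )).json());
}

// 3. Download the finished job listings dataset
const jobs = await (await fetch(
  `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}?format=json`, { headers }
)).json();
```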
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**
The Legal Case Research Extractor is a powerful automated workflow designed for legal tech teams, researchers, law firms, and data scientists focused on transforming unstructured legal case data into actionable, structured insights. This workflow is tailored for:
- Legal researchers automating case law data mining
- Litigation support teams handling large volumes of case records
- LawTech startups building AI-powered legal research assistants
- Compliance analysts extracting case-specific insights
- AI developers working on legal NLP, summarization, and search engines

**What problem is this workflow solving?**
Legal case data is often locked in semi-structured or raw HTML formats, scattered across jurisdiction-specific websites. Manually extracting and processing this data is tedious and inefficient. This workflow automates:
- Extraction of legal case data via Bright Data's powerful MCP infrastructure
- Parsing of HTML into clean, readable text using the Google Gemini LLM
- Structuring and delivering the output through a webhook and file storage

**What this workflow does**
1. **Input:** The Set the Legal Case Research URL node sets the legal case URL for data extraction.
2. **Bright Data MCP Data Extractor:** The Bright Data MCP Client For Legal Case Research node extracts the legal case via the Bright Data MCP tool scrape_as_html.
3. **Case Extractor:** The Google Gemini-based Case Extractor produces a paginated list of cases.
4. **Loop through Legal Case URLs:** Receives a collection of legal case links to process; each URL represents a different case from a target legal website.
5. **Bright Data MCP Scraping:** Uses Bright Data's scrape_as_html MCP mode to retrieve the raw HTML content of each legal case.
6. **Google Gemini LLM Extraction:** Transforms raw HTML into clean, structured text and performs additional information extraction if required (e.g., case summary, court, jurisdiction).
7. **Webhook Notification:** Sends extracted legal case content to a configurable webhook URL, enabling downstream processing or storage in legal databases.
8. **Binary Conversion & File Persistence:** Converts the structured text to binary format and saves the final response to disk for archival or further processing.

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- A Bright Data account, configured as described in the Setup section below
- A Google Gemini API key (visit Google AI Studio)
- The Bright Data MCP server @brightdata/mcp installed
- The n8n-nodes-mcp community node installed

**Setup**
1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP server. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token> (a sample configuration follows below).

**How to customize this workflow to your needs**
- **Target new legal portals:** Modify the legal case input URLs to scrape different state or federal case databases.
- **Customize LLM extraction:** Modify the prompt to extract specific fields (case number, plaintiff, case summary, outcome, legal precedents, etc.) and add a summarization step if needed.
- **Enhance loop handling:** Integrate with a Google Sheet or API to dynamically fetch case URLs, and add error-handling logic to skip failed cases and log them.
- **Improve security & compliance:** Redact sensitive information before sending it via the webhook, and store processed case data in encrypted cloud storage.
- **Output formats:** Save as PDF, JSON, or Markdown, and enable output to cloud storage (S3, Google Drive) or legal document management systems.
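For reference, a typical STDIO MCP client configuration for the Bright Data server looks roughly like this. All values are placeholders, and the zone variable is an assumption — only set it if your Web Unlocker zone name differs from the server's default:

```json
{
  "command": "npx",
  "args": ["@brightdata/mcp"],
  "env": {
    "API_TOKEN": "<your-bright-data-api-token>",
    "WEB_UNLOCKER_ZONE": "mcp_unlocker"
  }
}
```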
by Mohammad Ghaffarifar
This template creates a Telegram AI Assistant that answers questions based on your documents, powered by Google Gemini and Supabase. Key features include intelligent HTML post-processing for rich formatting in Telegram and adaptive message chunking to handle long text responses.

**📹 Watch the Bot in Action**
A live demo on YouTube shows the bot's core features and how it interacts, with a quick walkthrough of its capabilities and user flow.

**How it works:**
1. A user uploads a PDF document to the Telegram bot.
2. The workflow processes the PDF, creates embeddings using Google Gemini, and stores these embeddings in a Supabase vector table.
3. Users then ask the bot questions.
4. The workflow performs a vector search in Supabase to find the document chunks most relevant to the user's query.
5. Google Gemini uses the retrieved chunks to generate an intelligent answer.
6. The bot sends the formatted answer back to the user on Telegram, using HTML markup for enhanced presentation.

**Set up steps:**
Setup should take approximately 15–20 minutes.
1. Import the workflow into your n8n instance.
2. Configure credentials for Telegram, Google Gemini, and Supabase.
3. Set up your Supabase vector table using the provided SQL script (a representative schema is sketched below).
4. Activate the workflow.

Detailed setup instructions, including how to get API keys and configure nodes, are available in the sticky notes within the workflow itself.
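The template ships its own SQL script, which you should use as-is; purely as a rough idea of what such a pgvector-backed table and similarity-search function look like, a common Supabase setup follows. The 768 dimension assumes a Gemini embedding model such as text-embedding-004 — match it to the script that comes with the template:

```sql
-- Minimal sketch of a Supabase vector store for document chunks.
create extension if not exists vector;

create table documents (
  id bigserial primary key,
  content text,          -- chunk text
  metadata jsonb,        -- source file, page, etc.
  embedding vector(768)  -- dimension must match your embedding model
);

-- Cosine-similarity search used by the vector store node.
create function match_documents (
  query_embedding vector(768),
  match_count int default null,
  filter jsonb default '{}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language plpgsql
as $$
begin
  return query
  select
    documents.id,
    documents.content,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where documents.metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```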