by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector — fully automated through n8n.

⚙️ What It Does

This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI's text-embedding-3-small model. These embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents. After successful processing, files are moved to a separate "vectorized" folder to avoid duplication.

💡 Use Cases

- Powering Retrieval-Augmented Generation (RAG) AI agents
- Semantic search across private documents
- AI assistant knowledge ingestion
- Automated document pipelines for indexing or classification

🧠 Workflow Highlights

- **Trigger Options**: Manual or Scheduled (3 AM daily by default)
- **Supported File Types**: PDF, TXT, JSON
- **Embedding Stack**: LangChain Text Splitter, OpenAI Embeddings, PGVector
- **Deduplication**: Files are moved after processing
- **License**: CC BY-NC-SA 4.0
- **Author**: AlexK1919

🛠 What You'll Need

- **Google Drive OAuth2** credentials (connected to the Search Folder, Download File, and Move File nodes)
- **OpenAI API Key** (used in the Embeddings OpenAI node)
- **Postgres + PGVector** database (connected in the Postgres PGVector Store node)

🔧 Step-by-Step Setup Instructions

1. Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
2. Set your source folder ID in the Search Folder node — this is where incoming files are placed.
3. Set your processed folder ID in the Move File node — files will be moved here after vectorization.
4. Ensure you have a PGVector-enabled Postgres instance and enter the table name and collection in the Postgres PGVector Store node.
5. Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
6. Optional: Activate the Schedule Trigger node to run daily, or configure your own schedule.
7. Run manually by triggering "When clicking 'Test workflow'" for on-demand ingestion.

🧩 Customization Tips

Want to support more file types or enhance the pipeline?

- **Add new extractors**: Use Extract from File with other formats like DOCX, Markdown, or HTML.
- **Refine logic by file type**: The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json); a sketch of this routing appears at the end of this section.
- **Pre-process with OCR**: Add an OCR step before extraction to handle scanned PDFs or images.
- **Add filters**: Enhance the Search Folder or Switch node logic to skip specific files or folders.

📄 License

This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
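As a concrete illustration of the MIME-type routing mentioned above, here is a minimal Code-node-style sketch. It is an assumption-based approximation rather than the template's actual Switch node configuration; the mimeType field name follows Google Drive's file metadata.

```javascript
// Route each downloaded file to the matching extractor based on MIME type.
// Assumes upstream items carry a `mimeType` field from Google Drive.
const routes = {
  'application/pdf': 'pdf',
  'text/plain': 'txt',
  'application/json': 'json',
};

return $input.all().map((item) => ({
  json: {
    ...item.json,
    // Unknown types fall through to 'skip' so they are never vectorized.
    route: routes[item.json.mimeType] ?? 'skip',
  },
}));
```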
by Amjid Ali
Automate Digital Delivery After PayPal Purchase Using n8n

A Complete Step-by-Step Guide to Seamless Template Delivery, built by Amjid Ali – SyncBricks. Deliver personalized files instantly after PayPal transactions using n8n – without writing a single line of backend code.

🚀 What This n8n Workflow Does

This automation template automatically delivers a digital product (such as an n8n template or JSON file) to customers who pay via PayPal — within seconds. You can:

- Automatically extract customer info
- Identify what was purchased
- Send a clean, branded email with the product file
- Promote your other courses, books, and tools

📦 Use Case Example

Product: AI-Powered Social Media Content Generator & Publisher. When a customer buys this product through PayPal, this automation:

- Listens for a successful payment event
- Fetches order details via API
- Sends an HTML email with the template attached
- Promotes your other offerings with embedded links

🔧 Prerequisites

You'll need:

- An n8n instance (self-hosted or n8n Cloud)
- A PayPal developer account
- PayPal OAuth2 credentials configured in n8n
- Your product hosted as a downloadable .json file (Oracle, Dropbox, GitHub, etc.)
- SMTP email credentials in n8n

🧠 Step-by-Step Setup

1. **Webhook** (Webhook node): Listens for a POST request from PayPal's webhook for PAYMENT.CAPTURE.COMPLETED events. 📌 Add the webhook to your PayPal Developer App > Webhooks.
2. **Wait** (Wait node): Adds a brief delay to ensure the payment is completely processed before continuing.
3. **Filter Event Type** (Switch node): Continues only when the event is PAYMENT.CAPTURE.COMPLETED.
4. **Fetch Order Details** (HTTP Request node): Retrieves the order information from PayPal's Orders API. URL format: https://api.paypal.com/v2/checkout/orders/{{ order_id }}
5. **Extract Email & Product Info** (Set node): Extracts first name, last name, email address, and the purchased item name.
6. **Identify Product Purchased** (Switch node): Checks whether the product is "AI-Powered Social Media Content Generator & Publisher".
7. **Download Workflow File** (HTTP Request node): Fetches the hosted workflow JSON from object storage (Oracle in this case).
8. **Convert to Downloadable File** (Code node): Converts the JSON content into a binary file and attaches it (see the sketch at the end of this section).
9. **Send Custom Email** (Send Email node): Sends a rich HTML email to the buyer with their name, the file attachment, the product name, and helpful resource links:
   - 📘 Mastering n8n Course on Udemy
   - 📖 Step-by-Step Guide (n8n Book)
   - 🎓 n8n Video Tutorials (Free Course)
   - ☁️ Sign up for n8n Cloud – Use code AMJID10
   - 🎥 YouTube Video Walkthrough

📚 Additional Learning Resources

🚀 My Full Automation Suite — explore more and master n8n with these resources:

- 🎓 Mastering n8n (Full Udemy Course)
- 📕 Get Your Step-by-Step Guide (n8n Book)
- 🎥 Get Step-by-Step Tutorials (Video Course)
- ☁️ Sign up for n8n Cloud
- 💡 Templates, Tools, and More
- 📺 YouTube Channel – SyncBricks

🙋 Need Help or Customization?

Reach out!
Email: amjid@amjidali.com
LinkedIn: linkedin.com/in/amjidali
Website: syncbricks.com
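For orientation, step 8 (Convert to Downloadable File) can be as small as a few lines in a Code node. The following is a sketch under the assumption that the previous HTTP Request node returned the template JSON in the item's json field; the file name is a placeholder.

```javascript
// Turn the fetched workflow JSON into a binary attachment for the email node.
const workflowJson = JSON.stringify($input.first().json, null, 2);

return [{
  json: { fileName: 'workflow-template.json' }, // placeholder name
  binary: {
    // The Send Email node can attach this via the binary property "data".
    data: await this.helpers.prepareBinaryData(
      Buffer.from(workflowJson, 'utf8'),
      'workflow-template.json',
      'application/json',
    ),
  },
}];
```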
by Xavier
This workflow creates nested Google Drive folders from a path string (like Projects/Clients/Reports). It automatically handles the folder lookups and creation steps required by Google Drive, then outputs the final folder's ID for immediate use.

How it works

This workflow streamlines the creation of nested folders in Google Drive:

1. **Input**: Provide a root_folder_id and a path (e.g., Projects/Clients/Reports) as input.
2. **Path Parsing**: The workflow splits the path into individual folder names (based on the / separator).
3. **Iterative Check & Create**: Loops through each part of your path:
   - Searches within the current parent folder (starting with the root_folder_id) for a subfolder matching the name.
   - If found: Retrieves the existing folder's ID to use as the parent for the next iteration.
   - If not found: Creates a new folder with that name inside the current parent folder and uses the new folder's ID as the parent for the next iteration.
4. **Output**: Returns the Google Drive folder ID of the last folder in the specified path (e.g., the ID for Reports in the example above). This ID can then be used directly in subsequent n8n nodes to upload files, create documents, or perform other actions within that specific folder.

Set up steps

Setting up this workflow requires configuring the connection to Google Drive and knowing where to start creating folders:

1. **Connect Google Drive Account**: Ensure you have a Google Drive credential configured in your n8n instance, then link your credentials in the workflow: there are 2 Google Drive nodes that need to be updated.
2. **Identify Starting Folder ID**: Determine the Google Drive folder ID where your nested structure should begin. You can use either the root of your Google Drive or a specific folder:
   - To use the root of Google Drive, simply set root_folder_id to root (also called "My Drive" in the UI).
   - To use a specific folder, open the folder in a web browser and look at the URL. The folder ID is the last part of the URL: https://drive.google.com/drive/folders/THIS_IS_THE_FOLDER_ID
3. **Prepare Inputs for Execution**: When running the workflow (or triggering it), you will need to provide:
   - google_drive_folder_id -> the root folder ID you identified in step 2.
   - desired_path -> the path you want to create (e.g., Projects/Clients/Reports).

You can call this workflow from your other workflows by passing both fields as inputs; a sketch of the core check-and-create loop follows below.
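To make the loop concrete, here is a Code-node-style sketch of the check-and-create logic. searchFolder and createFolder are hypothetical helpers standing in for the workflow's two Google Drive nodes, so treat this as pseudologic rather than a drop-in replacement.

```javascript
// Walk the path one segment at a time, reusing existing folders and
// creating missing ones. `searchFolder`/`createFolder` are hypothetical
// stand-ins for the workflow's Google Drive Search and Create nodes.
const { google_drive_folder_id, desired_path } = $input.first().json;

let parentId = google_drive_folder_id; // e.g. 'root' or a specific folder ID
for (const name of desired_path.split('/').filter(Boolean)) {
  const existing = await searchFolder(parentId, name);
  parentId = existing ? existing.id : (await createFolder(parentId, name)).id;
}

// The ID of the deepest folder (e.g. "Reports") for downstream nodes.
return [{ json: { folder_id: parentId } }];
```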
by Luciano Gutierrez
Instagram Auto-Comment Responder with AI Agent Integration

Version: 1.1.0 ‧ n8n Version: 1.88.0+ ‧ License: MIT

A fully automated workflow for managing and responding to Instagram comments using AI agents. Designed to improve engagement and save time, this system listens for new Instagram comments, verifies and filters them, fetches relevant post data, processes valid messages with a natural-language AI, and posts context-aware replies directly on the original post.

Key Features

- 💬 **AI-Driven Engagement**: Intelligent responses to comments via a GPT-powered agent.
- ✅ **Webhook Verification**: Handles the Instagram webhook handshake to ensure secure integration (see the sketch at the end of this section).
- 📦 **Data Extraction**: Maps incoming payload fields (user ID, username, message text, media ID) for processing.
- 🚫 **Self-Comment Filtering**: Automatically skips comments made by the account owner to prevent loops.
- 📡 **Post Data Retrieval**: Fetches the media's id and caption from the Graph API (v22.0) before generating a reply.
- 🧠 **Natural Language Processing**: Uses a custom system prompt to maintain brand tone and context.
- 🔁 **Automated Replies**: Posts the AI-generated message back to the comment thread using Instagram's API.
- 🧩 **Modular Architecture**: Clear separation of steps via sticky notes and dedicated HTTP Request and Agent nodes.

Use Cases

- **Social Media Automation**: Keep followers engaged 24/7 with instant, relevant replies.
- **Community Building**: Maintain a consistent voice and tone across all interactions.
- **Brand Reputation Management**: Ensure no valid comment goes unanswered.
- **AI Customer Support**: Triage simple questions and direct followers to resources or support.

Technical Implementation

1. **Webhook Verification** (Webhook + Respond to Webhook): Echoes hub.challenge to confirm the subscription and secure incoming events.
2. **Data Extraction** (Set): Maps payload fields into structured variables: conta.id, usuario.id, usuario.name, usuario.message.id, usuario.message.text, usuario.media.id, endpoint.
3. **User Validation** (Filter): Skips processing if conta.id equals usuario.id (self-comments).
4. **Post Data Retrieval** (HTTP Request, Get post data): GET https://graph.instagram.com/v22.0/{{ $json.usuario.media.id }}?fields=id,caption&access_token={{ credentials }} captures the media's caption for richer context in replies.
5. **AI Response Generation** (AI Agent + OpenRouter Chat Model): Uses a detailed system prompt with a profile persona (expert in AI & automations, friendly tone), input data (username, comment text, post caption), and filtering logic (spam, praise, questions, vague comments). Returns either the reply text or [IGNORE] for irrelevant content.
6. **Posting the Reply** (HTTP Request, Post comment): POST {{ $json.endpoint }}/{{ $json.usuario.message.id }}/replies with message={{ $json.output }} sends the AI answer back under the original comment.

Instructions for Setup

1. **Import Workflow**: In n8n > Workflows > Import from File, upload the provided .json template.
2. **Configure Credentials**: Instagram Graph API (Header Auth or FacebookGraphApi) with the instagram_basic and instagram_manage_comments scopes, plus an OpenRouter/OpenAI API key for the AI agent.
3. **Customize System Prompt**: Edit the AI Agent's prompt to adjust brand tone, language (Brazilian Portuguese), length, or emoji usage.
4. **Test & Activate**: Publish a test comment on an Instagram post. Verify each node's execution, ensuring the webhook, filter, data extraction, HTTP requests, and AI Agent respond as expected.
5. **Extend & Monitor**: Add sentiment analysis or lead capture nodes as needed. Monitor execution logs for errors or rate-limit events.

Tags: Social Media • Instagram Automation • Webhook Verification • AI Agent • HTTP Request • Auto Reply • Community Management
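As referenced in the Key Features above, the webhook handshake boils down to echoing hub.challenge. A minimal Code-node-style sketch (the hub.* query fields follow Meta's webhook verification flow; the verify token value is an assumption you must set yourself):

```javascript
// Echo hub.challenge back to Meta so the webhook subscription is verified.
const query = $input.first().json.query ?? {};
const VERIFY_TOKEN = 'my-secret-token'; // assumption: use your own value

if (query['hub.mode'] === 'subscribe' && query['hub.verify_token'] === VERIFY_TOKEN) {
  // Instagram expects the raw challenge string as the response body.
  return [{ json: { responseBody: query['hub.challenge'] } }];
}
return [{ json: { responseBody: 'verification failed', statusCode: 403 } }];
```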
by Ranjan Dailata
Who this is for?

Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:

- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers and needing summarized company insights.
- Market researchers and analysts monitoring competitor or industry players.

What problem is this workflow solving?

Searching and evaluating company profiles on Indeed manually is time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights. This workflow solves this by:

- Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

What this workflow does

This automated pipeline does the following:

1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker (see the sketch at the end of this section).
2. Transforms the scraped content into structured JSON using n8n's built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized, formatted response to a specified webhook or app for real-time access, storage, or analysis.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the search query and the Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a recruiter, market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Changing the data source**: Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- **Refining the LLM prompt**: Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- **Routing the output to different destinations**: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
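For reference, the Web Unlocker step is a single authenticated POST. A sketch of the equivalent call from a Code node; zone name, token, and target URL are placeholders, and the request shape should be checked against Bright Data's current API docs:

```javascript
// Fetch an Indeed company page through Bright Data's Web Unlocker.
// Zone, token, and URL below are placeholders.
const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.brightdata.com/request',
  headers: {
    Authorization: 'Bearer YOUR_BRIGHT_DATA_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',
    url: 'https://www.indeed.com/cmp/example-company',
    format: 'raw', // raw HTML back, ready for cleanup and Gemini
  }),
});

return [{ json: { html: response } }];
```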
by Yang
👥 Who is this for?

This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate the extraction and delivery of clean, sorted data.

🧩 What problem is this solving?

Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email—all triggered automatically when a new URL is added to Google Sheets.

⚙️ What this workflow does

This template automates an entire content scraping and delivery process:

1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order based on price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail

🛠️ Setup

1. **Google Sheets**
   - Create a sheet titled something like URLs
   - Add your product listing URLs (e.g., http://books.toscrape.com)
   - Connect the Google Sheets trigger node to your sheet
   - Ensure you have the proper credentials connected
2. **Dumpling AI**
   - Create an account at Dumpling AI and generate your API key
   - Set the HTTP method to POST and pass the URL dynamically from the Google Sheet
   - Use Header Auth to include your API key in the request header
   - Make sure "cleaned": "True" is included in the body for optimized HTML output
3. **HTML nodes**
   - The first HTML node extracts the main book container blocks using: .row > li
   - The second HTML node parses out the individual fields — title: h3 > a (via the title attribute); price: .price_color
4. **Sort node**
   - Sorts books by price in descending order
   - Note: the price is extracted as a string; make sure it is parsable if you plan to use numeric filtering later (see the sketch at the end of this section)
5. **Convert to CSV**
   - The JSON data is passed into a Convert node and transformed into a CSV file
6. **Gmail**
   - Sends the CSV as an attachment to a designated email

🔄 How to customize this workflow

- **Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links
- **Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform
- **Adjust sorting**: Sort alphabetically or based on another extracted value
- **Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page
- **Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets

⚠️ Dependencies and notes

- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
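Since the scraped price arrives as a string such as "£51.77", a small Code node between extraction and sorting can make it numeric, as noted above. A minimal sketch, assuming the extracted field is named price:

```javascript
// Convert scraped price strings (e.g. "£51.77") into numbers so they can
// be sorted numerically, then order descending as the workflow does.
const books = $input.all().map((item) => ({
  json: {
    ...item.json,
    priceNumeric: parseFloat(String(item.json.price).replace(/[^\d.]/g, '')),
  },
}));

books.sort((a, b) => b.json.priceNumeric - a.json.priceNumeric);
return books;
```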
by GYANENDRA DWIVEDI
🚀 WhatsApp Automation Template
Designed & Developed by Infridet Solutions Private Limited

🔧 Objective: Automate your lead nurturing and sales process from YouTube/Instagram → Landing Page → CRM → Email → WhatsApp → Sales → Deal Closure using tools like:

- 🌐 WordPress (Landing Page + Fluent Forms)
- 🧾 Google Sheets (Backup Log)
- 📩 FluentCRM (Lead Tagging + Email Sequences)
- 💬 Whinta.com (WhatsApp Messaging API)
- ⚙️ N8N (Workflow Automation Engine)

🧩 System Flow Overview:

1. Lead Source: YouTube or Instagram CTA
2. Landing Page: Built on WordPress with a story-driven design
3. Form Capture: Fluent Forms with dynamic input fields
4. Data Sync: Backup to Google Sheets; push lead to FluentCRM and tag as New Lead
5. Email Sequence: Warm-up emails (1 to 5) introduce the offer or service
6. WhatsApp Outreach: Send a personalized message via Whinta, triggered 1 hour after form fill or the last email
7. Sales Follow-Up: Sales team handles replies manually; CRM tag updated to Customer upon closing

📁 Folder Structure (Optional Git/Zip File):

```
📦 WhatsApp-Automation-Infridet/
│
├── whatsapp-automation-n8n.json   # N8N Flowchart Import File
├── email-templates.docx           # Warm-up Email Scripts
├── whinta-api-integration.pdf     # API Documentation
├── crm-tagging-notes.txt          # CRM Tag Setup Details
└── readme.md                      # This Instruction File
```

🛠️ Required Integrations & Setup

✅ Fluent Forms (WordPress)
- Embed a form with Name, Email, Phone
- Enable a webhook to N8N: /lead-capture

✅ Google Sheets
- Use the n8n-nodes-base.googleSheets node
- Capture name, email, phone, source, timestamp

✅ FluentCRM
- REST API enabled
- Push the contact and assign the tag New Lead
- Set up email automation via tag trigger

✅ SMTP Email (Optional)
- Use Gmail SMTP or Brevo
- Trigger an email on form submission

✅ Whinta.com (WhatsApp API)
- Send a POST request; the payload includes phone, message, sender_id (see the sketch at the end of this section)
- Customize the message with personalization

💬 Sample WhatsApp Message:

> Hey {{name}}, Gyan here from Account Craft 👋 I saw your form submission – would you like help in starting your YouTube journey this week? Let me know. I'm just one text away. ✅

📧 Sample Email (Warmup Day 1):

> Subject: Welcome to Account Craft 🚀
>
> Hi {{name}},
>
> I'm Gyan from Account Craft. Thanks for joining us!
> Here's what's coming next: exclusive videos, personalized tips, and real support to get your YouTube channel earning.
>
> Let's go!
> – Gyan

🔁 CRM Tag Updates:

| Action            | Tag Assigned |
|-------------------|--------------|
| On form fill      | New Lead     |
| After WhatsApp    | Engaged      |
| After sale closed | Customer     |

📌 Final Output: Once completed, the system will:

- Log all leads into a database
- Automatically send emails and WhatsApp messages
- Notify your sales team
- Update lead status without manual entry

> Automation Template Designed & Deployed by
> Infridet Solutions Private Limited
> Smart Integrations. Seamless Business.
> 🌐 www.infridetsolutions.com | 📞 +91-8853354829
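As flagged in the Whinta setup item above, the outreach call is a plain POST. The sketch below is illustrative only: the endpoint path and field names are assumptions; follow whinta-api-integration.pdf for the real contract.

```javascript
// Send the personalized WhatsApp message via Whinta's API.
// Endpoint URL and payload shape are assumptions, not the documented API.
const lead = $input.first().json;

await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.whinta.com/v1/send', // placeholder endpoint
  headers: {
    Authorization: 'Bearer YOUR_WHINTA_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    phone: lead.phone,
    sender_id: 'YOUR_SENDER_ID',
    message: `Hey ${lead.name}, Gyan here from Account Craft 👋 ...`,
  }),
});

return [{ json: { status: 'sent', phone: lead.phone } }];
```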
by Paulo Ramirez
Upload your CRM contacts to telli and schedule AI voice-agent calls

Introduction to telli and AI Voice-Agent Calls

telli is an innovative platform that provides AI-powered voice agents capable of making calls and performing tasks tailored to specific customer use cases. These AI voice-agents can handle a wide range of communication tasks, from appointment scheduling to customer support, with remarkable efficiency and natural conversation flow.

This template is designed for businesses and organizations looking to automate their outbound calling processes using telli's AI voice-agents in conjunction with Airtable as their CRM. It solves the problem of manual call scheduling and data transfer between your CRM and calling system, saving time and reducing human error.

Prerequisites

- telli account
- Airtable base with contact information
- n8n instance

Step-by-Step Setup Guide

1. **n8n Setup**: Create a new workflow in n8n and add the Airtable node to connect to your CRM table.
2. **telli API Configuration**: Log in to your telli dashboard, then locate and copy your API key under telli - Settings - API/Webhooks.
3. **Workflow Configuration**: Add two HTTP Request nodes to your n8n workflow. Set the "Authorization" header in both POST requests, replacing the value with your telli API key. Configure the first request to use the /add-contact endpoint and the second to use the /schedule-call endpoint.
4. **Data Mapping**: Map the relevant fields from your Airtable node to the telli API requests.
5. **Testing and Activation**: Run a test execution of your workflow. Once satisfied with the results, activate the workflow.

API Endpoint Details

Add Contact Endpoint

- **URL**: https://api.telli.com/v1/add-contact
- **Method**: POST
- **Headers**: Authorization: YOUR-API-KEY, Content-Type: application/json
- **Payload**:

```json
{
  "external_contact_id": "string",
  "salutation": "string",
  "first_name": "string",
  "last_name": "string",
  "phone_number": "string",
  "email": "jsmith@example.com",
  "contact_details": {},
  "timezone": "string"
}
```

Schedule Call Endpoint

- **URL**: https://api.telli.com/v1/schedule-call
- **Method**: POST
- **Headers**: Authorization: YOUR-API-KEY, Content-Type: application/json
- **Payload**:

```json
{
  "contact_id": TELLI-CONTACT-ID,
  "agent_id": "string",
  "max_retry_days": 123,
  "call_details": {
    "message": "Hello, this is your friendly reminder!",
    "questions": [
      {
        "fieldName": "email",
        "neededInformation": "email of the customer",
        "exampleQuestion": "What is your email address?",
        "responseFormat": "email string"
      }
    ]
  },
  "override_from_number": "string"
}
```

Use Cases

This template is versatile and can be applied to various scenarios, including:

- **Lead Qualification**: Automatically schedule calls to new leads entered in your CRM.
- **Appointment Reminders**: Set up calls to remind clients of upcoming appointments.
- **Customer Feedback**: Schedule follow-up calls after product deliveries or service completions.

Uploading Multiple Contacts

For bulk operations, you have two options:

1. **Loop Node**: Include a Loop node in your n8n workflow to process multiple contacts sequentially.
2. **Batch Endpoints**: Instead of /add-contact and /schedule-call, use telli's batch endpoints:
   - /add-contacts-batch: Add multiple contacts within an array.
   - /schedule-calls-batch: Schedule multiple calls at once.
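When feeding the batch endpoint from Airtable, a small Code node can assemble the payload. A sketch; the Airtable column names are assumptions, so adjust them to your base's schema:

```javascript
// Map Airtable rows into the body expected by /add-contacts-batch.
// Column names ("First Name", "Last Name", "Phone", "Email") are assumptions.
const contacts = $input.all().map((item) => ({
  external_contact_id: String(item.json.id ?? ''),
  first_name: item.json['First Name'],
  last_name: item.json['Last Name'],
  phone_number: item.json['Phone'],
  email: item.json['Email'],
}));

return [{ json: { contacts } }];
```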
Example of batch endpoint usage:

```json
{
  "contacts": [
    { "name": "John Doe", "phone": "+1234567890" },
    { "name": "Jane Smith", "phone": "+1987654321" }
  ]
}
```

By leveraging this template, you can seamlessly integrate your Airtable CRM with telli's powerful AI voice-agents, automating your outbound calling process and enhancing your customer communication strategy.
by Mario
Dynamically switch between LLMs for AI Agents using LangChain Code

Purpose

This example workflow demonstrates a way to connect multiple LLMs to a single AI Agent/LangChain node and programmatically use one – or, in this case, loop through them.

What it does

This AI workflow takes in customer complaints and generates a response that is validated before being returned. If the answer was not satisfactory, the response is generated again with a more capable model.

How it works

- A LangChain Code node allows multiple LLMs to be connected to a single Basic LLM Chain. On every call only one LLM is actually connected to the Basic LLM Chain, determined by an index defined in a previous node (see the sketch at the end of this section).
- The AI output is later validated by a Sentiment Analysis node.
- If the result was not satisfactory, it loops back to the beginning and executes the same query with the next available LLM.
- The loop ends either when the result passes the requirements or when all LLMs have been used.

Setup

Clone the workflow and select the corresponding credentials. You'll need an OpenAI account; alternatively, you can swap the LLM nodes with ones from a different provider, like Anthropic, after the import.

How to use

Beware that the order of the LLMs is determined by the order in which they were added to the workflow, not by their position on the canvas.

After cloning this workflow into your environment, open the chat and send this example message:

> I really love waiting two weeks just to get a keyboard that doesn't even work. Great job. Any chance I could actually use the thing I paid for sometime this month?

Most likely you will see that the first validation fails, causing it to loop back to the generation node and try again with the next available LLM. Since AI responses are unpredictable, the results and number of tries will differ for each run.

Disclaimer

Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code node.
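As mentioned in "How it works", the trick lives in the LangChain Code node: all connected LLMs arrive as connection data, and the code hands exactly one of them on. A sketch of the supply-data code; helper and upstream node names are approximations, not the template's exact contents:

```javascript
// LangChain Code node: pick one of the connected LLMs by index.
// The index is assumed to come from an upstream Set node named "Set Index".
const index = $('Set Index').first().json.index;
const models = await this.getInputConnectionData('ai_languageModel', 0);

// With several models wired in, the connection data is an array whose order
// follows the order the nodes were added to the workflow (not canvas position).
const list = Array.isArray(models) ? models : [models];
return list[index % list.length];
```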
by Sean Lon
Target Audience

This workflow is perfect for internal talent acquisition teams, recruitment agencies, HR professionals, and hiring managers seeking to bulk-automate the initial screening of CVs and resumes — e.g., automatically getting each candidate's shortlist/reject result with its score and rationale. By eliminating manual evaluation and screening, you get a smart AI agent providing a standardized, efficient, and scalable solution for handling large volumes of applications. With bulk automation, you can focus on strategic decision-making rather than tedious screening tasks, ensuring a faster, more accurate, and fairer hiring process.

Key focus

This workflow focuses on organized file/folder management, trackable candidate CVs, a maintainable job description, and an autonomous AI agent:

- **Organized Folder-File Structure** – CVs are automatically categorized based on their status, ensuring a structured workflow and easy retrieval.
- **Candidate Tracker** – A real-time tracking system records the state of each CV, allowing recruiters to monitor shortlisted, rejected, or KIV (Keep in View) candidates.
- **AI Agent for Decision Automation** – The AI autonomously orchestrates screening decisions, replacing manual LLM configurations with dynamic AI-driven evaluations for scalability and accuracy.
- **Maintainable Job Description Management** – A structured job description file ensures continuous updates, keeping hiring criteria flexible and aligned with recruitment needs.
- **Email Notifications** – The system automatically sends receipt confirmations upon processing completion, providing timely updates to recruiters.

Features - Workflow

Automated Resume Screening Workflow: this workflow leverages Groq Llama4 for intelligent resume analysis, speeding up the screening process by generating a matching score, a result (shortlisted/rejected/KIV), and key insights/rationale on each candidate's suitability for the provided job description.

Step-by-step process:

1. **Monitor Google Drive**: Listens for new CV resumes in Google Drive.
2. **Retrieve Resume**: Downloads the CV resumes from Google Drive.
3. **Extract Resume Data**: Extracts text content from the CV resume PDF files.
4. **Extract Job Description Data**: Extracts text content from the job description.
5. **Analyze with Groq**:
   - Generate a matching score based on job requirements. [SCORE: 1-10]
   - Provide a decision on job suitability. [SHORTLISTED/REJECTED/KIV]
   - Provide actionable insights into job suitability. [REASON]

This ensures a fast, efficient, and accurate screening process, eliminating manual evaluation. A sketch of how the model's reply can be parsed into tracker columns follows at the end of this section.

Setup Guide

Step-by-step instructions:

1. Ensure all credentials are ready and set up (Groq, Google Drive, Gmail, Google Sheets, Google Docs). Refer to the official n8n documentation on node setup, and see the setup notes.
2. Folder & file setup:
   - Create a Google Drive folder (view directory example).
   - Create a job description (view file example).
   - Configure a tracker with the columns Candidate Name, AI Score, AI Verdict, AI Reason (view file example).
3. Customize the email conversation reports as you like.

You are ready to go!
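As noted above, the model's verdict comes back as text in the [SCORE]/[SHORTLISTED/REJECTED/KIV]/[REASON] shape. A sketch of a Code node that parses it into the tracker's columns; it assumes the reply sits in an output field and roughly follows that format:

```javascript
// Parse the Groq reply into tracker columns (AI Score, AI Verdict, AI Reason).
// Assumes the model answers with "SCORE: n", a verdict keyword, and "REASON: ...".
const text = $input.first().json.output ?? '';

const score = Number((text.match(/SCORE:\s*(\d+)/i) ?? [])[1] ?? 0);
const verdict = ((text.match(/\b(SHORTLISTED|REJECTED|KIV)\b/i) ?? ['', 'KIV'])[1]).toUpperCase();
const reason = ((text.match(/REASON:\s*([\s\S]+)/i) ?? ['', ''])[1]).trim();

return [{ json: { aiScore: score, aiVerdict: verdict, aiReason: reason } }];
```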
by Mark de Jonge
About the workflow

The workflow reads every reply received from a cold email campaign and qualifies whether the lead is interested in a meeting. If the lead is interested, a deal is created in Pipedrive. You can add as many email inboxes as you need!

Setup:

1. Add credentials to the Gmail, OpenAI, and Pipedrive nodes.
2. Add an in_campaign field in Pipedrive for persons: in Pipedrive, click on your credentials at the top right, go to Company settings > Data fields > Person, and click on "Add custom field". Single option [TRUE/FALSE].
3. If you have only one email inbox, you can delete one of the Gmail nodes. If you have more than two email inboxes, you can duplicate a Gmail node as many times as you like. Just connect it to the Get email node, and you are good to go!
4. In the Gmail inbox nodes, select Inbox under "Label names" and uncheck "Simplify".
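At its core, the qualification step is a single classification prompt to the OpenAI node. The wording below is illustrative, not the template's actual prompt; a minimal sketch of a Code node that prepares it from the fetched reply:

```javascript
// Build a classification prompt from the reply email for the OpenAI node.
// Field names (from, text) and the prompt wording are assumptions.
const reply = $input.first().json;

return [{
  json: {
    prompt:
      'You qualify replies to a cold email campaign. Answer with exactly one word: ' +
      'INTERESTED (the lead wants a meeting) or NOT_INTERESTED.\n\n' +
      `Reply from ${reply.from ?? 'unknown sender'}:\n${reply.text ?? ''}`,
  },
}];
```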
by Ranjan Dailata
Who this is for?

The Automate Etsy Data Mining with Bright Data Scrape & Google Gemini workflow is designed for eCommerce analysts, product researchers, and AI developers seeking to extract actionable insights from Etsy listings at scale. It is ideal for:

- **eCommerce Entrepreneurs** - Researching product demand and competition.
- **Market Analysts** - Tracking pricing, reviews, and trends across Etsy categories.
- **Product Managers** - Identifying niche opportunities and design inspirations.
- **Data Scientists & AI Engineers** - Automating product intelligence pipelines.
- **Growth Hackers** - Leveraging Etsy insights to refine product-market fit.

What problem is this workflow solving?

Manually browsing Etsy to analyze product listings, pricing, reviews, and seller activity is slow, inconsistent, and unscalable. Scraping Etsy requires unlocking JavaScript-heavy content and structuring noisy data for analysis. This workflow solves this with:

- Automated and scalable scraping of Etsy product listings using Bright Data's infrastructure.
- Fully paginated, structured Etsy product data extraction via the Google Gemini LLM.
- Faster decision-making for product research and competitive analysis via fully automated, paginated data extraction.

What this workflow does

1. Receives input: sets the Etsy URL for data extraction and analysis.
2. Uses Bright Data's Web Unlocker to extract content from the relevant pages (a pagination sketch follows at the end of this section).
3. Cleans and preprocesses the scraped content for readability.
4. Sends the content to Google Gemini for structured extraction and enrichment.
5. Persists the enriched results to disk.
6. Sends the response to a target system via webhook notification.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set Etsy Search Query node with the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- **Input Sources**: Replace the static URL with dynamic input from Google Sheets, Webhook, or Airtable to research multiple niches.
- **Prompt Customization**: Adjust the Gemini prompts to extract specific insights, for example: list key features of the product, or summarize review themes.
- **Data Output Options**: Update the webhook notification to save data to Google Sheets, Notion or Airtable, SQL/NoSQL databases, or Slack/Email.
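To illustrate the paginated extraction, here is a Code-node-style sketch that loops the Web Unlocker call over successive result pages. The page parameter and the stop condition are assumptions about Etsy's listing URLs, and the zone/token values are placeholders:

```javascript
// Fetch successive Etsy result pages through Web Unlocker until a page
// looks empty. URL shape and stop condition are assumptions.
const baseUrl = $input.first().json.searchUrl;
const pages = [];

for (let page = 1; page <= 10; page++) {
  const html = await this.helpers.httpRequest({
    method: 'POST',
    url: 'https://api.brightdata.com/request',
    headers: {
      Authorization: 'Bearer YOUR_BRIGHT_DATA_TOKEN',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone: 'your_web_unlocker_zone',
      url: `${baseUrl}?page=${page}`,
      format: 'raw',
    }),
  });
  if (!html || !String(html).includes('listing')) break; // crude "no more results" check
  pages.push({ json: { page, html } });
}

return pages;
```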