by Mark Shcherbakov
Video Guide
I prepared a detailed guide that shows the whole process of building a resume analyzer.

Who is this for?
This workflow is ideal for recruitment agencies, HR professionals, and hiring managers looking to automate the initial screening of CVs. It is especially useful for organizations handling large volumes of applications and seeking to streamline their recruitment process.

What problem does this workflow solve?
Manually screening resumes is time-consuming and prone to human error. This workflow automates the process, providing consistent and objective analysis of CVs against job descriptions. It helps filter out unsuitable candidates early, reducing workload and improving the overall efficiency of the recruitment process.

What this workflow does
This workflow automates the resume screening process using OpenAI for analysis. It provides a matching score, a summary of candidate suitability, and key insights into why the candidate fits (or doesn't fit) the job.
- Retrieve Resume: The workflow downloads CVs from a direct link (e.g., Supabase storage or Dropbox).
- Extract Data: Extracts text from PDF or DOC files for analysis.
- Analyze with OpenAI: Sends the extracted data and job description to OpenAI to:
  - Generate a matching score.
  - Summarize candidate strengths and weaknesses.
  - Provide actionable insights into their suitability for the job.

Setup
Preparation
- Create Accounts:
  - n8n: For workflow automation.
  - OpenAI: For AI-powered CV analysis.
- Get CV Link: Upload CV files to Supabase storage or Dropbox to generate a direct link for processing.
- Prepare Artifacts for OpenAI:
  - Define Metrics: Identify the metrics you want from the analysis (e.g., matching percentage, strengths, weaknesses).
  - Generate JSON Schema: Use OpenAI to structure responses, ensuring compatibility with your database.
  - Write a Prompt: Provide OpenAI with a clear and detailed prompt to ensure accurate analysis.

n8n Scenario
- Download File: Fetch the CV using its direct URL.
- Extract Data: Use n8n's PDF or text extraction nodes to retrieve text from the CV.
- Send to OpenAI:
  - URL: POST to OpenAI's API for analysis.
  - Parameters: Include the extracted CV data and job description. Use a JSON Schema to structure the response.

Summary
This workflow provides a seamless, automated solution for CV screening, helping recruitment agencies and HR teams save time while maintaining consistency in candidate evaluation. It enables organizations to focus on the most suitable candidates, improving the overall hiring process.
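To make the "Generate JSON Schema" step concrete, here is a minimal sketch of a schema and request body an n8n Code node could prepare for OpenAI's structured-output API. The metric names (matching_score, strengths, weaknesses, summary), the model name, and the $json input fields are illustrative assumptions rather than the template's exact configuration.

```javascript
// Sketch of a Code node preparing an OpenAI structured-output request.
// Metric names, model, and $json fields are illustrative assumptions.
const cvAnalysisSchema = {
  name: "cv_analysis",
  strict: true,
  schema: {
    type: "object",
    properties: {
      matching_score: { type: "number", description: "0-100 fit between CV and job description" },
      strengths: { type: "array", items: { type: "string" } },
      weaknesses: { type: "array", items: { type: "string" } },
      summary: { type: "string" },
    },
    required: ["matching_score", "strengths", "weaknesses", "summary"],
    additionalProperties: false,
  },
};

const requestBody = {
  model: "gpt-4o-mini", // assumed model; any model that supports structured outputs
  messages: [
    { role: "system", content: "You are a recruitment assistant. Score the CV against the job description." },
    { role: "user", content: `Job description:\n${$json.jobDescription}\n\nCV text:\n${$json.cvText}` },
  ],
  response_format: { type: "json_schema", json_schema: cvAnalysisSchema },
};

// Pass the body on to an HTTP Request node that POSTs to OpenAI's chat completions endpoint.
return [{ json: { requestBody } }];
```

Because the schema is strict, the response comes back in a fixed shape, which keeps the mapping into your database or spreadsheet simple.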
by KumoHQ
Who is this template for?
This workflow template is designed for any professional seeking relevant data from a database using natural language.

How it works
Each time a user asks a question through the n8n chat interface, the workflow runs. The message is processed by an AI Agent using the relevant tools (Execute SQL Query, Get DB Schema and Tables List, and Get Table Definition) as required. The agent uses these tools to form and run the SQL queries necessary to answer the question. Once the AI Agent has the data, it uses it to compose an answer and returns it to the user.

Set up instructions
Complete the Set up credentials step when you first open the workflow. You'll need PostgreSQL credentials and an OpenAI API key.

Template was created in n8n v1.77.0
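As an illustration of what the schema tools give the agent to work with, here is a sketch of the kind of PostgreSQL introspection queries the "Get DB Schema and Tables List" and "Get Table Definition" tools could run, written in Code-node style. The template's actual tool queries may differ, and the "orders" table is a hypothetical example.

```javascript
// Illustrative introspection queries the schema tools could run against
// PostgreSQL's information_schema. The template's actual queries may differ.
const listTablesQuery = `
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE table_type = 'BASE TABLE'
    AND table_schema NOT IN ('pg_catalog', 'information_schema');
`;

const tableDefinitionQuery = (tableName) => `
  SELECT column_name, data_type, is_nullable
  FROM information_schema.columns
  WHERE table_name = '${tableName}'
  ORDER BY ordinal_position;
`;

// Example: describe a hypothetical "orders" table.
return [{ json: { listTablesQuery, ordersDefinitionQuery: tableDefinitionQuery("orders") } }];
```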
by Jaruphat J.
Who is this for?
This workflow is perfect for digital content creators, marketers, and social media managers who regularly create engaging short-form videos featuring inspirational or motivational quotes. While the workflow is universally applicable, it specifically highlights Thai as an example to demonstrate effective language and font integration.

What problem is this workflow solving?
Creating consistent and engaging multilingual video content manually, including attractive fonts and proper video formatting, is time-consuming and repetitive. Additionally, managing files, background music, and updating statuses manually can be tedious and prone to errors.

What this workflow does
- Automatically fetches background video and music files stored on Google Drive.
- Randomly selects a quote (demonstrated with Thai language) and author information from Google Sheets.
- Dynamically combines the selected quote and author text using appealing fonts, such as the Thai font "Kanit," directly onto the video using FFmpeg on your n8n local environment.
- Creates visually engaging videos with a 9:16 aspect ratio, optimized for YouTube Shorts and other vertical video platforms.
- Automatically uploads the finalized video to YouTube.
- Updates the status and YouTube URL back into your Google Sheet, ensuring you have up-to-date records.

Setup
Requirements:
This workflow requires a self-hosted n8n instance, as the execution of FFmpeg commands is not supported on n8n Cloud. Ensure FFmpeg is installed on your self-hosted environment.

Google Sheets Setup:
Your Google Sheet must include at least these columns:
- Index: Unique identifier for each quote
- Quote: Text of the quote
- Author: Author of the quote
- CreateStatus: Track video creation status; values like 'DONE' or blank for pending
- YoutubeURL: Automatically updated after upload
To help you get started quickly, you can use this template spreadsheet.

Next steps:
- Organize your video and music files in separate folders in Google Drive.
- Authenticate your Google Sheets, Google Drive, and YouTube accounts in n8n.
- Ensure fonts compatible with your target languages (such as Kanit for Thai) are available in your FFmpeg installation.

How to customize this workflow to your needs
- **Fonts:** Adjust font styles and sizes within the workflow's code node. Ensure the fonts you choose fully support the language you wish to use.
- **Quote Management:** Easily add or remove quotes and authors in your Google Sheets document.
- **Media Files:** Change or update background videos and music by modifying the files in your Google Drive folders.
- **Video Specifications:** Customize video dimensions, text positioning, opacity, and music volume directly in the provided FFmpeg commands.

Benefits of Using Localized Fonts and Quotes
Utilizing fonts specific to your target language, as demonstrated with Thai, significantly increases audience engagement by making your content more relatable, shareable, and visually appealing. Ensure you select fonts that properly support the language you're targeting.
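To give a feel for the FFmpeg step, below is a rough sketch of a Code node that assembles a drawtext command overlaying a quote and author on a 9:16 frame with the Kanit font; an Execute Command node would then run it. The file paths, font location, sizes, and positions are placeholder assumptions, not the template's exact command.

```javascript
// Sketch of a Code node that builds the FFmpeg command; an Execute Command
// node would run it afterwards. Paths, font file, sizes, and positions are
// placeholder assumptions.
const quote = $json.Quote;
const author = $json.Author;
const input = "/data/background.mp4";
const music = "/data/music.mp3";
const output = "/data/output.mp4";
const font = "/usr/share/fonts/truetype/kanit/Kanit-Regular.ttf";

// Minimal escaping for drawtext: drop quote characters and escape colons.
// Harden this for arbitrary production text.
const esc = (t) => String(t).replace(/'/g, "").replace(/:/g, "\\:");

const filter =
  `[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,` +
  `drawtext=fontfile=${font}:text='${esc(quote)}':fontsize=56:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2,` +
  `drawtext=fontfile=${font}:text='${esc(author)}':fontsize=36:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2+220[v]`;

const command =
  `ffmpeg -y -i "${input}" -i "${music}" -filter_complex "${filter}" ` +
  `-map "[v]" -map 1:a -shortest -c:v libx264 -c:a aac "${output}"`;

return [{ json: { command } }];
```

Adjusting fontsize, the y offsets, or the audio mapping in this string is where the "Video Specifications" customization above happens.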
by Jimleuk
This n8n template monitors an Outlook mailbox for invoices, automatically parses and extracts data from them, and then uploads the output to an Excel workbook. One of my top workflow requests, this template can save many hours of manual labour for you or your finance/accounts team.

How it works
- A scheduled trigger fetches recent Outlook messages from the accounts receivable mailbox.
- Each message is analysed to determine whether or not it is from a supplier and is issuing or contains an invoice.
- For each valid message, the attachments are downloaded and non-invoice documents are filtered out via AI Vision classification.
- Invoices are then processed through an AI vision model again to extract the details.
- The extracted data can then be used for reconciliation or otherwise. For this demonstration, we'll just append the row to an Excel sheet.

How to use
- Ensure your Microsoft 365 credential points to the correct mailbox. If a shared folder is used, toggle the "shared folder" option to "on" and use the email address as the principal ID.
- If you receive lots of other types of messages, such as replies and forwards, you may want to implement additional checks to prevent processing invoices twice. The "Remove Duplicates" node can help with this.

Requirements
- Outlook for the mailbox
- Google Gemini for document understanding and invoice extraction
- Excel for data storage

Customising this workflow
- Note that the assumption for this template is that all invoices arrive as PDF attachments. In real life, this is rarely the case! Consider adding document conversion to cover all invoice formats.
- Human feedback is also an important factor in AI workflows. Try tagging emails as a way to notify team members that the invoice was processed.
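Before wiring the Excel append, it can help to see the shape of a single extracted invoice. The sketch below shows one possible record layout in a Code node; the field names and sample values are assumptions for illustration and should be aligned with your own workbook's columns.

```javascript
// Illustrative shape of one extracted invoice and the flat row appended to
// Excel. Field names are assumptions; align them with your workbook's columns.
const invoice = {
  supplierName: "Acme Supplies Ltd",
  invoiceNumber: "INV-2024-0042",
  invoiceDate: "2024-05-14",
  dueDate: "2024-06-13",
  currency: "GBP",
  subtotal: 1200.0,
  tax: 240.0,
  total: 1440.0,
};

return [{
  json: {
    ...invoice,
    processedAt: new Date().toISOString(),
    sourceMessageId: $json.messageId ?? null, // hypothetical field carried over from the Outlook node
  },
}];
```

Keeping one flat row per invoice makes the Excel "append row" operation, and later reconciliation, straightforward.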
by irfan saeed
Auto-Generate YouTube Chapters with AI-Powered Transcript Analysis

Overview
This workflow uses the YouTube Data API v3 and Google Gemini 1.5 Flash to automatically generate timestamped chapters for videos by analyzing SRT captions. It enhances viewer navigation, improves SEO, and saves creators time by automating manual tasks.

Prerequisites
YouTube API Setup
1. Create a Google Cloud Project
   - Go to the Google Cloud Console.
   - Click Select a project > New Project and name it (e.g., "YouTube Chapters Automation").
2. Enable YouTube Data API v3
   - Navigate to APIs & Services > Library.
   - Search for "YouTube Data API v3" and click Enable.
3. Configure OAuth Consent Screen
   - Go to APIs & Services > OAuth consent screen.
   - Select External (public) or Internal (testing), then add the required details (app name, support email).
4. Generate OAuth 2.0 Credentials
   - Under Credentials, click Create Credentials > OAuth client ID.
   - Choose Web app, then download the JSON key file.
5. Add Credentials to n8n

Other Requirements
- **Google Gemini API**: Configure access for the gemini-1.5-flash-8b-exp-0924 model by obtaining the API key.

Workflow Steps
1. Set Video ID
   - Input the target video ID (e.g., r1wqsrW2vmE) using the Set Video ID node.
2. Fetch Video Metadata
   - Use the YouTube API node to retrieve the video's title, category, and existing description.
3. Download SRT Captions
   - Get Caption ID: Call https://www.googleapis.com/youtube/v3/captions to fetch the caption track ID.
   - Download Transcript: Use the ID to retrieve SRT data via https://www.googleapis.com/youtube/v3/captions/{{ID}}?tfmt=srt.
4. Analyze Transcript with Gemini AI
   - Process the SRT file with Google Gemini AI to identify chapters using a prompt like: "Classify this transcript into timestamped chapters (e.g., 00:00 - Introduction)."
   - Validate the output with a structured parser (e.g., the Structured Captions node).
5. Update Video Description
   - Append the chapters to the description using the YouTube API's videos.update method.

Value Proposition
- **Viewer Experience**: Chapters improve navigation and reduce drop-off rates.
- **SEO Benefits**: Structured descriptions enhance search visibility.
- **Time Savings**: Eliminates manual chapter creation.
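To show how steps 4 and 5 fit together, here is a sketch of a Code node that turns an AI-generated chapter list into the description text and builds the videos.update payload. The chapter structure and the incoming $json field names are assumptions; the template's own nodes may name them differently.

```javascript
// Sketch: turn an AI-generated chapter list into a description block and build
// the videos.update payload. Field names on $json are assumptions.
const chapters = $json.chapters ?? [
  { start: "00:00", title: "Introduction" },
  { start: "02:15", title: "Main topic" },
];

const chapterBlock = chapters.map((c) => `${c.start} - ${c.title}`).join("\n");

// When updating snippet.description, the YouTube API also expects snippet.title
// and snippet.categoryId, so they are passed through from the metadata step.
const body = {
  id: $json.videoId,
  snippet: {
    title: $json.title,
    categoryId: $json.categoryId,
    description: `${$json.description}\n\nChapters:\n${chapterBlock}`,
  },
};

return [{ json: { body } }];
```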
by Ranjan Dailata
Who is this for?
This workflow is designed for professionals and teams who need real-time, structured insights from Google Search results without manual effort.

What problem is this workflow solving?
This n8n workflow solves the problem of automating Google Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, like sending the results to a webhook or another system.

What this workflow does
Automates Google Search via Bright Data
- Uses Bright Data's proxy-based SERP API to run a Google Search query programmatically.
- Makes the process repeatable and scriptable with different search terms and regions/zones.

Cleans and Extracts Useful Content
- The Google Search Data Extractor uses LLM-based cleaning to remove HTML/CSS/JS from the response and extract pure text data.
- Converts messy, unstructured web content into a structured, machine-readable format.

Summarizes Search Results
- Through the Gemini Flash + Summarization Chain, it generates a concise summary of the search results.
- Ideal for users who don't have time to read full pages of search results.

Formats Data Using AI Agent
- The AI Agent acts like a virtual assistant that:
  - Understands search results
  - Formats them in a readable, JSON-compatible form
  - Prepares them for webhook delivery

Delivers Results to Webhook
- Sends the final summary plus the structured search results to a webhook (your app, a Slack bot, Google Sheets, or a CRM).

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Google Search query as you wish by navigating to the Set Google Search Query node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
1. Change the Search Input
   - Default: It searches a fixed query or dataset.
   - Customize: Accept input from a Google Sheet, Airtable, or a form. Auto-trigger searches based on keywords or schedules.
2. Customize Summarization Style (LLM Output)
   - Default: General summary using Google Gemini or OpenAI.
   - Customize: Add a tone: formal, casual, technical, executive summary, etc. Focus on specific sections: pricing, competitors, FAQs, etc. Translate the summaries into multiple languages. Add bullet points, pros/cons, or insight tags.
3. Choose Where the Results Go
   - Options: Email, Slack, Notion, Airtable, Google Docs, or a dashboard. Auto-create content drafts for WordPress or newsletters. Feed into CRM notes or attach to Salesforce leads.
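For orientation, here is a rough sketch of the SERP request the HTTP Request node could send through Bright Data's Web Unlocker. The endpoint and body fields follow Bright Data's documented request pattern as I understand it, but treat them as assumptions and copy the exact values (endpoint, zone name, token, format) from your own Bright Data dashboard and the preconfigured node.

```javascript
// Sketch of the Web Unlocker request an HTTP Request node could send.
// Endpoint and body fields follow Bright Data's documented pattern; verify
// them (and the zone/token values) against your own account.
const query = encodeURIComponent($json.searchQuery ?? "n8n workflow automation");

const request = {
  method: "POST",
  url: "https://api.brightdata.com/request",
  headers: {
    Authorization: `Bearer ${$json.webUnlockerToken}`, // stored as an n8n Header Auth credential
    "Content-Type": "application/json",
  },
  body: {
    zone: $json.brightDataZone, // your Web Unlocker zone name
    url: `https://www.google.com/search?q=${query}&gl=us&hl=en`,
    format: "raw", // return raw HTML for the LLM-based cleaner
  },
};

return [{ json: request }];
```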
by Ranjan Dailata
Who is this for?
- Researchers who need structured information from Wikipedia pages regularly.
- Data Engineers building knowledge bases or enriching datasets with factual data.
- Digital Marketers or Content Writers automating fact-checking or content sourcing.
- Automation Enthusiasts who want to trigger external systems with rich context from Wikipedia.

What problem is this workflow solving?
This workflow addresses the challenges of manually retrieving, structuring, and using data from Wikipedia at scale.

What this workflow does
This workflow automates the process of Wikipedia data extraction using the Bright Data Web Unlocker, parsing and cleaning the data, and then sending the results to a specified webhook URL for downstream processing, reporting, or integration.

Workflow Breakdown
- Trigger
  - Type: Scheduled or Manual
  - Purpose: Starts the workflow either on a fixed schedule (e.g., daily) or on demand via a manual trigger or incoming webhook.
- Bright Data Wikipedia Scraping
  - Tool Used: Bright Data Web Unlocker
  - Action: Scrape the HTML content of one or multiple Wikipedia article URLs.
- Parse & Extract Structured Data
  - The Basic LLM Chain node is responsible for producing human-readable content.
- Summarization
  - Summarize the Wikipedia content by utilizing the Summarization Chain node.
- Send to Webhook
  - Initiates a webhook notification to the specified URL as part of the "Summary Webhook Notifier" node.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set Wikipedia URL with Bright Data Zone node with the Wikipedia URL and Bright Data zone.
6. Update the Summary Webhook Notifier node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- Update Wikipedia URL: Replace with a Wikipedia URL of your own interest. Make sure to set the Wikipedia URL as part of the "Set Wikipedia URL with Bright Data Zone" node.
- Modify Data Extraction Logic: Extract the entire article content or just specific sections by extending the "LLM Data Extractor" node prompt.
- Extend AI Summarization: Extract key bullet points or entities. Create short-form summaries by extending the "Concise Summary Generator" node.
- Extend Summary Webhook Notifier: Send to Slack, Discord, Telegram, or MS Teams via the webhook notification mechanism. Connect to your internal database/API via the webhook notification mechanism.
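As a reference for the final step, here is a sketch of the kind of JSON payload the Summary Webhook Notifier could deliver. The field names are assumptions; shape the payload to whatever your downstream system (Slack, Teams, database, etc.) expects.

```javascript
// Illustrative payload for the Summary Webhook Notifier. Field names are
// assumptions; shape the payload to whatever your downstream system expects.
const payload = {
  source: "wikipedia",
  url: $json.wikipediaUrl,
  zone: $json.brightDataZone,
  extractedText: $json.cleanedContent, // output of the LLM extraction step
  summary: $json.summary,              // output of the summarization chain
  generatedAt: new Date().toISOString(),
};

return [{ json: payload }];
```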
by Ifeoluwa Ajetomobi
This workflow helps you stay updated with daily launches on Product Hunt. It automatically fetches product details (name, tagline, description, and website), checks if the website redirects to another URL, and logs the final information into a Google Sheet. Perfect for indie hackers, product managers, content curators, and anyone tracking daily launches.

How It Works
1. Schedule Trigger – Runs the workflow daily.
2. Set Date – Captures today's date in ISO format for filtering Product Hunt posts.
3. HTTP Request (Product Hunt API) – Retrieves Product Hunt posts for the day using GraphQL.
4. Extract Product Info (Code Node) – Parses the response to pull key details: name, tagline, description, website URL.
5. HTTP Request (URL Check) – Follows each website URL to detect if it redirects.
6. Merge Data – Combines product info with the final destination URL.
7. Google Sheets Node – Appends all processed product info to your sheet.

Pre-conditions
- A valid Product Hunt API token
- A Google account with access to Google Sheets
- A Google Sheet already created with the correct columns (see below)
- Connected Google Sheets and HTTP credentials in n8n

Google Sheets Setup
Your spreadsheet should include the following columns (in order):
1. Name
2. Tagline
3. Description
4. Original URL
5. Final URL (after redirect)
Ensure your Google Sheets node uses the correct Spreadsheet ID and Sheet Name.

Setup Instructions
1. Product Hunt API Auth: Replace {{YOUR_PRODUCT_HUNT_API_KEY}} in the HTTP Request headers:
   { "Authorization": "Bearer {{YOUR_PRODUCT_HUNT_API_KEY}}" }
2. Google Sheets Node:
   - Connect your Google account.
   - Insert your Spreadsheet ID in the settings.
   - Specify the sheet name (e.g., Daily Launches).
   - Use the "Append" operation and map the 5 data fields accordingly.

Notes
- Only fetches the first 10 posts for the day (can be extended).
- Consider adding Slack, Discord, or Email nodes to notify you of new entries.
- Useful for building launch databases, research, or content inspiration.
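For step 4, here is a sketch of what the Extract Product Info Code node can look like. It assumes the response follows Product Hunt's GraphQL API v2 shape (data.posts.edges[].node); double-check the field names against the query used in your HTTP Request node and the API explorer.

```javascript
// Sketch of the Extract Product Info Code node. Assumes the response follows
// Product Hunt's GraphQL API v2 shape (data.posts.edges[].node); confirm the
// field names against the query in your HTTP Request node.
const edges = $json.data?.posts?.edges ?? [];

return edges.map(({ node }) => ({
  json: {
    name: node.name,
    tagline: node.tagline,
    description: node.description,
    website: node.website, // the outbound link that may redirect
  },
}));
```

Returning one n8n item per launch lets the URL-check and Google Sheets nodes run once per product without any extra splitting.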
by Yang
Who is this for?
This workflow is built for marketers, sales teams, agencies, virtual assistants, and anyone who regularly researches or contacts local businesses. It's ideal for building lead lists, tracking competitors, or creating location-specific outreach campaigns.

What problem is this workflow solving?
Instead of manually searching Google Maps and copying business info into spreadsheets, this automation pulls structured business data (e.g. restaurants, gyms, service providers) and logs it directly into Google Sheets. It saves hours of work and ensures cleaner, more usable data.

What this workflow does
The workflow takes a Google Maps search query (like "best restaurants in New York") and sends it to Dumpling AI. It returns a list of places including their name, address, website, phone number, rating, and more. Each result is split into a row and automatically added to a Google Sheet.

Setup
1. Dumpling AI
   - Sign up at Dumpling AI
   - Generate your API key
   - In the HTTP Request node, select Header Auth and paste your key in the Authorization field
2. Google Sheets
   - Create a sheet with tab name Leads
   - Add the following column headers to row 1: Name, Address, Phone number, Website, Rating, Price Level, Type, Booking Link, Position
   - Connect your Google Sheets account and link this sheet in the node
3. Customize the Query
   - In the HTTP node, replace the query string (e.g., "best+restaurants+in+New+York") with your own search term
4. Run It
   - Use the manual trigger to test
   - Optionally swap in a Schedule or Webhook node to run it automatically

How to customize this workflow to your needs
- Change the search query to target different cities or business types
- Use filters to only save leads with a minimum rating or price level (see the sketch below)
- Add GPT to summarize listings or qualify leads
- Swap Google Sheets for Airtable or a CRM system for deeper integration
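Here is the filtering sketch referenced above: a small Code node you could place between the result-splitting step and the Google Sheets node to keep only leads above a minimum rating and below a maximum price level. The field names (Rating, Price Level) mirror the sheet columns listed in the setup; adjust them if Dumpling AI labels the fields differently.

```javascript
// Keep only leads that meet a minimum rating and a maximum price level before
// they are appended to the "Leads" sheet. Field names assume the columns above.
const MIN_RATING = 4.0;
const MAX_PRICE_LEVEL = 3;

return $input.all().filter((item) => {
  const rating = Number(item.json.Rating ?? 0);
  const price = Number(item.json["Price Level"] ?? 0);
  return rating >= MIN_RATING && (price === 0 || price <= MAX_PRICE_LEVEL);
});
```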
by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector — fully automated through n8n.

⚙️ What It Does
This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI's text-embedding-3-small model. These embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents. After successful processing, files are moved to a separate "vectorized" folder to avoid duplication.

💡 Use Cases
- Powering Retrieval-Augmented Generation (RAG) AI agents
- Semantic search across private documents
- AI assistant knowledge ingestion
- Automated document pipelines for indexing or classification

🧠 Workflow Highlights
- **Trigger Options:** Manual or Scheduled (3 AM daily by default)
- **Supported File Types:** PDF, TXT, JSON
- **Embedding Stack:** LangChain Text Splitter, OpenAI Embeddings, PGVector
- **Deduplication:** Files are moved after processing
- **License:** CC BY-NC-SA 4.0
- **Author:** AlexK1919

🛠 What You'll Need
- **Google Drive OAuth2** credentials (connected to the Search Folder, Download File, and Move File nodes)
- **OpenAI API Key** (used in the Embeddings OpenAI node)
- **Postgres + PGVector** database (connected in the Postgres PGVector Store node)

🔧 Step-by-Step Setup Instructions
1. Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
2. Set your source folder ID in the Search Folder node — this is where incoming files are placed.
3. Set your processed folder ID in the Move File node — files will be moved here after vectorization.
4. Ensure you have a PGVector-enabled Postgres instance and input the table name and collection in the Postgres PGVector Store node.
5. Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
6. Optional: Activate the Schedule Trigger node to run daily or configure your own schedule.
7. Run manually by triggering When clicking 'Test workflow' for on-demand ingestion.

🧩 Customization Tips
Want to support more file types or enhance the pipeline?
- **Add new extractors**: Use Extract from File with other formats like DOCX, Markdown, or HTML.
- **Refine logic by file type**: The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json); see the sketch below.
- **Pre-process with OCR**: Add an OCR step before extraction to handle scanned PDFs or images.
- **Add filters**: Enhance the Search Folder or Switch node logic to skip specific files or folders.

📄 License
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
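Here is the sketch mentioned in the customization tips: the Switch node's MIME-type routing written out as an equivalent Code-node snippet, purely for clarity. The route labels and the mimeType field name are assumptions; the template itself uses a Switch node rather than code.

```javascript
// The Switch node's MIME-type routing written out as an equivalent Code-node
// snippet, purely for clarity. Route labels and the mimeType field name are
// assumptions; the template itself uses a Switch node.
const mimeType = $json.mimeType;

let route;
switch (mimeType) {
  case "application/pdf":
    route = "extract-pdf";
    break;
  case "text/plain":
    route = "extract-text";
    break;
  case "application/json":
    route = "extract-json";
    break;
  default:
    route = "skip"; // unsupported type: leave the file untouched
}

return [{ json: { ...$json, route } }];
```

Adding a new case (e.g. for DOCX) plus a matching extractor branch is all it takes to extend the supported file types.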
by Xavier
This workflow creates nested Google Drive folders from a path string (like Projects/Clients/Reports). It automatically handles the necessary folder lookups and creation steps required by Google Drive, then outputs the final folder's ID for immediate use.

How it works
This workflow streamlines the creation of nested folders in Google Drive:
1. Input: Provide a root_folder_id and a path (e.g., Projects/Clients/Reports) as input.
2. Path Parsing: The workflow splits the path into individual folder names (based on the / separator).
3. Iterative Check & Create: Loops through each part of your path:
   - Searches within the current parent folder (starting with the root_folder_id) for a subfolder matching the name.
   - If found: Retrieves the existing folder's ID to use as the parent for the next iteration.
   - If not found: Creates a new folder with that name inside the current parent folder and uses the new folder's ID as the parent for the next iteration.
4. Output: Returns the Google Drive folder ID of the very last folder in the specified path (e.g., the ID for Reports in the example above). This ID can then be directly used in subsequent n8n nodes to upload files, create documents, or perform other actions within that specific folder.

Set up steps
Setting up this workflow requires configuring the connection to Google Drive and knowing where to start creating folders:
1. Connect Google Drive Account: Ensure you have a Google Drive credential configured in your n8n instance. Then link your credentials in the workflow: there are 2 Google Drive nodes that will need to be updated.
2. Identify Starting Folder ID: Determine the Google Drive folder ID where your nested structure should begin. You can either use the root of your Google Drive or a specific folder:
   - To use the root of Google Drive, simply set root_folder_id to root (also called "My Drive" in the UI).
   - To use a specific folder, open the folder in a web browser and look at the URL. The folder ID is in the last part of the URL: https://drive.google.com/drive/folders/THIS_IS_THE_FOLDER_ID.
3. Prepare Inputs for Execution: When running the workflow (or triggering it), you will need to provide:
   - google_drive_folder_id -> the root folder ID you identified in step 2.
   - desired_path -> the path you want to create (e.g., Projects/Clients/Reports).

Here's an example of how you can call this workflow in your other workflows:
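A minimal sketch of what that call and the internal path handling might look like, written as a Code node for illustration. The input names follow the "Prepare Inputs for Execution" section above; the actual lookup/create loop is performed by the Google Drive nodes inside the workflow.

```javascript
// Sketch of calling this sub-workflow from a parent workflow, with the two
// inputs named in "Prepare Inputs for Execution". The lookup/create loop is
// shown only conceptually; the real work is done by the Google Drive nodes.
const inputs = {
  google_drive_folder_id: "root",          // or a specific folder ID taken from the URL
  desired_path: "Projects/Clients/Reports",
};

// Inside the sub-workflow, the path is split and walked one segment at a time.
const segments = inputs.desired_path.split("/").filter(Boolean);
// e.g. ["Projects", "Clients", "Reports"]: for each segment the workflow
// searches the current parent folder, reuses a matching subfolder if found or
// creates one otherwise, and carries the resulting folder ID into the next step.

return [{ json: { ...inputs, segments } }];
```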
by Yang
👥 Who is this for?
This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate extraction and delivery of clean, sorted data.

🧩 What problem is this solving?
Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email, all triggered automatically when a new URL is added to Google Sheets.

⚙️ What this workflow does
This template automates an entire content scraping and delivery process:
1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order based on price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail

🛠️ Setup
Google Sheets
- Create a sheet titled something like URLs
- Add your product listing URLs (e.g., http://books.toscrape.com)
- Connect the Google Sheets trigger node to your sheet
- Ensure you have proper credentials connected

Dumpling AI
- Create an account at Dumpling AI
- Generate your API key
- Set the HTTP Method to POST and pass the URL dynamically from the Google Sheet
- Use Header Auth to include your API key in the request header
- Make sure "cleaned": "True" is included in the body for optimized HTML output

HTML Node
- The first HTML node extracts the main book container blocks using: .row > li
- The second HTML node parses out the individual fields:
  - title: h3 > a (via the title attribute)
  - price: .price_color

Sort Node
- Sorts books by price in descending order
- Note: the price is extracted as a string; ensure it's parsable if you plan to use numeric filtering later (see the sketch below)

Convert to CSV
- The JSON data is passed into a Convert node and transformed into a CSV file

Gmail
- Sends the CSV as an attachment to a designated email

🔄 How to customize this workflow
- **Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links
- **Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform
- **Adjust sorting**: Sort alphabetically or based on another extracted value
- **Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page
- **Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets

⚠️ Dependencies and Notes
- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
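Here is the sketch referenced in the Sort Node notes: a small Code node that converts the scraped price string (e.g. "£51.77") into a number before sorting or filtering. It assumes the field is named price, as extracted by the second HTML node.

```javascript
// Convert the scraped price string (e.g. "£51.77") into a number so sorting
// and filtering behave numerically. Assumes the field is named "price", as
// produced by the second HTML node.
return $input.all().map((item) => {
  const raw = String(item.json.price ?? "");
  const priceValue = parseFloat(raw.replace(/[^0-9.]/g, "")) || 0;
  return { json: { ...item.json, priceValue } };
});
```

Point the Sort node at priceValue instead of price and the descending order will follow numeric value rather than string comparison.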