by InfraNodus
Set Up ElevenLabs Voice Chat Agent using Graph RAG Knowledge Graphs as Experts

This workflow creates an AI voice chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided through InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows. We use ElevenLabs to set up a voice agent that can be embedded into any website or used via their API.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed) and to update with new knowledge
- A knowledge graph has a holistic overview of your knowledge base
- Better retrieval of relations between the document chunks = higher-quality responses
- Ability to reuse in other n8n workflows

## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. The user's prompt is received from the ElevenLabs Conversational AI agent via an n8n Webhook; ElevenLabs also takes care of the voice interaction. The response from n8n is then sent back to the webhook, which is polled by the ElevenLabs voice agent. This agent processes the response and provides the final answer.

Here's a step-by-step description:

1. The user submits a question using the ElevenLabs voice interface.
2. The question is sent via the knowledge_base tool in ElevenLabs to the n8n Webhook as a POST request containing the user's prompt and a sessionID for the Chat Memory node in n8n (see the payload sketch below).
3. The n8n AI Agent node checks the list of tools it has access to. Each tool has a description of the knowledge auto-generated by InfraNodus (we call each tool an "expert").
4. The n8n AI agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
5. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert.
6. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
7. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
8. The final answer is sent back to the webhook endpoint.
9. The ElevenLabs conversational AI agent picks up the response arriving from the knowledge_base tool via the webhook, condenses it for conversational format, and transforms the text into voice.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep other settings intact or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

You will also need an ElevenLabs account with a conversational AI agent set up there.
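For reference, here is a minimal sketch of the POST payload the ElevenLabs knowledge_base tool sends to the n8n Webhook. The field names are assumptions based on the description above; match them to the body parameters you define in your ElevenLabs tool configuration.

```javascript
// Illustrative webhook payload from the ElevenLabs knowledge_base tool.
// Field names are assumptions - align them with your tool configuration.
const examplePayload = {
  prompt: "What does the graph say about network thinking?", // the user's question
  sessionID: "elevenlabs-conv-12345", // used by the n8n Chat Memory node to keep context
};

// In n8n, the Webhook node exposes these values to downstream nodes as
// {{ $json.body.prompt }} and {{ $json.body.sessionID }}.
```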
See the Post note in the n8n workflow for a complete step-by-step description or our support article on setting up an ElevenLabs AI voice agent.

Once the voice AI agent is ready, you might want to combine it with a text AI chatbot workflow so your users have a choice between text and voice interaction. In that case, you may be interested in our free open-source website popup chat widget popupchat.dev, where you can create an embed code to add to your blog or website and let the user choose between text and voice interaction.

## Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- An ElevenLabs account

## FAQ

### 1. How many "experts" should I aim for?

We recommend aiming for the same number of experts as the optimal number of people in a team, which is usually 2–7. If you add more experts, your AI orchestrating agent will have trouble choosing the most suitable "expert" tool for the user's query. You can mitigate this by specifying in the AI agent description that it can choose a maximum of 3–7 experts to provide a response.

### 2. Why use InfraNodus GraphRAG and not a standard vector store for knowledge?

First, vector stores are complex to set up and to update. You'd need a separate workflow for that, decide on the vector dimensions, add metadata to your knowledge, etc. With InfraNodus, you have a complete RAG / GraphRAG solution under the hood that is easy to set up and provides high-quality responses that take the overall structure and the relations between your ideas into account.

### 3. Why not use ElevenLabs' own knowledge base?

One reason is that you want your knowledge base to be in one place so you can reuse it in other n8n workflows. Another is that you will not have as good a separation between the "experts" when you converse with the agent: the answers you get will be based on top matches from all the books / articles you upload, while with the InfraNodus GraphRAG setup you can better control which graphs are consulted as experts and have an explicit way to display this data.

## Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available on our GitHub repo for n8n workflows.

Check out the complete setup guide for this workflow at https://support.noduslabs.com/hc/en-us/articles/20318967066396-How-to-Build-a-Text-Voice-AI-Agent-Chatbot-with-n8n-Elevenlabs-and-InfraNodus

Also check out the video tutorial with a demo:
by Akash Kankariya
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## 🎯 Overview

This n8n workflow template automates the process of monitoring Instagram comments and sending predefined responses based on specific comment keywords. It integrates Instagram's Graph API with Google Sheets to manage comment responses and maintains an interaction log for customer relationship management (CRM) purposes.

## 🔧 Workflow Components

The workflow consists of 9 main nodes organized into two primary sections:

### 📡 Section 1: Webhook Verification

- ✅ Get Verification (Webhook node)
- 🔄 Respond to Verification Message (Respond to Webhook node)

### 🤖 Section 2: Auto Comment Response

- 📬 Insta Update (Webhook node)
- ❓ Check if update is of comment? (Switch node)
- 👤 Comment if of other user (If node)
- 📊 Comment List (Google Sheets node)
- 💬 Send Message for Comment (HTTP Request node)
- 📝 Add Interaction in Sheet (CRM) (Google Sheets node)

## 🛠️ Prerequisites and Setup Requirements

### 1. 🔵 Meta/Facebook Developer Setup

📱 Create Facebook App

> 📋 Action Items:
> - [ ] Navigate to Facebook Developers
> - [ ] Click "Create App" and select "Business" type
> - [ ] Configure the following products:
>   - ✅ Instagram Graph API
>   - ✅ Facebook Login for Business
>   - ✅ Webhooks

🔐 Required Permissions

Configure the following permissions in your Meta app:

| Permission | Purpose |
|------------|---------|
| instagram_basic | 📖 Read Instagram account profile info and media |
| instagram_manage_comments | 💬 Create, delete, and manage comments |
| instagram_manage_messages | 📤 Send and receive Instagram messages |
| pages_show_list | 📄 Access connected Facebook pages |

🎫 Access Token Generation

> ⚠️ Important Setup:
> - [ ] Use Facebook's Graph API Explorer
> - [ ] Generate a User Access Token with required permissions
> - [ ] ⚡ Important: Tokens expire periodically and need refreshing

### 2. 🌐 Webhook Configuration

🔗 Setup Webhook URL

> 📌 Configuration Checklist:
> - [ ] In Meta App Dashboard, navigate to Products → Webhooks
> - [ ] Subscribe to the Instagram object
> - [ ] Configure webhook URL: your-n8n-domain/webhook/instagram
> - [ ] Set a verification token (use "test" or create a secure token)
> - [ ] Select webhook fields:
>   - ✅ comments - For comment notifications
>   - ✅ messages - For DM notifications (if needed)

✅ Webhook Verification Process

The workflow handles Meta's webhook verification automatically (see the sketch after this section):

1. 📡 Meta sends a GET request with the hub.challenge parameter
2. 🔄 The workflow responds with the challenge value to confirm the subscription

### 3. 📊 Google Sheets Setup

Example - https://docs.google.com/spreadsheets/d/1ONPKJZOpQTSxbasVcCB7oBjbZcCyAm9gZ-UNPoXM21A/edit?usp=sharing

📋 Create Response Management Sheet

Set up a Google Sheets document with the following structure:

📝 Sheet 1 - Comment Responses:

| Column | Description | Example |
|--------|-------------|---------|
| 💬 Comment | Trigger keywords | "auto", "info", "help" |
| 📝 Message | Corresponding response message | "Thanks for your comment! We'll get back to you soon." |

📈 Sheet 2 - Interaction Log:

| Column | Description | Purpose |
|--------|-------------|---------|
| ⏰ Time | Timestamp of interaction | Track when interactions occur |
| 🆔 User Id | Instagram user ID | Identify unique users |
| 👤 Username | Instagram username | Human-readable identification |
| 📝 Note | Additional notes or error messages | Debugging and analytics |

🔧 Built By - akash@codescale.tech
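As a minimal sketch of the webhook verification handshake described above: Meta calls your webhook URL with its documented hub.* query parameters and expects the hub.challenge value echoed back as plain text. The logic below is illustrative, not the exact node configuration, and assumes the verification token "test" from the checklist:

```javascript
// Sketch of Meta's webhook verification handshake.
// Meta sends: GET your-n8n-domain/webhook/instagram
//   ?hub.mode=subscribe&hub.verify_token=<your token>&hub.challenge=<random string>
const query = $json.query;   // query params as exposed by the n8n Webhook node
const verifyToken = 'test';  // the token you set in the Meta App Dashboard

if (query['hub.mode'] === 'subscribe' && query['hub.verify_token'] === verifyToken) {
  // Echo the challenge back; Meta confirms the subscription on a match.
  return [{ json: { challenge: query['hub.challenge'] } }];
}
return [{ json: { error: 'verification token mismatch' } }];
```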
by Yang
## Who is this for?

This workflow is perfect for eCommerce teams, market researchers, and product analysts who want to track or extract product information from websites that restrict scraping tools. It's also useful for virtual assistants handling product comparison tasks.

## What problem is this workflow solving?

Many eCommerce and retail sites use dynamic content or anti-bot protections that make traditional scraping methods unreliable. This workflow bypasses those issues by taking a screenshot of the full page, using OCR to extract visible text, and summarizing product information with GPT-4o, all fully automated.

## What this workflow does

This workflow monitors a Google Sheet for new URLs. Once a new link is added, it performs the following steps:

1. **Trigger on New URL in Sheet** - Watches for new rows added to a Google Sheet.
2. **Screenshot URL via Dumpling AI** - Sends the URL to Dumpling AI's screenshot endpoint to capture a full-page image of the product webpage (see the request sketch below).
3. **Save Screenshot to Drive Folder** - Uploads the screenshot to a specific Google Drive folder for reference or logging.
4. **Extract Text from Screenshot with Dumpling AI** - Uses Dumpling AI's image-to-text endpoint to pull all visible content from the screenshot.
5. **Extract Product Info from Screenshot Text with GPT-4o** - Sends the extracted raw text to GPT-4o, prompting it to identify structured product information such as product name, price, ratings, deals, and purchase options.
6. **Split Each Product Entry** - Splits the GPT response (an array of product objects) so each product becomes an individual item for saving.
7. **Save Products info to Google Sheet** - Appends each product's structured details to a separate sheet in the same spreadsheet.

## Setup

**Google Sheet**
- Create a Google Sheet with at least two sheets:
  - Sheet1 should contain a header row with a column labeled URL.
  - Sheet2 should contain headers: Product Name, price, purchased, ratings, deal, buyingOptions.
- Connect your Google account in both the trigger and the final write-back node.

**Dumpling AI**
- Sign up at Dumpling AI.
- Create an API key and use it for both HTTP modules: Screenshot URL via Dumpling AI, and Extract Text from Screenshot with Dumpling AI.
- The screenshot endpoint used is https://app.dumplingai.com/api/v1/screenshot.

**Google Drive**
- Create a folder for storing screenshots.
- In the Save Screenshot to Drive Folder node, select the correct folder or provide the folder ID.
- Make sure permissions allow uploading from n8n.

**OpenAI**
- Provide an API key for GPT-4o in the Extract Product Info from Screenshot Text with GPT-4o node.
- The prompt is structured to return structured product listings in JSON format.

**Split & Save**
- Split Each Product Entry takes the array of product objects from GPT and makes each one a separate execution.
- Save Products info to Google Sheet writes structured fields into Sheet2 under: Product Name, price, purchased, ratings, deal, buyingOptions.

## How to customize this workflow

- Adjust the GPT prompt to return different product fields (e.g., shipping info, product categories).
- Use a filter node to limit which types of products get written to the final sheet.
- Add sentiment analysis to analyze review content if available.
- Replace Google Drive with Dropbox or another file storage app.

## Notes

Make sure you monitor your API usage on both Dumpling AI and OpenAI to avoid rate limits. This setup is great for snapshot-based extraction where scraping is blocked or unreliable.
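For orientation, here is a hedged sketch of what the screenshot call might look like. The endpoint URL comes from the setup notes above; the request body and response shape are assumptions to verify against Dumpling AI's API documentation:

```javascript
// Hedged sketch of the Dumpling AI screenshot call made by the HTTP node.
// The endpoint is from the template; body and response fields are assumptions.
const response = await fetch('https://app.dumplingai.com/api/v1/screenshot', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.DUMPLING_API_KEY}`, // your Dumpling AI key
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com/product-page', // the URL read from Sheet1's URL column
  }),
});
const data = await response.json(); // assumed to carry the screenshot (URL or base64 image)
```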
by David Olusola
n8n Set Node Tutorial - Complete Guide

## 🎯 How It Works

This tutorial workflow teaches you everything about n8n's Set node through hands-on examples. The Set node is one of the most powerful tools in n8n - it allows you to create, modify, and transform data as it flows through your workflow.

What makes this tutorial special:

- **Progressive Learning**: Starts simple, builds to complex concepts
- **Interactive Examples**: Real working nodes you can modify and test
- **Visual Guidance**: Sticky notes explain every concept
- **Branching Logic**: Shows how Set nodes work in different workflow paths
- **Real Data**: Uses practical examples you'll encounter in automation

The workflow demonstrates 6 core concepts:

1. Basic data types (strings, numbers, booleans)
2. Expression syntax with {{ }} and $json references
3. Complex data structures (objects and arrays)
4. "Keep Only Set" option for clean outputs
5. Conditional data setting with branching logic
6. Data transformation and aggregation techniques

## 📋 Setup Steps

### Step 1: Import the Workflow

1. Copy the JSON from the code artifact above
2. Open your n8n instance in your browser
3. Navigate to the Workflows section
4. Click "Import from JSON" or the import button (usually a "+" or import icon)
5. Paste the JSON into the import dialog
6. Click "Import" to load the workflow
7. Save the workflow (Ctrl+S or click the Save button)

### Step 2: Choose Your Starting Point

**Option A: Default Tutorial Mode** (Recommended for beginners)

- The workflow is ready to run as-is
- Uses a simple "Welcome" message as starting data
- Click "Execute Workflow" to begin

**Option B: Rich Test Data Mode** (Recommended for experimentation)

1. Locate the nodes: Find "Start (Manual Trigger)" and "0. Test Data Input"
2. Disconnect default: Click the connection line between "Start (Manual Trigger)" → "1. Set Basic Values" and delete it
3. Connect test data: Drag from "0. Test Data Input" output to "1. Set Basic Values" input
4. Execute: Click "Execute Workflow" to run with rich test data

### Step 3: Execute and Learn

1. Run the workflow: Click the "Execute Workflow" button
2. Check outputs: Click on each node to see its output data
3. Read the notes: Each sticky note explains what's happening
4. Follow the flow: Data flows from left to right, top to bottom

### Step 4: Experiment and Modify

Try these experiments:

🔧 **Change Basic Values**:
- Click on "1. Set Basic Values"
- Modify user_age (try 20 vs 35)
- Change user_name to see how it propagates
- Execute and see the changes flow through

📊 **Test Conditional Logic**:
- Set user_age to 20 → triggers the "Student Discount" path
- Set user_age to 30 → triggers the "Premium Access" path
- Watch how the workflow branches differently

🎨 **Modify Expressions**: In "2. Set with Expressions", try changing:
- ={{ $json.score * 2 }} to ={{ $json.score * 3 }}
- ={{ $json.user_name }} Smith to ={{ $json.user_name }} Johnson

🏗️ **Complex Data Structures**:
- In "3. Set Complex Data", modify the JSON structure
- Add new properties to the user_profile object
- Try nested expressions

## 🎓 Learning Path

**Beginner Level (Nodes 1-2)**
- Focus: Understanding basic Set operations
- Learn: Data types, static values, simple expressions
- Time: 10-15 minutes

**Intermediate Level (Nodes 3-4)**
- Focus: Complex data and output control
- Learn: Objects, arrays, "Keep Only Set" option
- Time: 15-20 minutes

**Advanced Level (Nodes 5-6)**
- Focus: Conditional logic and data aggregation
- Learn: Branching workflows, merging data, complex expressions
- Time: 20-25 minutes

## 🔍 What Each Node Teaches

| Node | Concept | Key Learning |
|------|---------|-------------|
| 1. Set Basic Values | Data Types | String, number, boolean basics |
| 2. Set with Expressions | Dynamic Data | {{ }} syntax, $json references, $now functions |
| 3. Set Complex Data | Advanced Structures | Objects, arrays, nested properties |
| 4. Set Clean Output | Data Management | "Keep Only Set" for clean final outputs |
| 5a/5b. Conditional Sets | Branching Logic | Different data based on conditions |
| 6. Tutorial Summary | Data Aggregation | Combining and summarizing workflow data |

## 💡 Pro Tips

🚀 **Quick Wins**:
- Always check node outputs after execution
- Use sticky notes as your learning guide
- Experiment with small changes first
- Copy nodes to try variations

🛠️ **Advanced Techniques**:
- Use Keep Only Set for API responses
- Combine static and dynamic data in complex objects
- Leverage conditional paths for different user types
- Reference nested object properties with dot notation

🐛 **Troubleshooting**:
- If expressions don't work, check the {{ }} syntax
- Ensure field names match exactly (case-sensitive)
- Use the expression editor for complex logic
- Check that data types match your expectations

## 🎯 Next Steps After Tutorial

1. Create your own Set nodes in a new workflow
2. Practice with real data from APIs or databases
3. Build data transformation workflows for your specific use cases
4. Combine Set nodes with other n8n nodes like HTTP, Webhook, etc.
5. Explore advanced expressions using JavaScript functions

Congratulations! You now have the foundation to use Set nodes effectively in any n8n workflow. The Set node is truly the "Swiss Army knife" of n8n automation! 🛠️
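For comparison, here is a minimal sketch of the same transformations written as an n8n Code node. The field names (user_name, score) follow the tutorial's test data, and the mapping to Set-node expressions is noted in the comments:

```javascript
// Equivalent logic in an n8n Code node, for comparison with Set-node expressions.
const item = $input.first().json; // the first incoming item

return [{
  json: {
    full_name: `${item.user_name} Smith`, // same as ={{ $json.user_name }} Smith
    double_score: item.score * 2,         // same as ={{ $json.score * 2 }}
    created_at: new Date().toISOString(), // Set nodes can use {{ $now.toISO() }}
  },
}];
```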
by Sherlockes
## What does this template help with?

Save the data of activities recorded and stored in Strava to a Google Sheets document.

## How it works

We have a Google Sheets spreadsheet where each row represents a Strava activity with the date, reference, distance, time, and elevation. Periodically, the workflow checks the latest activities in our Strava account to see if any are missing from the spreadsheet and adds them to the list. All fields must be properly formatted according to how they are stored in the Google Sheets spreadsheet (see the formatting sketch below).

## Set up instructions

- Complete the Set up credentials step when you first open the workflow. You'll need a Google Sheets and a Strava account.
- In the 'activities' node, you must enter the name of the file and the sheet where you want to save the imported data.
- In the 'Strava' node, you must select the corresponding credential.
- You can adjust the format of dates, times, and distances according to your needs in the 'strava_last' node.

The rest of the information is available at sherblog.es

Template was created in n8n v1.72.1
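As a rough illustration of the kind of formatting the 'strava_last' node performs, here is a sketch. It assumes the standard Strava API activity fields (distance in meters, moving_time in seconds, start_date in ISO 8601); adapt the output formats to match your spreadsheet:

```javascript
// Sketch of formatting a Strava activity for the spreadsheet.
// Assumes standard Strava API fields: distance (meters),
// moving_time (seconds), start_date (ISO 8601 string).
const activity = $input.first().json;

const date = activity.start_date.slice(0, 10);            // "2025-01-31"
const distanceKm = (activity.distance / 1000).toFixed(2); // "42.20"
const hours = Math.floor(activity.moving_time / 3600);
const minutes = Math.floor((activity.moving_time % 3600) / 60);
const time = `${hours}:${String(minutes).padStart(2, '0')}`; // "3:05"

return [{
  json: {
    date,
    reference: activity.id,
    distance: distanceKm,
    time,
    elevation: activity.total_elevation_gain, // meters, as provided by Strava
  },
}];
```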
by Jonathan
You can still use an app in a workflow even if we don't have a node for it, or the existing node lacks the operation you need. With the HTTP Request node, it is possible to call any API endpoint and use the incoming data in your workflow.

Main use cases:

- Connect with apps and services that n8n doesn't have an integration for
- Web scraping

## How it works

This workflow can be divided into three branches, each serving a distinct purpose:

1. **Splitting into Items (HTTP Request - Get Mock Albums)**: The workflow initiates with a manual trigger (On clicking 'execute'). It performs an HTTP request to retrieve mock albums data from "https://jsonplaceholder.typicode.com/albums". The obtained data is split into items using the Item Lists node, facilitating easier management.

2. **Data Scraping (HTTP Request - Get Wikipedia Page and HTML Extract)**: Another branch of the workflow involves fetching a random Wikipedia page using an HTTP request to "https://en.wikipedia.org/wiki/Special:Random". The HTML Extract node extracts the article title from the fetched Wikipedia page.

3. **Handling Pagination**: The final branch deals with handling pagination for a GitHub API request. It sends an HTTP request to "https://api.github.com/users/that-one-tom/starred", with parameters like the page number and items per page dynamically set by the Set node. The workflow uses conditions (If - Are we finished?) to check if there are more pages to retrieve and increments the page number accordingly (Set - Increment Page). This process repeats until all pages are fetched, allowing for comprehensive data retrieval. A plain-code sketch of the same loop follows below.
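For readers who want the pagination branch's logic outside n8n, here is a minimal sketch in plain JavaScript. The GitHub endpoint and the page/per_page parameters come from the workflow; the stop condition mirrors the "Are we finished?" If node:

```javascript
// Minimal sketch of the pagination loop from the third branch:
// keep requesting pages until GitHub returns fewer items than per_page.
async function fetchAllStarred() {
  const perPage = 30;        // items per page, as set by the Set node
  let page = 1;              // incremented by "Set - Increment Page"
  const allItems = [];

  while (true) {
    const res = await fetch(
      `https://api.github.com/users/that-one-tom/starred?page=${page}&per_page=${perPage}`
    );
    const items = await res.json();
    allItems.push(...items);

    // "If - Are we finished?": a short page means there are no more pages.
    if (items.length < perPage) break;
    page += 1;
  }
  return allItems;
}
```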
by Aitor | 1Node
Elevate your Stripe workflows with an AI agent that intelligently, securely, and interactively handles essential Stripe data operations. Leveraging the Kimi K2 model via OpenRouter, this n8n template enables safe data retrieval, from fetching summarized financial insights to managing customer discounts, while strictly enforcing privacy, concise outputs, and operational boundaries.

## 🧾 Requirements

- Stripe: Active Stripe account and an API key with read and write access
- n8n: Deployed n8n instance (cloud or self-hosted)
- OpenRouter: Active OpenRouter account with credit and an API key from OpenRouter

## 🔗 Useful Links

- Stripe
- n8n Stripe Credentials Setup
- OpenRouter

## 🚦 Workflow Breakdown

1. **Trigger: User Request** - The workflow initiates when an authenticated user sends a message in the chat trigger.
2. **AI Agent (Kimi K2 via OpenRouter): Intent Analysis** - Determines whether the user wants to:
   - List customers, charges, or coupons
   - Retrieve the account's balance
   - Create a new coupon in Stripe
   It filters unsupported or unclear requests, explaining permissions or terminology as needed.
3. **Stripe Data Retrieval** - For data queries, the agent:
   - Only returns summarized, masked lists (e.g., the last 10 transactions/customers); see the masking sketch below
   - Automatically masks or truncates sensitive details, such as card numbers
   - Never exposes or logs confidential information
4. **Coupon Creation** - When a coupon creation is requested:
   - The AI agent collects coupon parameters (discount, expiration, restrictions)
   - It clearly summarizes the action and requires explicit user confirmation before proceeding
   - It creates the coupon upon confirmation and replies with only the public-safe coupon details

## 🛡️ Privacy & Security

- **No data storage:** All responses are ephemeral; sensitive Stripe data is never retained.
- **Strict minimization:** Outputs are tightly scoped; only partial identifiers are shown and only when necessary.
- **Retention rules enforced:** No logs, exports, or secondary storage of Stripe data.
- **Confirmation required:** Actions modifying Stripe (like coupon creation) always require the user to approve before execution.
- **Compliance-ready:** Aligned with Stripe and general data protection standards.

## ⏱️ Setup Steps

Setup time: 10–15 minutes

1. Add Stripe API credentials in n8n.
2. Add the OpenRouter API credentials in n8n and select your desired AI model to run the agent. In our template we selected Kimi K2 from Moonshot AI.

## ✅ Summary

This workflow template connects a privacy-prioritized AI agent (Kimi K2 via OpenRouter) with your Stripe account to enable:

- Fast, summarized access to customer, transaction, coupon, and balance data
- Secure, confirmed creation of discounts/coupons
- Complete adherence to authorization, privacy, and operational best practices

## 🙋‍♂️ Need Help?

Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
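As an illustration of the masking described above (the template's exact implementation may differ), here is a small helper that truncates Stripe identifiers and emails before they reach the chat output:

```javascript
// Illustrative masking helper - not the template's exact code.
// Truncates Stripe identifiers and emails before the agent includes
// them in a summarized chat response.
function maskCustomer(customer) {
  return {
    id: customer.id.slice(0, 8) + '…',                            // e.g. "cus_Nffr…"
    email: customer.email.replace(/^(.{2}).+(@.+)$/, '$1***$2'),  // "jo***@example.com"
    created: new Date(customer.created * 1000).toISOString().slice(0, 10),
  };
}

// Example: mask the last 10 customers returned by the Stripe node.
const summarized = $input.all().slice(0, 10)
  .map((item) => maskCustomer(item.json));
return summarized.map((json) => ({ json }));
```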
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

## How It Works

The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events (see the example request below). It then analyzes this data to create timeline reports showing what happened to VMs during specified periods, defaulting to the last 90 days unless the user specifies otherwise.

## Prerequisites

To use this template, you'll need:

- n8n instance (cloud or self-hosted)
- Azure subscription with virtual machines
- Microsoft Azure Monitor OAuth2 API credentials
- Google Gemini API credentials
- Proper Azure permissions to read VM data and activity logs

## Setup Instructions

1. Import the template into n8n.
2. Configure credentials:
   - Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs
   - Add Google Gemini API credentials
3. Update workflow parameters:
   - Open the "Set Common Variables" node
   - Replace <your azure subscription id here> with your actual Azure subscription ID
4. Configure triggers:
   - The chat trigger will automatically generate a webhook URL for receiving chat messages
   - No additional trigger configuration is needed
5. Test the setup to ensure it works.

## Security Considerations

Use the minimum required Azure permissions (Reader role on the subscription or resource groups). Store API credentials securely in the n8n credential store. The Azure Monitor API has rate limits, so avoid excessive concurrent requests. Chat sessions use session-based memory that persists during conversations but doesn't retain data between separate chat sessions.

## Extending the Template

You can add more Azure monitoring tools like disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
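For reference, the kind of call the metrics tools make might look like the following sketch against the public Azure Monitor metrics REST API. The subscription, resource group, and VM names are placeholders; "Percentage CPU" is a standard metric name for Microsoft.Compute/virtualMachines:

```javascript
// Sketch of an Azure Monitor metrics request for a VM's CPU usage.
// Placeholders: <subscription-id>, <resource-group>, <vm-name>, access token.
const accessToken = '<oauth2 access token>'; // from the Azure Monitor OAuth2 credential

const resourceId =
  '/subscriptions/<subscription-id>' +
  '/resourceGroups/<resource-group>' +
  '/providers/Microsoft.Compute/virtualMachines/<vm-name>';

const url =
  `https://management.azure.com${resourceId}` +
  '/providers/microsoft.insights/metrics' +
  '?api-version=2018-01-01' +
  '&metricnames=Percentage CPU' +
  '&timespan=2025-01-01T00:00:00Z/2025-01-02T00:00:00Z' +
  '&interval=PT1H&aggregation=Average';

const res = await fetch(encodeURI(url), {
  headers: { Authorization: `Bearer ${accessToken}` },
});
const metrics = await res.json(); // metric values grouped into hourly averages
```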
by Jimleuk
Note: This template only works for self-hosted n8n.

This n8n template demonstrates how to use the Langchain Code node to track token usage and cost for every LLM call. This is useful if your templates handle multiple clients or customers and you need a cheap and easy way to capture how much of your AI credits they are using.

## How it works

- In our mock AI service, we're offering a data conversion API to convert resume PDFs into JSON documents.
- A form trigger is used to allow for PDF upload, and the file is parsed using the Extract from File node.
- An Edit Fields node is used to capture additional variables to send to our log.
- Next, we use the Information Extractor node to organise the resume data into the given JSON schema.
- The LLM subnode attached to the Information Extractor is a custom one we've built using the Langchain Code node.
- With our custom LLM subnode, we're able to capture the usage metadata using lifecycle hooks (see the sketch below).
- We've also attached a Google Sheets tool to our LLM subnode, allowing us to send our usage metadata to a Google Sheet.
- Finally, we demonstrate how you can aggregate from the Google Sheet to understand how many AI tokens, and how much cost, your clients are liable for.

Check out the example Client Usage Log - https://docs.google.com/spreadsheets/d/1AR5mrxz2S6PjAKVM0edNG-YVEc6zKL7aUxHxVcffnlw/edit?usp=sharing

## How to use

- **SELF-HOSTED N8N ONLY** - the Langchain Code node is only available in the self-hosted version of n8n. It is not available in n8n cloud.
- The LLM subnode can only be attached to non-"AI agent" nodes: Basic LLM node, Information Extractor, Question & Answer Chain, Sentiment Analysis, Summarization Chain and Text Classifier.

## Requirements

- Self-hosted version of n8n
- OpenAI for LLM
- Google Sheets to store usage metadata

## Customising this template

- Bring the custom LLM subnode into your own templates! In many cases, it can be a drop-in replacement for the regular OpenAI subnode.
- Not using Google Sheets? Try other databases or an HTTP call to pipe into your CRM.
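As a rough sketch of the lifecycle-hook technique (the template's Langchain Code node differs in detail), LangChain's JS callbacks expose token usage after each LLM call via handleLLMEnd:

```javascript
// Sketch of capturing token usage with a LangChain JS lifecycle hook.
// The core technique: attach a callback and read usage after each call.
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  model: 'gpt-4o-mini',
  callbacks: [
    {
      handleLLMEnd(output) {
        // For OpenAI models, llmOutput carries the usage metadata.
        const usage = output.llmOutput?.tokenUsage ?? {};
        console.log({
          promptTokens: usage.promptTokens,
          completionTokens: usage.completionTokens,
          totalTokens: usage.totalTokens,
          // From here, append a row to a Google Sheet or any other log.
        });
      },
    },
  ],
});
```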
by Ranjan Dailata
## Who is this for?

This workflow is designed for HR professionals, employer branding teams, talent acquisition strategists, market researchers, and business intelligence analysts who want to monitor, understand, and act upon employee sentiment and company perception on Glassdoor. It's ideal for organizations that value real-time feedback, track employer brand perception, or need summarized insights for leadership reporting without sifting through thousands of raw reviews.

## What problem is this workflow solving?

Manually reviewing and analyzing Glassdoor reviews is tedious, subjective, and not scalable, especially for larger companies or those with many subsidiaries. This workflow:

- Automates review collection by making a Glassdoor company request via the Bright Data Web Scraper API
- Uses Google Gemini to summarize the content
- Sends an actionable summary to HR dashboards, leadership teams, or alert systems via the webhook notification

## What this workflow does

1. Makes an HTTP request to Glassdoor via the Bright Data Web Scraper API.
2. Polls Bright Data for the completion of the Glassdoor request (see the polling sketch below).
3. Downloads the Glassdoor response when a new snapshot is ready.
4. Sends the prompt to Google Gemini for summarization.
5. Delivers the summarized insights (strengths, weaknesses, sentiment, patterns) to a configured webhook or dashboard endpoint.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Set up a webhook or endpoint to receive the summary (e.g., Slack, Notion, or a custom HR dashboard).

## How to customize this workflow to your needs

- Change the summary focus by updating the summarization methods and prompts in the Summarization of Glassdoor Response node to extract specific insights:
  - Cultural feedback
  - Leadership issues
  - Compensation comments
  - Exit motivation
- Update the HTTP Request to Glassdoor node with the specific Glassdoor company information that you are looking for.
- Format the output to produce a customized summary in Markdown or HTML for rich delivery.
- Integrate with HR systems: BambooHR, Workday, SAP SuccessFactors via API, Google Sheets, or Airtable.
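Here is a rough sketch of the poll-then-download pattern the workflow uses. The exact Bright Data endpoints and response fields depend on your dataset configuration, so treat the URLs and the status field below as assumptions to verify against the Bright Data documentation:

```javascript
// Hedged sketch of Bright Data's poll-then-download pattern.
// Endpoint paths and the `status` field are assumptions - verify them
// against your Bright Data dataset's API documentation.
const headers = { Authorization: 'Bearer <your Bright Data token>' };

async function waitForSnapshot(snapshotId) {
  while (true) {
    const res = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
      { headers }
    );
    const progress = await res.json();
    if (progress.status === 'ready') break;          // snapshot finished building
    await new Promise((r) => setTimeout(r, 30_000)); // wait 30s before re-polling
  }
  // Download the finished snapshot (the Glassdoor reviews payload).
  const data = await fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`,
    { headers }
  );
  return data.json();
}
```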
by ikbendion
Reddit Poster to Discord

This workflow checks Reddit every 15 minutes for new posts and sends selected posts to a Discord channel via webhook.

## Flow Overview

1. **Schedule Trigger** - Runs every 15 minutes.
2. **Fetch Latest Posts** - Retrieves up to 3 new posts from any subreddit.
3. **Filter Posts** - Skips moderator or announcement posts based on author ID.
4. **Fetch Full Post Data** - Gets full details for the remaining post.
5. **Extract Image URL** - Parses the post to extract a direct image link.
6. **Send to Discord** - Sends the post title, image, and link to a Discord webhook (see the payload sketch below).

## Setup Notes

- Create a Reddit app and connect credentials in n8n.
- Add your subreddit name to both Reddit nodes.
- Connect a Discord webhook for posting.
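The Discord side of the final step boils down to a single webhook POST. A minimal sketch using Discord's standard webhook payload format (content plus an embed with title, url, and image); the sample values are illustrative:

```javascript
// Minimal sketch of the Discord webhook call made by "Send to Discord".
const webhookUrl = 'https://discord.com/api/webhooks/<id>/<token>'; // your webhook

await fetch(webhookUrl, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    content: 'New post from the subreddit:',
    embeds: [
      {
        title: 'Post title goes here',                   // from the Reddit post
        url: 'https://reddit.com/r/example/comments/abc',// link back to the post
        image: { url: 'https://i.redd.it/example.jpg' }, // the extracted image URL
      },
    ],
  }),
});
```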
by Angel Menendez
## Who is this for?

This workflow is perfect for HR teams, recruiters, and hiring platforms that need to automate the extraction of key candidate details, like name, email, skills, and education, from resume files submitted in various formats.

## What problem does this solve?

Manually reviewing and extracting structured data from resumes is time-consuming and error-prone. This automation eliminates that bottleneck, standardizing candidate data for seamless integration into CRMs, applicant tracking systems, or Google Sheets.

## What this workflow does

This n8n template listens for uploaded resume files, detects their format (PDF, DOC, TXT, CSV, etc.), and automatically extracts the raw text using n8n's built-in file extraction tools. The extracted text is then parsed using an OpenAI-powered agent that returns structured fields such as:

- Full Name
- Email Address
- Skill Keywords
- Education Details

Optionally, you can push the structured output to Google Sheets (node included, currently disabled). A sketch of the expected output shape follows below.

## Setup

1. Clone this workflow into your n8n instance.
2. Enable the When chat message received trigger if using n8n chat.
3. Provide your OpenAI credentials and enable the LangChain Agent node.
4. (Optional) Connect Google Sheets by authenticating with your Google account and filling in your target document and sheet.

Watch the setup and demo video here: 🎥 https://youtu.be/2SUPiNmLWdA

## How to customize

- Modify the OpenAI system message to extract different fields (e.g., phone number, LinkedIn).
- Replace the Google Sheets node with a webhook to push results to your ATS.
- Add filters to limit accepted file types or max file size.

> ⚠️ This template is designed to be secure. It uses credentials stored in the n8n credential manager—no hardcoded secrets required.
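To make the agent's output concrete, here is an illustrative example of the structured result for the four fields listed above. The exact keys depend on the JSON schema you describe in the OpenAI system message, so treat these names as placeholders:

```javascript
// Illustrative shape of the agent's structured output for one resume.
// Key names are placeholders - they follow whatever schema you define
// in the OpenAI system message.
const exampleCandidate = {
  fullName: 'Jane Doe',
  email: 'jane.doe@example.com',
  skills: ['Python', 'SQL', 'Project Management'],
  education: [
    { degree: 'B.Sc. Computer Science', school: 'Example University', year: 2019 },
  ],
};

// Each field can then be mapped to a Google Sheets column or pushed
// to your ATS via a webhook, as described in the customization notes.
```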