by Immanuel
**Automated Raw Materials Inventory Management with Google Sheets, Supabase, and Gmail using n8n Webhooks**

**Description**

**What Problem Does This Solve? 🛠️**
This workflow automates raw materials inventory management for businesses, eliminating manual stock updates, delayed material issue approvals, and missed low stock alerts. It ensures real-time stock tracking, streamlined approvals, and timely notifications.
Target audience: Small to medium-sized businesses, inventory managers, and n8n users familiar with Google Sheets, Supabase, and Gmail integrations.

**What Does It Do? 🌟**
- Receives raw material data and issue requests via form submissions.
- Updates stock levels in Google Sheets and Supabase.
- Manages approvals for material issue requests with email notifications.
- Detects low stock levels and sends alerts via Gmail.
- Maintains data consistency across Google Sheets and Supabase.

**Key Features**
- Real-time stock updates from form submissions.
- Automated approval process for material issuance.
- Low stock detection with Gmail notifications.
- Dual storage in Google Sheets and Supabase for redundancy.
- Error handling for robust data validation.

**Setup Instructions**

*Prerequisites*
- **n8n Instance**: Self-hosted or cloud n8n instance.
- **API Credentials**:
  - Google Sheets API: Credentials from Google Cloud Console with Sheets scope, stored in n8n credentials.
  - Supabase API: API key and URL from the Supabase project, stored in n8n credentials (do not hardcode them in nodes).
  - Gmail API: Credentials from Google Cloud Console with Gmail scope.
- **Forms**: A form (e.g., Google Form) to submit raw material receipts and issue requests, configured to send data to n8n webhooks.

*Installation Steps*
1. **Import the Workflow**: Copy the workflow JSON from the "Template Code" section (to be provided) and import it into n8n via "Import from File" or "Import from URL".
2. **Configure Credentials**: Add API credentials in n8n's Credentials section for Google Sheets, Supabase, and Gmail, then assign them to the respective nodes. For example:
   - In the Append Raw Materials node, use Google Sheets credentials: {{ $credentials.GoogleSheets }}.
   - In the Current Stock Update node, use Supabase credentials: {{ $credentials.Supabase }}.
   - In the Send Low Stock Email Alert node, use Gmail credentials.
3. **Set Up Nodes**:
   - Webhook nodes (Receive Raw Materials Webhook, Receive Material Issue Webhook): Configure the webhook URLs and link them to your form submissions.
   - Approval email (Send Approval Request): Customize the HTML email template if needed.
   - Low stock alerts (Send Low Stock Email Alert, Send Low Stock Email After Issue): Configure recipient email addresses.
4. **Test the Workflow**: Submit a test form for a raw material receipt and verify the stock updates in Google Sheets/Supabase. Then submit a material issue request, approve/reject it, and confirm the stock updates and notifications.

**How It Works**

*High-Level Steps*
1. **Receive Raw Materials**: Processes form submissions for raw material receipts.
2. **Update Stock**: Updates stock levels in Google Sheets and Supabase.
3. **Handle Issue Requests**: Processes material issue requests via forms.
4. **Manage Approvals**: Sends approval requests and processes decisions.
5. **Monitor Stock Levels**: Detects low stock and sends Gmail alerts.

*Detailed Descriptions*
Detailed node descriptions are available in the sticky notes within the workflow screenshot (to be provided). Below is a summary of key actions, followed by a short code sketch of the core stock calculation at the end of this entry.

*Node Names and Actions*

Raw Materials Receiving and Stock Update
- **Receive Raw Materials Webhook**: Receives raw material data from a form submission.
- **Standardize Raw Material Data**: Maps form data into a consistent format.
- **Calculate Total Price**: Computes Total Price (Quantity Received × Unit Price).
- **Append Raw Materials**: Records the receipt in Google Sheets.
- **Check Quantity Received Validity**: Ensures Quantity Received is valid.
- **Lookup Existing Stock**: Retrieves current stock for the Product ID.
- **Check If Product Exists**: Branches based on Product ID existence.
- **Calculate Updated Current Stock**: Adds Quantity Received to stock (True branch).
- **Update Current Stock**: Updates stock in Google Sheets (True branch).
- **Retrieve Updated Stock for Check**: Retrieves updated stock for the low stock check.
- **Detect Low Stock Level**: Flags if stock is below the minimum.
- **Trigger Low Stock Alert**: Triggers an email if stock is low.
- **Send Low Stock Email Alert**: Sends the low stock alert via Gmail.
- **Add New Product to Stock**: Adds a new product to stock (False branch).
- **Current Stock Update**: Updates the Supabase Current Stock table.
- **New Row Current Stock**: Inserts the new product into Supabase.
- **Search Current Stock**: Retrieves Supabase stock records.
- **New Record Raw**: Inserts the raw material record into Supabase.
- **Format Response**: Removes duplicates from the Supabase response.
- **Combine Stock Update Branches**: Merges the branches for existing/new products.

Material Issue Request and Approval
- **Receive Material Issue Webhook**: Receives the issue request from a form submission.
- **Standardize Data**: Normalizes request data and adds the Approval Link.
- **Validate Issue Request Data**: Ensures Quantity Requested is valid.
- **Verify Requested Quantity**: Validates the Product ID and Submission ID.
- **Append Material Request**: Records the request in Google Sheets.
- **Check Available Stock for Issue**: Retrieves current stock for the request.
- **Prepare Approval**: Checks stock sufficiency for the request.
- **Send Approval Request**: Emails the approver with Approve/Reject options.
- **Receive Approval Response**: Captures the approver's decision via webhook.
- **Format Approval Response**: Processes approval data with the Approval Date.
- **Verify Approval Data**: Validates the approval response.
- **Retrieve Issue Request Details**: Retrieves the original request from Google Sheets.
- **Process Approval Decision**: Branches based on the approval action.
- **Get Stock for Issue Update**: Retrieves stock before the update (Approved).
- **Deduct Issued Stock**: Reduces stock by the Approved Quantity (Approved).
- **Update Stock After Issue**: Updates stock in Google Sheets (Approved).
- **Retrieve Stock After Issue**: Retrieves updated stock for the low stock check.
- **Detect Low Stock After Issue**: Flags low stock after issuance.
- **Trigger Low Stock Alert After Issue**: Triggers an email if stock is low.
- **Send Low Stock Email After Issue**: Sends the low stock alert via Gmail.
- **Update Issue Request Status**: Updates the request status (Approved/Rejected).
- **Combine Stock Lookup Results**: Merges the stock lookup branches.
- **Create Record Issue**: Inserts the issue request into Supabase.
- **Search Stock by Product ID**: Retrieves Supabase stock records.
- **Issues Table Update**: Updates the Supabase Materials Issued table.
- **Update Current Stock**: Updates Supabase stock after issuance.
- **Combine Issue Lookup Branches**: Merges the issue lookup branches.
- **Search Issue by Submission ID**: Retrieves Supabase issue records.

**Customization Tips**
- **Expand Storage Options**: Add nodes to store data in other databases (e.g., Airtable) alongside Google Sheets and Supabase.
- **Modify Approval Email**: Update the Send Approval Request node to customize the HTML email template (e.g., adjust styling or add branding).
- **Alternative Notifications**: Add nodes to send low stock alerts via other platforms (e.g., Slack or Telegram).
- **Adjust Low Stock Threshold**: Modify the Detect Low Stock Level node to change the Minimum Stock Level (default: 50).
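For readers who want to see the core calculation outside of n8n's node UI, here is a minimal sketch of the total-price, stock-update, and low-stock logic described above. Field names (Quantity Received, Unit Price, Minimum Stock Level, the default threshold of 50) follow the description; everything else is an illustrative assumption, not the template's actual code.

```typescript
// Illustrative sketch of the stock-update and low-stock logic described above.
// The actual template implements this across Google Sheets/Supabase nodes
// rather than in a single function.

interface ReceiptItem {
  productId: string;
  quantityReceived: number;
  unitPrice: number;
}

interface StockRecord {
  productId: string;
  currentStock: number;
  minimumStockLevel: number; // the template's default threshold is 50
}

function processReceipt(receipt: ReceiptItem, stock: StockRecord | undefined) {
  // "Check Quantity Received Validity"
  if (!Number.isFinite(receipt.quantityReceived) || receipt.quantityReceived <= 0) {
    throw new Error("Invalid Quantity Received");
  }

  // "Calculate Total Price"
  const totalPrice = receipt.quantityReceived * receipt.unitPrice;

  // "Check If Product Exists": update the existing row or add a new product.
  const updatedStock: StockRecord = stock
    ? { ...stock, currentStock: stock.currentStock + receipt.quantityReceived }
    : { productId: receipt.productId, currentStock: receipt.quantityReceived, minimumStockLevel: 50 };

  // "Detect Low Stock Level": flag when stock falls below the minimum.
  const lowStock = updatedStock.currentStock < updatedStock.minimumStockLevel;

  return { totalPrice, updatedStock, lowStock };
}
```

Issuing material follows the same pattern in reverse: deduct the approved quantity, then re-run the low-stock check before updating Google Sheets and Supabase.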
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**
The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job–resume matching using real-time data from LinkedIn and job boards.

This workflow is tailored for:
- **HR Tech Founders** - Building next-gen recruiting products
- **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation
- **Job Boards & Portals** - Enriching user experience with AI-driven job recommendations
- **Career Coaches & Resume Writers** - Offering personalized job fit analysis
- **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data

**What problem is this workflow solving?**
Manually matching a resume to a job description is time-consuming, biased, and inefficient. Additionally, accessing live job postings and candidate profiles requires overcoming web scraping limitations. This workflow solves:
- Automated LinkedIn profile and job post data extraction using Bright Data MCP infrastructure
- Semantic matching between job requirements and the candidate's resume using OpenAI GPT-4o mini
- Pagination handling for high-volume job data
- End-to-end automation from scraping to delivery via webhook, persisting the matched-job response to disk

**What this workflow does**

Bright Data MCP for Job Data Extraction
- Uses Bright Data MCP clients to extract multiple job listings (supports pagination)
- Pulls job data from LinkedIn with the pre-defined filtering criteria

OpenAI GPT-4o mini LLM Matching Engine
- Extracts the paginated job data and textual job description information from the scraped listings via the Bright Data MCP scrape_as_html tool
- The AI Job Matching node compares the job description and the candidate's resume to generate match scores with insights

Data Delivery
- Sends the final match report to a webhook notification endpoint
- Persists the AI-matched job response to disk

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have the Google Gemini API Key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**
- Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
- Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel.
- In n8n, configure the OpenAI account credentials.
- In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>.
- Update the Set input fields for the candidate resume, keywords and other filtering criteria.
- Update the Webhook HTTP Request node with the webhook endpoint of your choice.
- Update the file name and path to persist on disk.

**How to customize this workflow to your needs**

Target Different Job Boards
- Set the input fields with sites like Indeed, ZipRecruiter, or Monster

Customize Matching Criteria
- Adjust the prompt inside the AI Job Match node
- Include scoring metrics like skills match %, experience relevance, or cultural fit

Automate Scheduling
- Use a Cron node to periodically check for new jobs matching a profile
- Set triggers based on webhook or input form submissions

Output Customization
- Add Markdown/PDF formatting for report summaries
- Extend with Google Sheets export for internal analytics

Enhance Data Security
- Mask personal info before sending to external endpoints
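The matching step itself is a prompt to the LLM. Below is a rough sketch of what the comparison might look like if written in code rather than configured inside the n8n AI Job Matching node. The prompt wording, the `matchScore` output shape, and the field names are illustrative assumptions, not the template's actual prompt.

```typescript
// Illustrative sketch of a resume-vs-job-description comparison prompt.
// The actual template performs this inside the n8n "AI Job Matching" node.

interface MatchResult {
  matchScore: number;  // 0-100 overall fit (assumed output shape)
  strengths: string[]; // where the resume meets the requirements
  gaps: string[];      // requirements not evidenced in the resume
}

function buildMatchPrompt(resume: string, jobDescription: string): string {
  return [
    "Compare the candidate resume to the job description.",
    "Return JSON with fields: matchScore (0-100), strengths, gaps.",
    `Job description:\n${jobDescription}`,
    `Candidate resume:\n${resume}`,
  ].join("\n\n");
}

// The prompt is sent to the chat model configured in n8n (e.g. GPT-4o mini)
// and the JSON reply is parsed into the report that goes to the webhook.
function parseMatchResult(jsonReply: string): MatchResult {
  return JSON.parse(jsonReply) as MatchResult;
}
```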
by Keith Rumjahn
**Who's this for?**
- Anyone who wants to improve the SEO of their website
- Umami users who want insights on how to improve their site
- SEO managers who need to generate reports weekly

**Case study**
- Watch the YouTube tutorial here
- Get my SEO A.I. agent system here
- You can read more about how this works here.

**How it works**
- This workflow calls the Umami API to get data
- It then sends the data to A.I. for analysis
- It saves the data and analysis to Baserow

**How to use this**
- Input your Umami credentials
- Input your website property ID
- Input your Openrouter.ai credentials
- Input your Baserow credentials
- You will need to create a Baserow database with the columns: Date, Summary, Top Pages, Blog (name of your blog).

**Future development**
- Use this as a template. There are a lot more Umami stats you can pull from the API.
- Change the A.I. prompt to give even more detailed analysis.

Created by Rumjahn
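For orientation, here is a rough sketch of the kind of Umami API call the workflow's HTTP Request node makes. It assumes a self-hosted Umami v2 instance with Bearer-token auth; the base URL, website ID, and date range are placeholders, so verify the endpoint path against your own Umami version.

```typescript
// Sketch of fetching website stats from the Umami API (assumed v2 endpoints).
// The template does this with an n8n HTTP Request node; values are placeholders.

const UMAMI_BASE = "https://umami.example.com";   // your Umami instance
const WEBSITE_ID = "your-website-id";             // the property ID used in the workflow
const API_TOKEN = process.env.UMAMI_API_TOKEN ?? "";

async function fetchWeeklyStats() {
  const endAt = Date.now();
  const startAt = endAt - 7 * 24 * 60 * 60 * 1000; // last 7 days, in Unix ms

  const url = `${UMAMI_BASE}/api/websites/${WEBSITE_ID}/stats?startAt=${startAt}&endAt=${endAt}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Umami API error: ${res.status}`);

  // Pageviews, visitors, bounces, etc. - this JSON is what gets sent to the AI step.
  return res.json();
}
```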
by Niklas Hatje
**Use Case**
This workflow retrieves all members of a Discord server or guild who have a specific role. Due to limitations in the Discord API, it only returns a limited number of users per call. To overcome this, the workflow uses Google Sheets to track which user we received last, so it can return all members (of a certain role) from a Discord server in batches of 100 members.

**Setup**
- Add your Google Sheets and Discord credentials.
- Create a Google Sheets document that contains ID as a column. We're using this to remember which member we received last.
- Edit the fields in the setup node "Setup: Edit this to get started". You can read up on how to get the Discord IDs via this link.
- Link to your Discord server in the Discord nodes.
- Activate the workflow.
- Call the production webhook URL in your browser.

**Requirements**
- Admin rights in the Discord server and access to the Discord developer portal
- Google Sheets
- Minimum n8n version: 1.28.0

**Potential Use cases**
- Writing a direct message to all members of a certain role
- Analysing user growth on Discord regularly
- Analysing role distributions on Discord regularly
- Saving new members in a Discord ...

**Keywords**
Discord API, Getting all members from Discord via API, Google Sheets and Discord automation, How to get all Discord members via API
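The batching trick described above — remembering the last member ID in Google Sheets and passing it back to Discord — corresponds to the `after` cursor on Discord's list-guild-members endpoint. A minimal sketch of that pagination step is below; the bot token, guild ID, and role ID are placeholders, and in the template the "last ID" is read from and written back to the sheet rather than kept in memory.

```typescript
// Sketch of cursor-based pagination over Discord guild members, filtered by role.
// GET /guilds/{guild.id}/members supports `limit` and `after` (highest user ID seen
// so far); the template pages through in batches of 100. Placeholders below are assumptions.

const BOT_TOKEN = process.env.DISCORD_BOT_TOKEN ?? "";
const GUILD_ID = "123456789012345678";
const ROLE_ID = "234567890123456789";

async function fetchMembersWithRole(lastSeenId = "0") {
  const url = `https://discord.com/api/v10/guilds/${GUILD_ID}/members?limit=100&after=${lastSeenId}`;
  const res = await fetch(url, { headers: { Authorization: `Bot ${BOT_TOKEN}` } });
  if (!res.ok) throw new Error(`Discord API error: ${res.status}`);

  const members: Array<{ user: { id: string }; roles: string[] }> = await res.json();
  const withRole = members.filter((m) => m.roles.includes(ROLE_ID));

  // The workflow stores the ID of the last member returned in Google Sheets,
  // so the next batch continues from this cursor.
  const nextCursor = members.length ? members[members.length - 1].user.id : lastSeenId;
  return { withRole, nextCursor, done: members.length < 100 };
}
```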
by Marvin Wu
**Who is this for?**
This workflow is designed for n8n users and developers who need to automate the documentation process of their n8n workflows. It's particularly useful for teams looking to streamline their documentation efforts and ensure consistency across their workflow documentation.

**What problem is this workflow solving? / Use case**
The primary problem this workflow addresses is the manual and time-consuming process of creating documentation for n8n workflows. It automates the generation of concise, clear, and comprehensive documentation directly from the workflow's JSON, making it easier for both technical and non-technical users to understand what the workflow does and how it operates.

**What this workflow does**
Upon receiving a form submission with the workflow title and JSON, this workflow automatically generates documentation that includes:
- A brief introduction to the workflow.
- The trigger mechanism (webhook URLs for test and production environments, or cron schedules).
- Setup requirements, including necessary credentials and external dependencies.

**Setup**
- Credentials Setup: Ensure you have OpenAI API credentials configured in n8n to use the GPT model for generating documentation text.
- Form Submission: Users must submit the form with the workflow title and JSON. The form is accessible via:
  - Test URL: domain/form-test/{webhookId}
  - Production URL: domain/form/{webhookId}

**How to customize this workflow to your needs**
- **Modify Trigger URLs**: Adjust the webhook or form URLs based on your domain and specific n8n setup.
- **Customize Documentation Template**: Edit the OpenAI node's prompt to change the structure or details of the generated documentation.
- **Extend Functionality**: Add nodes to integrate with other systems (e.g., automatically publishing the documentation to a wiki or sending it via email).

This workflow simplifies the documentation process, making it accessible and manageable for teams of all sizes and technical abilities. By automating documentation, it ensures that all workflows are properly documented, enhancing understanding and efficiency within teams.
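As a rough illustration of the customizable documentation template, here is a sketch of the kind of prompt the OpenAI node might assemble from the form input. The exact wording lives in the node itself; this version is an assumption, not the template's actual prompt.

```typescript
// Sketch of building a documentation prompt from the form fields (title + workflow JSON).
// The structure mirrors the three outputs listed above.

function buildDocPrompt(title: string, workflowJson: string): string {
  return [
    `You are documenting an n8n workflow titled "${title}".`,
    "From the workflow JSON below, write:",
    "(1) a brief introduction to the workflow,",
    "(2) the trigger mechanism (webhook URLs or cron schedule),",
    "(3) setup requirements (credentials and external dependencies).",
    "Workflow JSON:",
    workflowJson,
  ].join("\n");
}
```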
by Joey D’Anna
This template will create a nightly backup of all your n8n workflows to a Dropbox folder. Each night, the previous night's backups are moved into an "old" folder and renamed with the date they were taken. Backups over a specified age are deleted (this is disabled by default for safety until you manually enable and verify it with your own setup).

**Prerequisites**
- Dropbox account and credentials
- A destination folder for backups

**Setup**
- Update all Dropbox nodes with your credential
- Edit the Schedule Trigger node with the desired time to run the backup
- Edit the DESTINATION FOLDER node to specify the path in Dropbox to upload to. This should be a folder and include the trailing /

**If you want to automatically purge old backups**
- Edit the PURGE DAYS node to specify the age to purge
- Enable the PURGE DAYS node and the 3 subsequent nodes
- Enable the workflow to run on the specified schedule
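The purge step boils down to a date comparison. A minimal sketch of that logic is below; the PURGE_DAYS value and the dated file name are assumptions for illustration — the template's own nodes define the real threshold and naming.

```typescript
// Sketch of the "purge backups older than N days" check.
// In the template this is handled by the PURGE DAYS node and its successors.

const PURGE_DAYS = 30; // assumed example; the real value is set in the PURGE DAYS node

function shouldPurge(backupDate: Date, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - backupDate.getTime();
  return ageMs > PURGE_DAYS * 24 * 60 * 60 * 1000;
}

// Example: a backup renamed with the date it was taken, e.g. "2024-01-05-workflows.json"
const taken = new Date("2024-01-05");
console.log(shouldPurge(taken)); // true once the backup is older than PURGE_DAYS
```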
by Oskar
This workflow uses AI to analyze the content of every new message in Gmail and then assigns specific labels according to the context of the email.

The default configuration of the workflow includes 3 labels:
- „Partnership” - email about sponsored content or cooperation,
- „Inquiry” - email about products, services,
- „Notification” - email that doesn't require a response.

You can add or edit labels and descriptions according to your use case.

🎬 See this workflow in action in my YouTube video about automating Gmail.

**How it works?**
- The Gmail trigger polls every minute for new messages (you can change the trigger interval according to your needs).
- The email content is then downloaded and forwarded to an AI chain.
- 💡 The prompt in the AI chain node includes instructions for applying labels according to the email content - change the label names and instructions to fit your use case.
- Next, the workflow retrieves all labels from the Gmail account and compares them with the label names returned from the AI chain.
- Label IDs are aggregated and applied to the processed email messages.
- ⚠️ Label names in the Gmail account and in the workflow (prompt, JSON schema) must be the same.

**Set up steps**
- Set credentials for Gmail and OpenAI.
- Add labels to your Gmail account (e.g. „Partnership”, „Inquiry” and „Notification”).
- Change the prompt in the AI chain node (update the list of label names and instructions).
- Change the list of available labels in the JSON schema in the parser node.
- Optionally: change the polling interval in the Gmail trigger (by default the interval is 1 minute).

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
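The step that compares the AI output with the account's labels is essentially a name-to-ID lookup, which is also why the names must match exactly. A minimal sketch follows; the label list shape reflects what Gmail's labels listing returns (objects with `id` and `name`), while the variable names are illustrative.

```typescript
// Sketch of mapping label names returned by the AI chain onto Gmail label IDs.
// Gmail's label list returns objects with at least { id, name }.

interface GmailLabel {
  id: string;
  name: string;
}

function resolveLabelIds(aiLabels: string[], gmailLabels: GmailLabel[]): string[] {
  // Names must match exactly, hence the warning that label names in Gmail
  // and in the prompt/JSON schema have to be identical.
  return gmailLabels
    .filter((label) => aiLabels.includes(label.name))
    .map((label) => label.id);
}

// Example
const ids = resolveLabelIds(
  ["Inquiry"],
  [{ id: "Label_12", name: "Inquiry" }, { id: "Label_13", name: "Partnership" }],
);
// ids -> ["Label_12"], which is then applied to the message via the Gmail node
```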
by Mario
**Purpose**
This ensures that executions of scheduled workflows do not overlap when they take longer than expected.

**How it works**
- This is a separate workflow which monitors the execution of the main workflow.
- It stores a flag in Redis (a key dynamically named after the workflow ID) which indicates whether the main workflow is running or idle.
- It only calls the main workflow if the last execution has finished.

**Setup**
- Update the credentials to suit your Redis instance
- Replace the Schedule Trigger of your main workflow with an Execute Workflow Trigger
- Copy the workflow ID from the URL
- Paste the workflow ID into the Execute Workflow node of this workflow
- Configure the Schedule Trigger node
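Conceptually this is a simple Redis-based lock keyed by the workflow ID. The sketch below uses the ioredis client and a key prefix of my own choosing as an illustration; the template achieves the same thing with n8n's Redis node, so treat the key name and expiry as assumptions.

```typescript
// Sketch of the "only run if the last execution finished" guard using Redis.
// Assumes the ioredis package; the key is derived from the workflow ID as described above.
import Redis from "ioredis";

const redis = new Redis(); // connects to localhost:6379 by default

async function runIfIdle(workflowId: string, runMainWorkflow: () => Promise<void>) {
  const key = `workflow-lock:${workflowId}`; // assumed key naming

  // NX = only set if the key does not exist, i.e. the previous run has finished.
  // The expiry guards against a crashed run leaving the flag behind forever.
  const acquired = await redis.set(key, "running", "EX", 3600, "NX");
  if (!acquired) return; // still running -> skip this scheduled tick

  try {
    await runMainWorkflow();
  } finally {
    await redis.del(key); // mark the workflow as idle again
  }
}
```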
by algopi.io
**Who is this template for?**
This workflow template is designed for Marketing and pre-Sales teams that want to capture prospects from a form tool like Tally, record the data in the well-known open-source CRM (SuiteCRM), synchronize the contact in Brevo while linking the ID from the CRM, and then get notified in NextCloud.
Bonus: validate the email with ++CaptainVerify++ and notify in NextCloud depending on the response.

**How it works**
- For each submission in the form, a webhook is triggered.
- The email is checked with CaptainVerify.
- Depending on the response, if it is OK, a Lead is created in SuiteCRM; otherwise a message is sent to your selected discussion.
- Once the lead has been created, a contact is created in Brevo (for future campaigns) and linked to the lead_id from the CRM in a dedicated field.
- Finally, a message in your selected NextCloud discussion informs you about the lead.

**Set up instructions**
- Complete the Set up credentials step when you first open the workflow. You'll need a CaptainVerify account (API key), a dedicated SuiteCRM user with OAuth, a Brevo account (API key) and a Nextcloud account.
- Set up the webhook in the form tool of your choice (why not Tally?).
- Set up each node using the explanations in the sticky notes.
- Enjoy!

Template was created in n8n v1.44.1
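The decision point in this flow is the verification result. Below is a minimal sketch of that branch; the `result` values and the helper functions are hypothetical stand-ins for the CaptainVerify, SuiteCRM, Brevo, and NextCloud nodes — check CaptainVerify's API documentation for the exact response fields.

```typescript
// Sketch of the branch after the CaptainVerify check.
// The "valid" result value is an assumption; consult the API docs for the real field.

type VerifyResult = { result: string }; // shape assumed for illustration

async function handleSubmission(email: string, verification: VerifyResult) {
  if (verification.result === "valid") {
    const leadId = await createSuiteCrmLead(email);   // hypothetical helper
    await createBrevoContact(email, leadId);          // stores lead_id in a dedicated field
    await notifyNextcloud(`New lead created for ${email}`);
  } else {
    await notifyNextcloud(`Email failed verification: ${email}`);
  }
}

// Hypothetical helpers standing in for the SuiteCRM, Brevo, and NextCloud nodes.
async function createSuiteCrmLead(email: string): Promise<string> { return "lead-id"; }
async function createBrevoContact(email: string, leadId: string): Promise<void> {}
async function notifyNextcloud(message: string): Promise<void> { console.log(message); }
```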
by Jimleuk
This n8n workflow is designed to work on the local network and assists with reconciling downloaded bank statements against internal tenant records, quickly highlighting payment issues such as missed or late payments or incorrect amounts. The assistant can then generate a report that flags these issues for attention and ensures remedial action is taken.

**How it works**
- The workflow monitors a local network drive to watch for new bank statements that are added.
- Each bank statement is imported into the n8n workflow, its contents extracted and sent to the AI Agent.
- The AI Agent analyses the line items to identify the dates and any incoming payments from tenants.
- The AI Agent then uses a locally-hosted Excel ("XLSX") spreadsheet to get both tenant records and property records. From this data, it can determine for each active tenant when payment is due, the amount and the tenancy duration.
- Comparing this to the bank statement, the AI Agent can now report on where tenants have missed their payments, made late payments or are paying incorrect amounts.
- The final report is generated and logged in the same XLSX for a human to check and action.

**Requirements**
- A self-hosted version of n8n is required.
- OpenAI account for the AI model

**Customising this workflow**
- If your organisation has a Slack or Teams account, consider sending reports to a channel for increased productivity. Email may be a good choice too.
- Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/1YRKjfakpInm23F_g8AHupKPBN-fphWgK/view?usp=sharing
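The comparison the agent performs can be pictured as a join between expected and received payments. A rough sketch is below; the record shapes and the "late after N days" tolerance are assumptions for illustration, since in the template the AI agent reasons over the spreadsheet and statement directly.

```typescript
// Sketch of reconciling expected tenant payments against bank statement lines.
// Field names and the grace period are illustrative assumptions.

interface ExpectedPayment { tenant: string; dueDate: Date; amount: number }
interface StatementLine { reference: string; date: Date; amount: number }

type Status = "ok" | "missed" | "late" | "incorrect amount";

function reconcile(expected: ExpectedPayment[], lines: StatementLine[], graceDays = 3) {
  return expected.map((p) => {
    // Match a statement line to a tenant by its payment reference (assumed convention).
    const match = lines.find((l) => l.reference.includes(p.tenant));
    let status: Status;
    if (!match) status = "missed";
    else if (match.amount !== p.amount) status = "incorrect amount";
    else if ((match.date.getTime() - p.dueDate.getTime()) / 86_400_000 > graceDays) status = "late";
    else status = "ok";
    return { tenant: p.tenant, status };
  });
}
```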
by Hubschrauber
**Overview**
This template integrates an IOT multi-button switch (meant for controlling a dimmable light) with Spotify playback functions, via MQTT messages. This isn't likely to work without some tinkering, but should be a good head start on receiving/routing IOT/MQTT messages and hooking up to a Spotify-like API.

**Requirements**
- An IOT device capable of generating events that can be delivered as MQTT messages through an MQTT broker, e.g. an Ikea Strybar remote
- An MQTT broker to which n8n can connect and consume messages, e.g. Zigbee2MQTT in HomeAssistant
- A Spotify developer-account (which provides access to API functions via OAuth2 authorization)
- A Spotify user-account (which provides access to Spotify streamed content, user settings, etc.)

**Setup**
- Create an MQTT Credential item in n8n and assign it to the MQTT Trigger node
- Modify the MQTT trigger node to match the topic for your IOT device messages
- Modify the switch/router nodes to map to the message text from your IOT button (e.g. arrow_left_click, brightness_up_click, etc.)
- Create a Spotify developer-account (or use the login for a user-account)
- Create an "App" in the developer-account to represent the n8n workflow
  - Chicken/Egg ALERT: The n8n Spotify Credentials dialog box will display the "OAuth Redirect URL" required to create the App in Spotify, but the n8n Credential item itself cannot be created until AFTER the App has been created.
- Create a Spotify Credentials item in n8n
  - Open the Settings on the Spotify App to find the required Client ID and Client Secret information. ALERT: Save this before proceeding to the Connect step.
- Connect the n8n Spotify Credential item to the Spotify user-account
  - ALERT: Expect n8n to open a separate OAuth2 window on authorization.spotify.com here, which may require a login to the Spotify user-account
- Open each of the HTTP and Spotify nodes, one by one, and re-assign them to your Spotify Credential (try not to miss any). (Then, probably, upvote this feature request: https://community.n8n.io/t/select-credentials-via-expression/5150)
- Modify the variable values in the Globals node to match your own environment:
  - target_spotify_playback_device_name - The name of a playback device available to the Spotify user-account
  - favorite_playlist_name - The name of a playlist to start when one of the button actions is indicated in the MQTT message. Used in the example "Custom Function 2" sequence.

**Notes**
- You're on your own for getting the multi-button remote switch talking to MQTT, figuring out what the exact MQTT topic name is, and mapping the message parts to the workflow (actions, etc.) - a sketch of one possible action-to-API mapping follows at the end of this entry.
- The next / previous actions are wired up to not transfer control to the target device. This alternative routing just illustrates a different behavior than the remaining actions/functions, which include activation of the target device when required.
- Some of the Spotify API interactions use the Spotify node in n8n, but many of the available Spotify API functions are limited or not implemented at all in the Spotify node. So, in other cases, a regular HTTP node is used with the Spotify OAuth2 API credential instead. By modifying one of the examples included in the template, it should be possible to call nearly anything the Spotify API has to offer.

**Spotify+n8n OAuth Mini-Tutorial**

*Definitions*
- The developer-account is the Spotify login for creating a spotify-app which will be associated with a client id and client secret.
- The user-account is the Spotify login that has permission to stream songs, set up playback devices, etc.
- ++A spotify-login allows access to a Spotify user-account, or a Spotify developer-account, OR BOTH++
- The spotify-app, which has a client id and client secret, is an object created in the developer-account.
- The app-implementation (in this case, an ++n8n workflow++) uses the spotify-app's credentials (client id / client secret) to call Spotify API endpoints on behalf of a user-account.

*Using One Spotify Login as Both User and Developer*
When an n8n Spotify-node or HTTP-node (i.e. an app-implementation) calls a Spotify API endpoint, the Credentials item may be using the client id and client secret from a spotify-app, which was created in a developer-account that is ++one and the same spotify-login as the user-account++. However, it helps to remind yourself that from the Spotify API server's perspective, the developer-account + spotify-app, and the user-account, are ++two independent entities++.

*n8n Spotify-OAuth2-API Credential Authorization Process*
The 2 layers/steps in the process of authorizing an n8n Spotify-OAuth2-API credential to make API calls are:
1. n8n must identify itself to Spotify as the app-implementation associated with the developer-account/spotify-app by sending the app's credentials (client id and client secret) to Spotify. The Client ID and Client Secret are supplied in the n8n Spotify OAuth2 Credentials UI/dialog-box.
2. Separately, n8n must obtain an authorization token from Spotify to represent the permissions granted by the user to execute actions (call API endpoints) on behalf of the user (i.e. access things that belong to the user-account). This authorization for the user-account access is obtained when the "Connect" or "Reconnect" button is clicked in the n8n Spotify Credentials UI/dialog-box (which pops up a separate authorization UI/browser-window managed by Spotify).

The authorization for a given spotify-app stays "registered" in the user-account until revoked. See: https://support.spotify.com/us/article/spotify-on-other-apps/ (direct link: https://www.spotify.com/account/apps/).

More than one user-account can be authorized for a given spotify-app. A particular n8n Spotify-OAuth2-API credential item appears to cache an authorization token for the user-account that was most recently authorized. Up to 25 users can be allowed access to a spotify-app in Developer-Mode, but any user-account other than the one associated with the developer-account must be added by email address at https://developer.spotify.com/dashboard/{{app-credential-id}}/users

ALERT: IF the browser running the n8n UI is ALSO logged into a Spotify account, and the spotify-app is already authorized for that Spotify account, the "reconnect" button in the Spotify-OAuth2-API credential dialog may automatically grab a token for that logged-in user-account, offering no opportunity to select a different user-account. This can be managed somewhat by using "incognito" browser windows for n8n, Spotify, or both.

**References**
- n8n Spotify Credentials Docs
- Spotify Authorization Docs
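As promised in the Notes above, here is a sketch of one possible mapping from MQTT button actions to Spotify Web API calls. The action names follow the Ikea Strybar examples mentioned earlier; the specific action-to-endpoint mapping and the bare `fetch` calls are illustrative assumptions — the template routes these through Switch, Spotify, and HTTP nodes with the OAuth2 credential instead.

```typescript
// Sketch of routing an MQTT button action to a Spotify Web API call.
// Endpoints follow the Spotify Web API player reference; the mapping is an assumption.

const SPOTIFY_API = "https://api.spotify.com/v1";

async function spotifyRequest(method: string, path: string, accessToken: string) {
  const res = await fetch(`${SPOTIFY_API}${path}`, {
    method,
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Spotify API error: ${res.status}`);
}

async function handleButtonAction(action: string, accessToken: string) {
  switch (action) {
    case "arrow_right_click": // skip to next track
      return spotifyRequest("POST", "/me/player/next", accessToken);
    case "arrow_left_click": // skip to previous track
      return spotifyRequest("POST", "/me/player/previous", accessToken);
    case "brightness_up_click": // assumed mapping, e.g. start the favorite playlist
      return spotifyRequest("PUT", "/me/player/play", accessToken);
    default:
      // Other button messages can map to volume, pause, or custom functions.
      return;
  }
}
```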
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Correctness", which in this scenario compares and classifies the agent's response against a set of ground truths.

The scoring approach is adapted from the open-source evaluations project RAGAS, and you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_correctness.py

**How it works**
- This evaluation works best where the agent's response is allowed to be more verbose and conversational.
- For our scoring, we classify the agent's response into 3 buckets: True Positive (in the answer and the ground truth), False Positive (in the answer but not the ground truth) and False Negative (not in the answer but in the ground truth).
- We also calculate an average similarity score of the agent's response against all ground truths.
- The classification score and the similarity score are then averaged to give the final score.
- A high score indicates the agent is accurate, whereas a low score could indicate the agent has incorrect training data or is not providing a comprehensive enough answer.

**Requirements**
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
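In formula terms, the classification buckets reduce to an F1-style score that is then combined with the similarity score. A small sketch of that arithmetic is below; the equal-weight averaging follows the description above (RAGAS itself also supports weighted combinations), and the example counts are illustrative inputs.

```typescript
// Sketch of the final Correctness score described above:
// an F1-style score from the TP/FP/FN classification, averaged with the
// mean similarity of the response against the ground truths.

function classificationScore(tp: number, fp: number, fn: number): number {
  // F1 = tp / (tp + 0.5 * (fp + fn)); 0 when nothing was classified at all
  const denom = tp + 0.5 * (fp + fn);
  return denom === 0 ? 0 : tp / denom;
}

function correctness(tp: number, fp: number, fn: number, similarities: number[]): number {
  const f1 = classificationScore(tp, fp, fn);
  const avgSimilarity =
    similarities.reduce((sum, s) => sum + s, 0) / Math.max(similarities.length, 1);
  return (f1 + avgSimilarity) / 2; // equal-weight average, per the description above
}

// Example: 3 true positives, 1 false positive, 1 false negative,
// with similarity scores against two ground truths.
console.log(correctness(3, 1, 1, [0.82, 0.74])); // ≈ (0.75 + 0.78) / 2 = 0.765
```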