Workflow Templates
Discover and use pre-built workflows to automate your tasks
2318 templates found
by Nskha
This n8n workflow automates sharing files from Google Drive. It includes OAuth2 authentication, batch processing, public link generation, and access status modification for efficient file handling. It suits anyone looking to streamline their Google Drive file sharing, and it handles bulk actions well: tested on a folder of 4.2K files, working like a charm.

How It Works
- Initialize Workflow: The process begins with a Manual Trigger, allowing the user to start the workflow at their convenience.
- Folder ID Specification: A 'Set Folder ID' node where the user enters the desired Google Drive folder ID.
- List Files from Google Drive: The 'Google Drive' node lists all files within the specified folder using OAuth2 authentication.
- Batch Processing: The 'Loop Over Items' node processes the files in batches for efficiency.
- Generate Public Links: The 'Generate Download Links' node creates a downloadable link for each file (see the sketch at the end of this template).
- Change File Access: The 'Change Status' node alters each file's access status to make it publicly accessible.
- Merge and Output: A 'Merge' node consolidates the data, preparing it for further actions or output.

Set Up Steps
- **Estimated Time**: The setup should take approximately 10-15 minutes.
- **Initial Setup**: You'll need to provide OAuth2 credentials for Google Drive and specify a folder ID.
- **Customization**: Adjust the batch size and file access permissions according to your needs.
- **Detailed Descriptions**: For specific configuration details, refer to the sticky notes within the workflow.

Example Item output

{
  "link": "https://drive.google.com/u/3/uc?id=1hojqPfXchNTY8YRTNkxSo-8txK9re-V4&export=download&confirm=t&authuser=0",
  "name": "firefox_rNjA0ybKu7.png",
  "kind": "drive#permission",
  "id": "anyoneWithLink",
  "type": "anyone",
  "role": "reader",
  "allowFileDiscovery": false
}

You can store the output data with any data store node you want, for example saving it to an Excel sheet, Airtable, etc.

Keywords: n8n workflow, Google Drive integration, file sharing automation, batch file processing, public link generation, OAuth2 authentication, workflow automation
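For reference, here is a minimal sketch of how the 'Generate Download Links' step could be reproduced in an n8n Code node. It assumes each incoming item from the Google Drive node exposes the file's id and name, and it builds the same public-download link shape shown in the example output (the u/3 and authuser parts of the example URL are account-specific and omitted here):

```javascript
// n8n Code node ("Run Once for All Items"), hypothetical sketch.
// Assumes each Google Drive item carries `id` and `name` fields.
return $input.all().map(item => {
  const { id, name } = item.json;
  return {
    json: {
      name,
      // Same direct-download link shape as the example output above
      link: `https://drive.google.com/uc?id=${id}&export=download&confirm=t`,
    },
  };
});
```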
by Nskha
Overview
This n8n workflow is designed to monitor USDT TRC20 transactions within a specified wallet. It uses TronScan's public blockchain data, requiring no API authentication, to periodically check and process transaction data. It is ideal for users who need an automated solution to track their TRC20 wallet transactions.

Features
- **Automated Tracking**: Executes every 15 minutes to capture new transactions.
- **Customizable Filters**: Tailors the tracking based on specific parameters like transaction time and wallet addresses.
- **Data Aggregation**: Compiles transaction data into a single, structured list.
- **Formatted Outputs**: Presents transaction data in an organized and comprehensible format.

Requirements
- n8n (self-hosted or cloud version) set up and operational.
- Basic understanding of n8n workflows and nodes.

Setup and Configuration
1. Import Workflow: Load the provided JSON workflow into your n8n instance.
2. Configure Edit Fields Node: Enter your TRC20 wallet address in the 'Your Wallet Address' field. Adjust 'Number of transactions to retrieve per request' if necessary (the default of 20 is recommended).
3. TronScan Data Access: The workflow accesses TronScan's public blockchain data, so no additional configuration is required for API access.
4. Schedule Trigger Node: Defaults to triggering every 15 minutes. Modify as per your requirements.
5. Test the Workflow: Execute the workflow manually to ensure everything is operating correctly.

How it Works
- Schedule Trigger: Initiates the workflow at predetermined intervals.
- Edit Fields: Sets up the wallet address and transaction retrieval count.
- TronScan Data Retrieval: Gathers transaction data from the TRC20 wallet using TronScan's public database.
- Split Out & Filter: Processes and filters the transaction data.
- Final Results: Organizes and formats the required transaction data for review.
- Aggregate: Consolidates all records (items) into one comprehensive list (item).

Customization
- Modify the filter conditions and fields to suit your tracking needs: for example, widen or narrow the time window, or switch between IN and OUT transactions. The default is 15m/IN (a Code-node sketch of this default filter appears at the end of this template).
- Adjust the schedule trigger frequency according to your preference (the default is 15 minutes).

Best Practices
- Regularly test the workflow to ensure consistent performance.
- Stay updated with any changes to the structure of TronScan's public data that might affect the workflow.

Contributing
Your feedback and contributions are greatly appreciated. Feel free to adapt, modify, and share enhancements with the n8n community.
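Below is a hedged Code-node sketch of that default 15m/IN filter, placed after the Split Out step so each item is a single transaction. The field names `block_ts` (millisecond timestamp) and `to_address` are assumptions about TronScan's payload; check the actual response and adjust:

```javascript
// n8n Code node, hypothetical sketch of the default 15m/IN filter.
// `block_ts` and `to_address` are assumed field names; verify them
// against the actual TronScan response before relying on this.
const WALLET = 'YOUR_TRC20_WALLET_ADDRESS';
const WINDOW_MS = 15 * 60 * 1000; // matches the 15-minute schedule
const cutoff = Date.now() - WINDOW_MS;

return $input.all().filter(item => {
  const tx = item.json;
  const isRecent = tx.block_ts >= cutoff;
  const isIncoming = tx.to_address === WALLET; // IN transactions only
  return isRecent && isIncoming;
});
```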
by Niklas Hatje
Use Case
This workflow is a slight variation of one we're using at n8n. In most companies, employees have a lot of great ideas; that was the same for us at n8n. We wanted to make it as easy as possible for everyone to add their ideas to a formatted database, somewhere everyone already spends their time, so adding a new idea takes no extra effort. Since we're using Slack, this seemed the perfect place. In this example, we're adding the ideas to Google Sheets instead of Notion, as we do internally.

What this workflow does
This workflow waits for a webhook call from Slack, fired when users use the /idea command on a bot that you create as part of this template. It then checks the command, adds the idea to Google Sheets, and notifies the user about the newly added idea (a sketch of the command check appears at the end of this template).

Creating your Slack bot
1. Visit https://api.slack.com/apps, click on New App, and choose a name and workspace.
2. Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
3. Add the chat:write scope.
4. Head over to Slash Commands and click on Create New Command.
5. Use /idea as the command.
6. Copy the test URL from the Webhook node into Request URL.
7. Add whatever feels best to the description and usage hint.
8. Go to Install App and click install.

Setup
1. Create a Google Sheets document with the columns Name and Creator.
2. Add your Google credentials.
3. Fill the Set me up node.
4. Create your Slack app (see the other sticky).
5. Click Test workflow and use the /idea command in Slack.
6. Activate the workflow and exchange the Request URL with the production URL from the webhook.

How to adjust it to your needs
- You can adjust the table in Google Sheets and, for example, add different types of ideas or the areas they impact.
- Rename the Slack command to whatever works best for you.

How to enhance this workflow
At n8n we use this workflow in combination with some others. For example, we have the following on top:
- An additional /bug Slack command that adds a new bug to Linear. Here we're using AI to classify the bugs and move them to the right team (Bug command workflow and AI Classifier workflow).
- Other types, like /pain, to be less solution-driven.
- A Votes column that allows everyone to vote on ideas/pain points in the list, to make it easier for everyone to give input.
- A workflow that runs once a week and highlights the most popular new ideas and the most active voters.
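As a reference for the command check mentioned above, here is a minimal Code-node sketch. Slack slash commands POST form-encoded fields such as `command`, `text`, and `user_name`, which the n8n Webhook node surfaces under `body`; the output keys mirror the Name and Creator columns from the setup:

```javascript
// n8n Code node, minimal sketch of the /idea command check.
// Slack sends `command`, `text`, and `user_name` in the request body.
const body = $input.first().json.body;

if (body.command !== '/idea') {
  throw new Error(`Unexpected command: ${body.command}`);
}

return [{
  json: {
    Name: body.text,          // the idea text, for the "Name" column
    Creator: body.user_name,  // the submitter, for the "Creator" column
  },
}];
```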
by Browser Use
A sample demo showing how to integrate the Browser Use Cloud API with n8n workflows. This template demonstrates AI-powered web research automation by collecting competitor intelligence and delivering formatted results to Slack.

How It Works
1. A form trigger accepts a competitor name as input.
2. The Browser Use Cloud API performs automated web research.
3. A webhook processes the completion status and retrieves structured data.
4. JavaScript code formats the results into a readable Slack message (sketched below).
5. An HTTP request sends the final report to Slack.

Integration Pattern
This workflow showcases key cloud API integration techniques:
- REST API authentication with bearer tokens
- Webhook-based status monitoring for long-running tasks
- JSON data parsing and transformation
- Conditional logic for processing different response states

Setup Required
- Browser Use API key (sign up at cloud.browser-use.com)
- Slack webhook URL

A perfect demo for learning Browser Use Cloud API integrations and building automated research workflows.
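As an illustration of step 4, here is a hedged Code-node sketch of the Slack formatting. The `status` and `output` field names are assumptions about the Browser Use completion payload, so adjust them to the real response shape; Slack incoming webhooks accept a simple `{ text }` JSON body:

```javascript
// n8n Code node, hypothetical Slack formatter.
// `status` and `output` are assumed fields on the Browser Use
// completion payload; verify against the actual webhook data.
const run = $input.first().json;

const text = run.status === 'finished'
  ? `*Competitor research complete*\n${run.output}`
  : `:warning: Research task ended with status: ${run.status}`;

return [{ json: { text } }]; // body for the Slack webhook HTTP request
```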
by Monospace Design
What is this workflow doing?
This simple workflow pulls the latest Euro foreign exchange reference rates from the European Central Bank and responds with the expected values to an incoming HTTP GET request via a Webhook trigger node.

Setup
- **No authentication** needed; the workflow is ready to use.
- **Test** the workflow template by hitting the Test workflow button and calling the URL in the Webhook node.
- Optional: choose your own webhook listening path in the Webhook trigger node.

Usage
There are two possible usage scenarios (a Code-node sketch of the branching appears below):
- get all Euro exchange rates as an array of objects
- get only a specific currency exchange rate as a single object

Single exchange rate
Using the HTTP query ?foreign=USD (where USD is one of the available currency symbols) returns only that specifically requested rate. Response example:

{"currency":"USD","rate":"1.0852"}

All available rates
If no query is provided, all available rates are returned. Response example:

[{"currency":"USD","rate":"1.0852"},{"currency":"JPY","rate":"163.38"},{"currency":"BGN","rate":"1.9558"},{"currency":"CZK","rate":"25.367"},{"currency":"DKK","rate":"7.4542"},{"currency":"GBP","rate":"0.85495"},{"currency":"HUF","rate":"389.53"},{"currency":"PLN","rate":"4.3053"},{"currency":"RON","rate":"4.9722"},{"currency":"SEK","rate":"11.1675"},{"currency":"CHF","rate":"0.9546"},{"currency":"ISK","rate":"149.30"},{"currency":"NOK","rate":"11.4285"},{"currency":"TRY","rate":"33.7742"},{"currency":"AUD","rate":"1.6560"},{"currency":"BRL","rate":"5.4111"},{"currency":"CAD","rate":"1.4674"},{"currency":"CNY","rate":"7.8100"},{"currency":"HKD","rate":"8.4898"},{"currency":"IDR","rate":"16962.54"},{"currency":"ILS","rate":"3.9603"},{"currency":"INR","rate":"89.9375"},{"currency":"KRW","rate":"1444.46"},{"currency":"MXN","rate":"18.5473"},{"currency":"MYR","rate":"5.1840"},{"currency":"NZD","rate":"1.7560"},{"currency":"PHP","rate":"60.874"},{"currency":"SGD","rate":"1.4582"},{"currency":"THB","rate":"38.915"},{"currency":"ZAR","rate":"20.9499"}]

Further info
Read more about Euro foreign exchange reference rates here.
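For illustration, the branching between the two scenarios could look like this in an n8n Code node, assuming the rates arrive as one item per currency with `currency` and `rate` fields (as in the example responses) and the trigger node is named "Webhook":

```javascript
// n8n Code node, sketch of the ?foreign=XXX branching described above.
const wanted = $('Webhook').first().json.query?.foreign;
const rates = $input.all().map(i => i.json);

if (wanted) {
  // Single exchange rate: return one object for the requested currency
  const match = rates.find(r => r.currency === wanted.toUpperCase());
  return [{ json: match ?? { error: `Unknown currency: ${wanted}` } }];
}

// All available rates: return the full array, one item per currency
return rates.map(r => ({ json: r }));
```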
by ist00dent
This n8n template lets you instantly serve batches of inspirational quotes via a webhook using the free ZenQuotes API. It's perfect for developers, content creators, community managers, or educators who want to add dynamic, uplifting content to websites, chatbots, or internal tools, without writing custom backend code.

🔧 How it works
1. A Webhook node listens for incoming HTTP requests on your chosen path.
2. Get Random Quote from ZenQuotes sends an HTTP Request to https://zenquotes.io/api/random?count=5 and retrieves five random quotes.
3. Format data uses a Set node to combine each quote (q) and author (a) into a single string: "“quote” – author" (a Code-node equivalent is sketched below).
4. Send response returns a JSON array of objects { quote, author } back to the caller.

👤 Who is it for?
This workflow is ideal for:
- Developers building motivational Slack or Discord bots.
- Website owners adding on-demand quote widgets.
- Educators or trainers sharing daily inspiration via webhooks.
- Anyone learning webhook handling and API integration in n8n.

🗂️ Response Structure
Your webhook response will be a JSON array, for example:

[
  { "quote": "Life is what happens when you're busy making other plans.", "author": "John Lennon" },
  { "quote": "Be yourself; everyone else is already taken.", "author": "Oscar Wilde" }
]

⚙️ Setup Instructions
1. Import the workflow JSON into your n8n instance.
2. In the Webhook node, set your desired path (e.g., /inspire).
3. (Optional) Change the count parameter in the HTTP Request node to fetch more or fewer quotes.
4. Activate the workflow.
5. Test by sending an HTTP GET or POST to https://<your-n8n-domain>/webhook/<path>.
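If you prefer a Code node over the Set node for the formatting step, a minimal equivalent could look like this; ZenQuotes returns items with `q` (quote) and `a` (author), and the output matches the response structure shown above:

```javascript
// n8n Code node, equivalent of the "Format data" step.
// Maps ZenQuotes fields `q` and `a` to { quote, author } objects.
return $input.all().map(item => ({
  json: {
    quote: item.json.q,
    author: item.json.a,
  },
}));
```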
by Mihai Farcas
This n8n workflow automates saving web articles or links shared in a chat conversation directly into a Notion database, using Google's Gemini AI and Browserless for web scraping.

Who is this AI automation template for?
It's useful for anyone wanting to reduce manual copy-pasting and organize web findings seamlessly within Notion. A smarter web clipping tool!

What this AI automation workflow does
1. Starts when a message is received.
2. Uses a Google Gemini AI Agent node to understand the context and manage the subsequent steps. It identifies whether a message contains a request to save an article/link.
3. If a URL is detected, it uses a tool configured with the Browserless API (via the HTTP Request node) to scrape the content of the web page.
4. Creates a new page in a specified Notion database, populating it with a summary of the scraped content in a specific format, never leaving out any important details. It also saves the original URL, smart tags, publication date, and other metadata extracted by the AI.
5. Posts a confirmation message (e.g., to a Discord channel) indicating whether the article was saved successfully or an error occurred.

Setup
1. Import Workflow: Import this template into your n8n instance.
2. Configure Credentials & Notion Database:
   - Notion Database: Create or designate a Notion database (like the example "Knowledge Database") where articles will be saved. Ensure this database has the following properties (fields): Name (Type: Text) to store the article title; URL (Type: URL) to store the original article link; Description (Type: Text) for the AI-generated summary; Tags (Type: Multi-select), optional, for categorization; Publication Date (Type: Date), optional, to store the date the article was published. Ensure the n8n integration has access to this specific database. If you require a different format for the Notion database, note that you will have to update the Notion tool configuration in this n8n workflow accordingly.
   - Notion Credential: Obtain your Notion API key and add it as a Notion credential in n8n. Select this credential in the save_to_notion tool node.
3. Configure save_to_notion Tool: In the save_to_notion tool node within the workflow, set the 'Database ID' field to the ID of the Notion database you prepared above. Map the workflow data (URL, AI summary, etc.) to the corresponding database properties (URL, Description, etc.). In the blocks section of the Notion tool, you can define a custom format for the research page, allowing the AI to fill in the exact details you want extracted from any web page!
4. Google Gemini AI: Obtain your API key from Google AI Studio or Google Cloud Console (if using Vertex AI) and add it as a credential. Select this credential in the "Tools Agent" node.
5. Discord (or other notification service): If using Discord notifications, create a Webhook URL (instructions) or set up a Bot Token. Add the credential in n8n and select it in the discord_notification tool node. Configure the target Channel ID.
6. Browserless/HTTP Request (a standalone test sketch follows these setup steps):
   - Cloud: Obtain your API key from Browserless and configure the website_scraper HTTP Request tool node with the correct API endpoint and authentication header.
   - Self-Hosted: Ensure your Browserless Docker container is running and accessible by n8n. Configure the website_scraper HTTP Request tool node with your self-hosted Browserless instance URL.
7. Activate Workflow: Save, test, and activate the workflow.
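Before wiring the website_scraper tool into the workflow, it can help to verify your Browserless access outside n8n. A minimal Node.js sketch, assuming the cloud /content endpoint with a token query parameter (check your Browserless account docs for the exact endpoint and auth style):

```javascript
// Standalone Node.js (18+) sketch for testing a Browserless scrape.
// Endpoint and auth style are assumptions; verify against your plan.
const BROWSERLESS_TOKEN = 'YOUR_API_KEY'; // placeholder

async function scrape(url) {
  const res = await fetch(
    `https://chrome.browserless.io/content?token=${BROWSERLESS_TOKEN}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url }),
    },
  );
  if (!res.ok) throw new Error(`Browserless error: ${res.status}`);
  return res.text(); // rendered HTML of the page
}

scrape('https://example.com').then(html => console.log(html.length));
```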
How to customize this workflow to your needs
- **Change AI Model**: Experiment with different AI models supported by n8n (like OpenAI GPT models or Anthropic Claude) in the Agent node if Gemini 2.5 Pro doesn't fit your needs or budget, keeping in mind potential differences in context window size and processing capabilities for large content.
- **Modify Notion Saving**: Adjust the save_to_notion tool node to map different data fields (e.g., change the summary style by modifying the AI prompt, add specific tags, or alter the page content structure) to your Notion database properties.
- **Adjust Scraping**: Modify the prompt/instructions for the website_scraper tool, or change the parameters sent to the Browserless API if you need different data extracted from the web pages. You could also swap Browserless for another scraping service/API accessible via the HTTP Request node.
by Hueston
Who is this for?
- Content strategists analyzing web page semantic content
- SEO professionals conducting entity-based analysis
- Data analysts extracting structured data from web pages
- Marketers researching competitor content strategies
- Researchers organizing and categorizing web content
- Anyone needing to automatically extract entities from web pages

What problem is this workflow solving?
Manually identifying and categorizing entities (people, organizations, locations, etc.) on web pages is time-consuming and error-prone. This workflow solves this challenge by:
- Automating the extraction of named entities from any web page
- Leveraging Google's powerful Natural Language API for accurate entity recognition
- Processing web pages through a simple webhook interface
- Providing structured entity data that can be used for analysis or further processing
- Eliminating hours of manual content analysis and categorization

What this workflow does
This workflow creates an automated pipeline between a webhook and Google's Natural Language API to:
1. Receive a URL through a webhook endpoint
2. Fetch the HTML content from the specified URL
3. Clean and prepare the HTML for processing
4. Submit the HTML to Google's Natural Language API for entity analysis
5. Return the structured entity data through the webhook response
6. Extract entities including people, organizations, locations, and more, with their salience scores

Setup
Prerequisites:
- An n8n instance (cloud or self-hosted)
- Google Cloud Platform account with the Natural Language API enabled
- Google API key with access to the Natural Language API

Google Cloud Setup:
1. Create a project in Google Cloud Platform
2. Enable the Natural Language API for your project
3. Create an API key with access to the Natural Language API
4. Copy your API key for use in the workflow

n8n Setup:
1. Import the workflow JSON into your n8n instance
2. Replace "YOUR-GOOGLE-API-KEY" in the "Google Entities" node with your actual API key
3. Activate the workflow to enable the webhook endpoint
4. Copy the webhook URL from the "Webhook" node for later use

Testing:
1. Use a tool like Postman or cURL to send a POST request to your webhook URL
2. Include a JSON body with the URL you want to analyze: {"url": "https://example.com"}
3. Verify that you receive a response containing the entity analysis data

How to customize this workflow to your needs
Analyzing Specific Entity Types:
- Modify the "Google Entities" node parameters to include entityType filters
- Add a "Function" node after "Google Entities" to filter specific entity types
- Create conditions to extract only entities of interest (people, organizations, etc.)

Processing Multiple URLs in Batch:
- Replace the webhook with a different trigger (HTTP Request, Google Sheets, etc.)
- Add a "Split In Batches" node to process multiple URLs
- Use a "Merge" node to combine results before sending the response

Enhancing Entity Data:
- Add additional API calls to enrich extracted entities with more information
- Implement sentiment analysis alongside entity extraction
- Create a data transformation node to format entities by type or relevance

Additional Notes
- This workflow respects Google's API rate limits by processing one URL at a time
- The Natural Language API may not identify all entities on a page, particularly for highly technical content
- HTML content is trimmed to 100,000 characters if longer, to avoid API limitations
- Consider legal and privacy implications when analyzing and storing entity data from web pages
- You may want to adjust the HTML cleaning process for specific website structures

❤️ Hueston SEO Team
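For reference, the "Google Entities" call from this template can be reproduced outside n8n with a short Node.js sketch. It uses the Natural Language API's v1 documents:analyzeEntities endpoint with an API key and mirrors the 100,000-character trimming mentioned in the notes (treat the exact request shape as an assumption and confirm it against Google's current docs):

```javascript
// Standalone Node.js (18+) sketch of the entity analysis request.
const API_KEY = 'YOUR-GOOGLE-API-KEY'; // placeholder

async function analyzeEntities(html) {
  const res = await fetch(
    `https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        // Trim to 100,000 characters, as the workflow notes describe
        document: { type: 'HTML', content: html.slice(0, 100000) },
        encodingType: 'UTF8',
      }),
    },
  );
  const data = await res.json();
  return data.entities; // each entity has a name, type, and salience
}

analyzeEntities('<html><body>Google was founded in California.</body></html>')
  .then(entities => console.log(entities));
```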
by Giacomo Lanzi
Extract the title tag and meta description from URLs for SEO analysis.

How it works
The workflow takes records from Airtable, reads the URL in each record, and extracts the title tag (<title>) and meta description (<meta name="description" content="Some content">) from the related webpage. If the title tag and/or meta description isn't available on the webpage, the result will be empty (a Code-node sketch of the extraction appears below).

Setup
Set up a base in Airtable with a table with the following structure: url (field type URL), title tag (field type text), meta desc (field type text).
Minimum suggested table structure: url (https://example.com), title tag (Title example), meta desc (This is the meta description of the example page).
Connect Airtable to both Airtable nodes in the template and, with the following formula, get all the records that are missing title tag and meta desc: AND(url != "", {title tag} = "", {meta desc} = "")
Insert the URLs to be analyzed in the url field of the table and let the workflow do the rest.

Extra
You can also calculate the length of the title tag and meta desc using a formula field inside Airtable: LEN({title tag}) or LEN({meta desc}).
You can automate the process by calling a webhook from Airtable. For this, you need an Airtable paid plan.
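A minimal Code-node sketch of the extraction step is shown below. It assumes the page HTML arrives in a `data` field from an HTTP Request node and uses simple regexes; real-world HTML (e.g., a content attribute placed before the name attribute) may need a proper parser:

```javascript
// n8n Code node, simple regex sketch of the title/meta extraction.
// Assumes the HTML is in `data`; returns empty strings when missing.
const html = $input.first().json.data ?? '';

const title =
  html.match(/<title[^>]*>([\s\S]*?)<\/title>/i)?.[1]?.trim() ?? '';
const metaDesc =
  html.match(
    /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i,
  )?.[1] ?? '';

return [{ json: { 'title tag': title, 'meta desc': metaDesc } }];
```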
by JaredCo
Real-time Weather Forecasts with MCP Tools

This n8n workflow demonstrates how to integrate real-time weather intelligence into any automation using the Model Context Protocol (MCP). Get current conditions and 5-day forecasts with natural language queries like "What's the weather like in Miami?" or "Will it rain next Tuesday in Seattle?", all powered by live weather data and AI.

Good to know
- No API keys required: uses a hosted MCP weather server with built-in WorldWeatherOnline integration
- Provides current conditions and detailed 5-day forecasts
- Natural language queries work for any location worldwide
- Powered by WorldWeatherOnline, the world's most accurate weather system
- Fully preconfigured and ready to run out of the box
- Enterprise-ready with error handling and rate limiting

How it works
- **Natural Language Input**: Receives weather queries via webhook, chat, email, or voice.
- **AI Agent Processing**: The n8n Agent node interprets requests and determines the location extracted from natural language, the weather data type needed (current or 5-day forecast), and response formatting preferences.
- **MCP Weather Tool**: A live hosted server provides real-time current conditions (temperature, humidity, wind, conditions), detailed 5-day forecasts with daily highs/lows, and weather descriptions and condition codes, powered by WorldWeatherOnline's premium data.
- **Intelligent Responses**: The AI formats weather data into conversational natural language responses, structured data for downstream automation, and action-triggering data for workflows.

How to use
1. Import the workflow into n8n from the template
2. Add your preferred AI model API key to the Agent node
3. Customize the system prompt for your specific use case
4. Connect to your preferred input/output channels
5. Run and start querying weather with natural language

Use Cases
- **Smart Home Automation**: "Turn on sprinklers if no rain forecast for 3 days"
- **Travel Planning**: "Check weather for my Paris trip next week"
- **Event Management**: "Will outdoor wedding conditions be good Saturday?"
- **Agriculture/Farming**: "Check 5-day forecast for planting schedule"
- **Logistics**: "Delay shipping if severe weather forecast in delivery zone"
- **Personal Assistant**: "Should I wear a jacket today in Chicago?"
- **Sports/Recreation**: "Surf conditions and wind forecast for weekend"
- **Construction**: "Safe working conditions for outdoor project this week"

Requirements
- n8n instance (cloud or self-hosted)
- AI model provider account (OpenAI, Anthropic, Google, etc.)
- Internet connection for MCP weather server access
- Optional: webhook endpoints for external integrations

Customizing this workflow
- **Location Intelligence**: Add geocoding for address-to-coordinates conversion
- **Data Storage**: Save weather history to databases for trend analysis
- **Dashboard Integration**: Connect to Grafana, Tableau, or custom visualizations
- **Voice Integration**: Add speech-to-text for voice weather queries
- **Scheduling**: Set up automated daily/weekly weather briefings
- **Conditional Logic**: Trigger different actions based on weather conditions (see the sketch at the end of this template)

Sample Input/Output

Natural language queries:
- "What's the weather like in Miami?"
- "Will it rain next Tuesday in Seattle?"
- "5-day forecast for London"
- "Temperature in Tokyo tomorrow"
- "Weather conditions for outdoor event Saturday"

Rich responses:

{
  "location": "Miami, FL",
  "current": {
    "temperature": "78°F",
    "condition": "Partly Cloudy",
    "humidity": "65%",
    "wind": "10 mph SE"
  },
  "forecast": {
    "today": "High 82°F, Low 71°F, 20% rain",
    "tomorrow": "High 85°F, Low 73°F, Sunny"
  },
  "ai_summary": "Perfect beach weather in Miami today! Partly cloudy with comfortable temperatures and light winds."
}

Why This Workflow is Unique
- **Zero Setup Weather Data**: No API key management; the MCP server handles everything
- **World-Class Accuracy**: Powered by WorldWeatherOnline's premium weather data
- **AI-Powered Intelligence**: Natural language understanding of complex weather queries
- **Enterprise Ready**: Built-in error handling, rate limiting, and reliability
- **Global Coverage**: Worldwide weather data with location intelligence
- **Action-Oriented**: Designed for automation decisions, not just information display

Transform your automations with intelligent weather awareness powered by the world's most accurate weather system!

🧪 Setup Steps
✅ The Agent node is already configured: the system prompt is included and the tool endpoint is pre-set. All you need to do is:
1. Add your AI model API key to the existing Agent credential
2. Hit run and you're done ✅

🔗 Full project link: GitHub: weathertrax-mcp-agent-demo
by Mohan Gopal
🧩 Workflow: Process Tour PDFs from Google Drive to Pinecone Vector DB with OpenAI Embeddings

Overview
This workflow automates extracting tour information from PDF files stored in a Google Drive folder, processes and vectorizes the extracted data, and stores it in a Pinecone vector database for efficient querying. This is especially useful for building AI-powered search or recommendation systems for travel packages.

Setup: Prerequisites
- A folder in Google Drive with PDF tour package brochures
- Pinecone account + API key
- OpenAI API key
- n8n cloud or self-hosted instance

Workflow Setup Steps
Trigger
- Manual Trigger (When clicking 'Test workflow'): used for manual testing and execution of the workflow.

Google Drive Integration
- Step 1: Store Tour Packages in PDF Format. Upload your curated tour packages containing the tours, activities, and sightseeing in PDF format into a designated Google Drive folder.
- Step 2: Search Folder. Node: PDF Tour Package Folder (Google Drive). Searches the designated folder for files (filter by MIME type = application/pdf if needed).
- Step 3: Download PDFs. Node: Download Package Files (Google Drive). Downloads each matching PDF file found in the previous step.

Process Each PDF File
- Step 4: Loop Through Files. Node: Loop Over each PDF file. Iterates through each downloaded PDF file to extract, clean, split, and embed.

Data Preparation & Embedding
- Step 5: Data Loader. Node: Data Loader. Reads each PDF's content using a compatible loader and passes clean raw text to the next node. Often integrated with document loaders like pdf-loader, Unstructured, or pdfplumber.
- Step 6: Recursive Text Splitter. Node: Recursive Character Text Splitter. Splits large chunks of text into manageable segments using overlapping-window logic (e.g., 500 tokens with 50-token overlap). This preserves context for long documents during embedding.
- Step 7: Generate Embeddings. Node: Embeddings OpenAI. Uses the text-embedding-3-small model to vectorize the split chunks and outputs vector representations for each content chunk.

Store in Pinecone
- Step 8: Pinecone Vector Store. Node: Pinecone Vector Store. Stores each embedding along with its metadata (source PDF name, chunk ID, etc.). This becomes the basis for fast semantic search via RAG workflows or agents.

🛠️ Tools & Nodes Used
- Google Drive (Search & Download): searches for all PDF files in a specified Google Drive folder and downloads each file for processing.
- SplitInBatches (Loop Over Items): loops through each file found in the folder, ensuring each is processed individually.
- Default Data Loader (LangChain): reads and extracts text from the PDF files.
- Recursive Character Text Splitter (LangChain): splits the extracted text into manageable chunks for embedding.
- OpenAI Embeddings (LangChain): converts each text chunk into a vector using OpenAI's embedding model.
- Pinecone Vector Store (LangChain): stores the resulting vectors in a Pinecone index for fast similarity search and querying.

🔗 Workflow Steps Explained
1. Trigger: The workflow starts manually for testing, or can be scheduled.
2. Google Drive Search: Finds all PDF files in the specified folder.
3. Loop Over Files: Each file is processed one at a time using the SplitInBatches node.
4. Download File: Downloads the current PDF file from Google Drive.
5. Extract Text: The Default Data Loader node reads the PDF and extracts its text content.
6. **Text Splitting**: The Recursive Character Text Splitter breaks the text into chunks (e.g., 1000 characters with 50 overlap) to optimize embedding quality (a simplified sketch appears at the end of this template).
7. **Vectorization**: Each chunk is sent to the OpenAI Embeddings node to generate vector representations.
8. Store in Pinecone: The vectors are inserted into a Pinecone index, making them available for semantic search and recommendations.

🚀 What Can Be Improved in the Next Version?
- **Error Handling**: Add error-handling nodes to manage failed downloads or extraction issues gracefully.
- **File Type Filtering**: Ensure only PDF files are processed by adding a filter node.
- **Metadata Storage**: Store additional metadata (e.g., file name, tour ID) alongside vectors in Pinecone for richer search results.
- **Parallel Processing**: Optimize for large folders by processing multiple files in parallel (with care for API rate limits).
- **Automated Triggers**: Replace the manual trigger with a time-based or webhook trigger for full automation.
- **Data Validation**: Add checks to ensure extracted text contains valid tour data before vectorization.
- **User Feedback**: Integrate notifications (e.g., email or Slack) to report when processing is complete or issues arise.

💡 Summary
This workflow demonstrates how n8n can orchestrate a powerful AI data pipeline using Google Drive, LangChain, OpenAI, and Pinecone. It's a great foundation for building intelligent search or recommendation features for travel and tour data. Feel free to ask for more details or share your improvements!
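To make the chunking step concrete, here is an illustrative Code-node sketch of the overlapping-window idea behind the Recursive Character Text Splitter, reduced to a plain sliding window of 1000 characters with a 50-character overlap (the real splitter also respects separators like paragraphs and sentences):

```javascript
// n8n Code node, illustrative sliding-window chunker.
// Simplified stand-in for the Recursive Character Text Splitter.
const text = $input.first().json.text ?? '';
const CHUNK_SIZE = 1000;
const OVERLAP = 50;

const chunks = [];
for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

return chunks.map((chunk, i) => ({
  json: { chunkId: i, chunk }, // metadata such as source PDF name can be added here
}));
```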
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive voice-powered restaurant assistant that handles table reservations, food orders, and restaurant information requests through natural language processing. The system uses VAPI for voice interaction and PostgreSQL for data management, making it ideal for restaurants looking to automate customer service with voice AI technology.

Good to know
- Voice processing requires an active VAPI subscription with per-minute billing
- Database operations are handled in real time with immediate confirmations
- The system can handle multiple simultaneous voice requests
- All customer data is stored securely in PostgreSQL with proper indexing

How it works
Table Booking & Order Handling Workflow
1. Voice requests are captured through VAPI triggers when customers make booking or ordering requests
2. The system processes natural language commands and extracts relevant details (party size, time, food items)
3. Customer data is immediately saved to the bookings and orders tables in PostgreSQL
4. Voice confirmations are sent back through VAPI with booking details and estimated wait times
5. All transactions are logged with timestamps for restaurant management tracking

Restaurant Info Provider Workflow
1. Info requests trigger when customers ask about hours, menu, location, or services
2. Restaurant details are retrieved from the restaurant_info table containing current information
3. Wait nodes ensure proper data loading before voice response generation
4. Structured restaurant information is delivered via VAPI in a natural, conversational format

Database Schema
Bookings table
- booking_id (PRIMARY KEY): unique identifier for each reservation
- customer_name: customer's full name
- phone_number: contact number for confirmation
- party_size: number of guests
- booking_date: requested reservation date
- booking_time: requested time slot
- special_requests: dietary restrictions or special occasions
- status: booking status (confirmed, pending, cancelled)
- created_at: timestamp of booking creation

Orders table
- order_id (PRIMARY KEY): unique order identifier
- customer_name: customer's name
- phone_number: contact for order updates
- order_items: JSON array of food items and quantities
- total_amount: calculated order total
- order_type: delivery, pickup, or dine-in
- special_instructions: cooking preferences or allergies
- status: order status (received, preparing, ready, delivered)
- created_at: order timestamp

Restaurant_Info table
- info_id (PRIMARY KEY): information entry identifier
- category: type of info (hours, menu, location, contact)
- title: information title
- description: detailed information content
- is_active: whether the info is currently valid
- updated_at: last modification timestamp

How to use
1. Import the workflow into your n8n instance and configure VAPI credentials (the manual trigger can be replaced with webhook triggers for integration with existing restaurant systems)
2. Set up the PostgreSQL database with the required tables using the schema provided above
3. Configure restaurant information in the restaurant_info table
4. Test voice commands such as "Book a table for 4 people at 7 PM" or "What are your opening hours?"
5. Customize voice responses in the VAPI nodes to match your restaurant's tone and branding

The system can handle multiple concurrent voice requests and scales with your restaurant's needs; a sketch of shaping a booking row appears after the customization notes below.

Requirements
- VAPI account for voice processing and natural language understanding
- PostgreSQL database for storing booking, order, and restaurant information
- n8n instance with database and VAPI integrations enabled

Customising this workflow
- Voice AI automation can be adapted for various restaurant types, from quick service to fine dining establishments
- Try popular use cases such as multi-location booking management, dietary restriction handling, or integration with existing POS systems
- The workflow can be extended to include payment processing, SMS notifications, and third-party delivery platform integration
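As referenced above, here is a hypothetical Code-node sketch that shapes voice-extracted details into a row matching the bookings table, ahead of a PostgreSQL insert node. The input field names (customerName, partySize, etc.) are assumptions about what the VAPI extraction produces; the output keys mirror the schema:

```javascript
// n8n Code node, hypothetical mapping of VAPI-extracted details
// onto the bookings schema above. Input field names are assumed.
const req = $input.first().json;

return [{
  json: {
    customer_name: req.customerName,
    phone_number: req.phoneNumber,
    party_size: Number(req.partySize),
    booking_date: req.bookingDate,   // e.g. "2025-06-01"
    booking_time: req.bookingTime,   // e.g. "19:00"
    special_requests: req.specialRequests ?? null,
    status: 'pending',               // updated once confirmed
  },
}];
```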