by Davide
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'test_chatbot_elevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies.
by Ventsislav Minev
Google Drive Duplicate File Manager 🧹📁

Purpose: Automate finding and managing duplicate files in your Google Drive.

Who's it for?
- Individuals and teams aiming to streamline their Google Drive.
- Anyone tired of manual duplicate file cleanup.

What it solves:
- Saves storage space 💾.
- Reduces file confusion 😕➡️🙂.
- Automates tedious cleanup tasks 🤖.

How it works:
- Trigger: Monitors a Google Drive folder for new files.
- Configuration: Sets rules for keeping and handling duplicates.
- Find Duplicates: Identifies duplicate files based on their content (MD5 checksum); see the sketch below.
- Action: Either moves duplicates to trash or renames them.

Setup guide:
- Google Drive Trigger ⏰: Set the trigger to watch a specific folder or your entire drive (use caution with the root folder! ⚠️). Configure the polling interval (default: every 15 minutes).
- Config Node ⚙️:
  - keep: Choose whether to keep the "first" or "last" uploaded file (default: "last").
  - action: Select "trash" to delete duplicates or "flag" to rename them with a "DUPLICATE-" prefix (default: "flag").
  - owner & folder: Taken from the trigger. Only change if needed.

Key considerations:
- **Google Drive API limits:** Be mindful of API usage.
- **Folder scope:** The workflow handles one folder depth by default. **WARNING:** If configured to work with the root folder, all files in all sub-directories are processed, so **use this option with caution**, since the workflow might trash or rename important files.
- **Google Apps files:** Google Docs are ignored since they are not actual binary files and their content can't be compared.

Enjoy your clean Google Drive! ✨
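For reference, the duplicate-detection step can be pictured as the following Code-node-style sketch. It is illustrative only and assumes each incoming item carries Google Drive metadata with `id`, `name`, `createdTime`, and `md5Checksum` fields:

```js
// Hypothetical sketch of the duplicate-detection step as an n8n Code node.
// Google Docs have no md5Checksum and are skipped, matching the workflow's behaviour.
const keep = 'last'; // or 'first' -- mirrors the Config node's "keep" setting

const byChecksum = {};
for (const item of $input.all()) {
  const file = item.json;
  if (!file.md5Checksum) continue; // ignore Google Apps files
  (byChecksum[file.md5Checksum] ??= []).push(file);
}

const duplicates = [];
for (const files of Object.values(byChecksum)) {
  if (files.length < 2) continue;
  files.sort((a, b) => new Date(a.createdTime) - new Date(b.createdTime));
  // Everything except the kept copy is a duplicate to trash or flag.
  const kept = keep === 'last' ? files.pop() : files.shift();
  duplicates.push(...files.map(f => ({ json: { ...f, keptFileId: kept.id } })));
}

return duplicates;
```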
by Hubschrauber
Fetches workflow definitions from within n8n, selecting only the ones that have one or more (configurable) assigned tags, and then:

- Derives a suitable backup filename by reducing the workflow name to a string of alphanumeric characters with no spaces (see the sketch below). Note: This isn't bulletproof, but it works as long as workflow names aren't too crazy.
- Determines which workflows need to be backed up based on whether each one:
  - has been modified (note: even repositioning a node counts), or
  - is new (note: renaming counts as this).
- Commits JSON copies of each workflow, as necessary, to a GitLab repository with a generated, date-stamped commit message.

Setup

Credentials
- Create a GitLab credentials item and assign it to all GitLab nodes.
- Create an n8n credentials item and assign it to the n8n node. Note: This was tested with http://localhost:5678/api/v1 but should work with any reachable n8n instance and API key.

Modify these values in the "Globals" node:
- gitlab_owner - {{ your GitLab account }}
- gitlab_project - {{ your GitLab project name }}
- gitlab_workflow_path - {{ subdirectory in the project where backup files should be saved/committed }}
- tags_to_match_for_backup - {{ tag(s) to match for backup selection }}

*ALERT: According to the n8n node's Filters -> tags field annotations and the API documentation, this supports a CSV list of multiple tags (e.g. tag1,tag2), but the API behavior requires workflows to have all of the listed tags, not any of them.* See: https://github.com/n8n-io/n8n/issues/10348

TL;DR - Don't expect a multiple-tag list to be more inclusive. Possible workaround: To match more than one tag value, duplicate the n8n node into multiple single-tag matches, or split and iterate multiple values, and merge the results.

Possible enhancements
- Make the branch ("Reference") for all the GitLab nodes configurable. It is fixed as "main" on all of them in the template.
- Add an n8n node to generate an audit and store the output in GitLab along with the backups.
- Extend the workflow at the end to create a GitLab release/tag whenever any backup files are actually updated or created.
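The filename derivation amounts to stripping everything that is not a letter or digit. A minimal sketch of that step, assuming each item holds a workflow object with a `name` field as returned by the n8n node:

```js
// Hypothetical sketch of the filename-derivation step (n8n Code node).
// Reduces a workflow name to alphanumeric characters with no spaces and
// appends ".json" -- not bulletproof, as the description notes.
return $input.all().map(item => {
  const name = item.json.name ?? 'unnamed';
  // Keep letters and digits, drop everything else (spaces, punctuation, emoji).
  const safeName = name.replace(/[^A-Za-z0-9]/g, '');
  return {
    json: {
      ...item.json,
      backupFilename: `${safeName || 'unnamed'}.json`,
    },
  };
});
```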
by Sk developer
🎥 Bulk TikTok Video Download Without Watermark to Google Drive

This workflow automates downloading TikTok videos and uploading them to Google Drive. It reads TikTok URLs from a Google Sheet, downloads each video using the TikTok Video Downloader (a tool for downloading TikTok videos without a watermark in HD quality), uploads it to Drive, makes it public, and updates the same sheet with the Drive link.

🔧 What It Does
- ✅ Manually triggered when ready to run.
- 📄 Reads TikTok URLs from a Google Sheet.
- 🔁 Loops through each URL one at a time.
- 🌐 Fetches video download links using the TikTok Video Downloader, a reliable TikTok video downloader without watermark.
- ⬇️ Downloads each video in high-definition (HD) format using the direct media link.
- ☁️ Uploads the video to Google Drive.
- 🔓 Sets public sharing permission for the video (see the sketch at the end of this description).
- ✏️ Updates the original Google Sheet with the public Drive URL.

📋 Google Sheet Example

Make sure your sheet has at least these columns:

| url | drive_link (to be auto-filled) |
|-------------------------------------|--------------------------------|
| https://www.tiktok.com/@user1... | (blank initially) |
| https://www.tiktok.com/@user2... | (blank initially) |

> The workflow reads from url and fills in drive_link after upload.

🧩 Nodes Used

| Node Name | Type | Purpose |
|------------------------------|-------------------|-------------------------------------------------------|
| When clicking ‘Execute’ | Manual Trigger | Starts the workflow manually |
| Get Data From Google Sheets | Google Sheets | Fetches rows (TikTok URLs) |
| Loop Over Items | Split In Batches | Iterates over each row |
| Call TikTok Downloader | HTTP Request | Gets video download link from TikTok Video Downloader |
| Wait | Wait | Optional delay to prevent overload |
| Download File | HTTP Request | Downloads HD video using media link |
| Upload File In Google Drive | Google Drive | Uploads the video to Google Drive |
| Set Public Permission | Google Drive | Makes the uploaded file publicly accessible |
| Update Row In Google Sheet | Google Sheets | Adds Drive link to the same row |
| Sleep | Wait | Small delay between each iteration |

📝 Requirements
- ✅ Google API credentials (Service Account) with access to Google Sheets and Google Drive
- 🔐 RapidAPI key for the TikTok Video Downloader, a TikTok video downloader without watermark (HD supported)
- 🗂 A Google Sheet with a url column containing TikTok video URLs

🧩 Challenges Solved

| ❗ Challenge | ✅ Solution |
|-------------|-------------|
| TikTok video URLs often have watermarks and low quality | Used TikTok Video Downloader API for HD + no watermark download links |
| No easy way to bulk download and organize TikToks | Automated fetching, downloading, and uploading using n8n + Google Drive |
| Manual video saving and re-uploading to Drive is time-consuming | Eliminated all manual steps with a fully automated workflow |
| Tracking which videos are already processed | Automatically updates the Google Sheet row with the final Drive link |
| Drive files are private by default | Automatically sets public sharing permission on uploaded videos |
| Risk of API rate limits or throttling | Added Wait nodes and batch processing to avoid overload |

🎁 Benefits

| 🌟 Benefit | 💬 Description |
|------------|----------------|
| 🚀 Saves Time | Fully automates a previously manual workflow |
| 🎥 High Quality Content | Videos downloaded are HD + watermark-free, ready for reuse or archives |
| 🔁 Reusable Setup | Can process unlimited TikTok URLs via the Google Sheet |
| 📊 Organized Output | Keeps track of source URL and uploaded Drive link in a single sheet |
| 🔐 Secure but Shareable | Drive links are auto-shared publicly while remaining under your control |
| 🔄 Scalable | Can be run daily, weekly, or triggered by new rows; completely scalable |
| 💸 Cost-Effective | No need for paid tools or manual freelancers; runs on n8n + free APIs |

💡 Use Cases
- Content curation from TikTok
- Archiving user-submitted TikToks
- Automating social-to-cloud workflows
- Bulk migration of video content
- Saving TikTok videos in HD without watermark for sharing or archiving

📌 Tips
- Replace the manual trigger with a Cron trigger for full automation.
- Use the TikTok Video Downloader responsibly and check API limits.
- Store metadata (e.g., uploader, hashtags) in additional Google Sheet columns.

This tool helps ensure you're always downloading high-quality TikTok videos without a watermark.
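For reference, the "Set Public Permission" step boils down to one Google Drive API call. The plain Node.js sketch below shows it with placeholder values; in the workflow itself the Google Drive node handles authentication for you:

```js
// Plain Node.js sketch (Node 18+) of the Drive API call behind "Set Public Permission".
// FILE_ID and ACCESS_TOKEN are placeholders; in the workflow, the Google Drive node
// supplies authentication and the file id comes from the upload step.
const FILE_ID = 'your-uploaded-file-id';
const ACCESS_TOKEN = 'your-oauth2-access-token';

const res = await fetch(
  `https://www.googleapis.com/drive/v3/files/${FILE_ID}/permissions`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      'Content-Type': 'application/json',
    },
    // "anyone with the link can view" -- this is what makes the Drive link shareable
    body: JSON.stringify({ role: 'reader', type: 'anyone' }),
  },
);
console.log(await res.json());
```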
by Jimleuk
This n8n template shows you how to create an MCP server out of your existing n8n workflows. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools.

Designing agent tools for outcomes rather than utility has long been a recommended practice of mine, and it applies well to building MCP servers; in short, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers! This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage, and run them indirectly.

How it works
- An MCP trigger is used and attaches 4 custom workflow tools to discover and manage existing workflows, and 1 custom workflow tool to execute them.
- We introduce the idea of "available" workflows, which the agent is allowed to use. This helps limit and avoid issues that come from trying to use every workflow, such as clashes or non-production workflows.
- The n8n node is a core node which taps into your n8n instance API and is able to retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp", and these are exposed through the "search workflows" tool.
- Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add workflow", "remove workflow", and "list workflows". The agent should be able to manage this autonomously.
- Our approach to letting the agent execute workflows is to use the Subworkflow trigger. The tricky part is figuring out the input schema for each one; this was eventually solved by pulling the schema out of the workflow's template JSON and adding it to the "available" workflow's description (sketched below).
- To pass parameters through the Subworkflow trigger, we use the passthrough method: incoming data is used when parameters are not explicitly set within the node.
- When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows should be preferred when answering queries or completing tasks.

How to use
- First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can use all workflows or filter in other ways.
- Next, ensure these workflows have Subworkflow triggers with an input schema set. This is how the MCP server will run them.
- Set the MCP server to "active", which turns on production mode and makes it available at the production URL.
- Use this production URL in your MCP client. For Claude Desktop, see the instructions here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- There is a small learning curve that will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, e.g. research and report, rather than just a collection of unrelated tools.

Requirements
- n8n API key to filter for selected workflows.
- n8n workflows with Subworkflow triggers!
- Redis for memory and tracking the "available" workflows.
- MCP client or agent for usage, such as Claude Desktop: https://claude.ai/download

Customising this workflow
- If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the executeTool to use HTTP requests for webhooks.
- Managing available workflows helps if you have many workflows where some may be too similar for the agent. If this isn't a problem for you, however, feel free to remove the concept of "available" workflows and let the agent discover and use all of them!
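The trickiest part mentioned above, pulling the Subworkflow trigger's input schema out of a workflow's JSON, could look roughly like this. Treat it as a sketch: the `workflowInputs.values` property path is an assumption that may differ between n8n versions, so verify it against your own exported workflow JSON:

```js
// Hypothetical sketch of extracting a Subworkflow (Execute Workflow) trigger's
// declared inputs from a workflow's JSON, so they can be appended to the
// "available" workflow description the agent reads.
const workflow = $input.first().json; // one workflow as returned by the n8n node

const trigger = (workflow.nodes ?? []).find(
  n => n.type === 'n8n-nodes-base.executeWorkflowTrigger',
);

// Collect declared input fields, if any, as "name (type)" pairs.
const inputs = (trigger?.parameters?.workflowInputs?.values ?? []).map(
  f => `${f.name} (${f.type ?? 'string'})`,
);

return [{
  json: {
    id: workflow.id,
    name: workflow.name,
    description: `Inputs: ${inputs.length ? inputs.join(', ') : 'none declared'}`,
  },
}];
```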
by Leonardo Grigorio
Want to see it in action? Watch the full breakdown here: 📺 Video Link

Template Description

This n8n workflow empowers you to query structured financial data from Google Sheets or CSV files using AI-generated SQL. Unlike traditional vector database solutions that falter with numerical queries, this template leverages PostgreSQL for efficient data storage and an AI agent to dynamically create optimized SQL queries from natural language inputs.

What It Does
- Retrieves data from Google Sheets or CSV files
- Infers the data schema and builds a PostgreSQL table
- Populates the table with your data
- Uses an AI agent to translate natural language questions into SQL queries
- Returns precise numerical results quickly and efficiently

Why Use This?
- No SQL knowledge required; the AI generates queries for you
- Bypasses the inefficiencies and costs of vector database approaches
- Scales effortlessly without overwhelming the language model
- Fully free and open-source

Setup Requirements

Pre-Conditions
- **PostgreSQL Database**: A running PostgreSQL instance (no specific extensions required beyond a standard installation).
- **Google Sheets Access**: A publicly accessible or shared Google Sheet URL with structured data (e.g., financial records). Need a starting point? Use this Sample Google Sheet Template.
- **n8n Instance**: A working n8n setup with access to the Google Drive and PostgreSQL nodes.

Step-by-Step Instructions
1. Add Your Google Sheets URL: Open the "Google Drive Trigger" node, replace the placeholder URL with your Google Sheet's link, and verify the sheet name matches your data source.
2. Configure PostgreSQL: Update the "PostgreSQL" nodes with your database credentials (host, database, user, password). The workflow automatically creates and populates the table based on your data schema.
3. Run the Workflow: Execute the workflow manually to set up the database. Once initialized, use the AI agent by asking questions like "How much did I sell last week?" or "What were the total sales for Product X in February?"
4. (Optional) Automate Updates: Add a "Schedule Trigger" node to sync your Google Sheets data with PostgreSQL on a regular basis.

How It Works
- **Schema Detection**: The workflow analyzes your Google Sheets or CSV data to infer its structure and create an appropriate PostgreSQL table (see the sketch below).
- **AI-Powered Queries**: An optimized AI agent converts your natural language questions into precise SQL queries, ensuring accurate results.
- **Efficient Retrieval**: By using PostgreSQL instead of vector-based methods, this template avoids common pitfalls like slow performance or inaccurate numerical outputs.

Tips for Success
- Ensure your Google Sheet or CSV has consistent column headers for smooth schema detection.
- Test with simple questions first to verify the AI agent's query generation.
- Check out the n8n Template Submission Guidelines for more best practices.
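To make the schema-detection idea concrete, here is a minimal, hypothetical Code-node sketch that inspects the incoming sheet rows and builds a CREATE TABLE statement. The table name and type rules are illustrative, not the template's actual code:

```js
// Hypothetical sketch of schema inference: look at the rows coming from Google
// Sheets and derive a PostgreSQL CREATE TABLE statement from them.
const rows = $input.all().map(i => i.json);
const tableName = 'sheet_data'; // illustrative name

const inferType = (values) => {
  const nonEmpty = values.filter(v => v !== null && v !== '');
  if (nonEmpty.every(v => /^-?\d+$/.test(String(v)))) return 'BIGINT';
  if (nonEmpty.every(v => !isNaN(parseFloat(v)))) return 'NUMERIC';
  if (nonEmpty.every(v => !isNaN(Date.parse(v)))) return 'DATE';
  return 'TEXT';
};

const columns = Object.keys(rows[0] ?? {}).map(header => {
  // Normalize headers into safe PostgreSQL identifiers.
  const safe = header.toLowerCase().replace(/[^a-z0-9_]/g, '_');
  return `"${safe}" ${inferType(rows.map(r => r[header]))}`;
});

const createSql = `CREATE TABLE IF NOT EXISTS ${tableName} (${columns.join(', ')});`;
return [{ json: { createSql } }];
```

The generated statement can then be passed to a PostgreSQL node, with the same rows inserted afterwards.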
by Custom Workflows AI
Introduction

The Content SEO Audit Workflow is a powerful automated solution that generates comprehensive SEO audit reports for websites. By combining the crawling capabilities of DataForSEO with the search performance metrics from Google Search Console, this workflow delivers actionable insights into content quality, technical SEO issues, and performance optimization opportunities. The workflow crawls up to 1,000 pages of a website, analyzes various SEO factors including metadata, content quality, internal linking, and search performance, and then generates a professional, branded HTML report that can be shared directly with clients. The entire process is automated, transforming what would typically be hours of manual analysis into a streamlined workflow that produces consistent, thorough results. This workflow bridges the gap between technical SEO auditing and practical, client-ready deliverables, making it an invaluable tool for SEO professionals and digital marketing agencies.

Who is this for?

This workflow is designed for SEO consultants, digital marketing agencies, and content strategists who need to perform comprehensive content audits for clients or their own websites. It's particularly valuable for professionals who:
- Regularly conduct SEO audits as part of their service offerings
- Need to provide branded, professional reports to clients
- Want to automate the time-consuming process of content analysis
- Require data-driven insights to inform content strategy decisions

Users should have basic familiarity with SEO concepts and metrics, as well as a basic understanding of how to set up API credentials in n8n. While no coding knowledge is required to run the workflow, users should be comfortable with configuring workflow parameters and following setup instructions.

What problem is this workflow solving?

Content audits are essential for SEO strategy but are traditionally labor-intensive and time-consuming. This workflow addresses several key challenges:
- Manual Data Collection: Gathering data from multiple sources (crawlers, Google Search Console, etc.) typically requires hours of work. This workflow automates the entire data collection process.
- Inconsistent Analysis: Manual audits can suffer from inconsistency in methodology. This workflow applies the same comprehensive analysis criteria to every page, ensuring thorough and consistent results.
- Report Generation: Creating professional, client-ready reports often requires additional design work after the analysis is complete. This workflow generates a fully branded HTML report automatically.
- Data Integration: Correlating technical SEO issues with actual search performance metrics is difficult when working with separate tools. This workflow seamlessly integrates crawl data with Google Search Console metrics.
- Scale Limitations: Manual audits become increasingly difficult with larger websites. This workflow can efficiently process up to 1,000 pages without additional effort.

What this workflow does

Overview: The Content SEO Audit Workflow crawls a specified website, analyzes its content for various SEO issues, retrieves performance data from Google Search Console, and generates a comprehensive HTML report. The workflow identifies issues in five key categories: status issues (404 errors, redirects), content quality (thin content, readability), metadata SEO (title/description issues), internal linking (orphan pages, excessive click depth), and performance (underperforming content). The final report includes executive summaries, detailed issue breakdowns, and actionable recommendations, all branded with your company's colors and logo.

Process:
1. Initial Configuration: The workflow begins by setting parameters including the target domain, crawl limits, company information, and branding colors.
2. Website Crawling: The workflow creates a crawl task in DataForSEO and periodically checks its status until completion.
3. Data Collection: Once crawling is complete, the workflow retrieves the raw audit data from DataForSEO, extracts all URLs with status code 200 (successful pages), queries the Google Search Console API for each URL to get clicks and impressions data, and identifies 404 and 301 pages and retrieves their source links.
4. Data Analysis: The workflow analyzes the collected data to identify issues including technical issues (404 errors, redirects, canonicalization problems), content issues (thin content, outdated content, readability problems), SEO metadata issues (missing/duplicate titles and descriptions, H1 problems), internal linking issues (orphan pages, excessive click depth, low internal links), and performance issues (underperforming pages based on GSC data).
5. Report Generation: Finally, the workflow calculates a health score based on the severity and quantity of issues, generates prioritized recommendations, creates a comprehensive HTML report with interactive tables and visualizations, customizes the report with your company's branding, and provides the report as a downloadable HTML file.

Setup

To set up this workflow, follow these steps:
1. Import the workflow: Download the JSON file and import it into your n8n instance.
2. Configure DataForSEO credentials: Create a DataForSEO account at https://app.dataforseo.com/api-access (they offer a free $1 credit for testing), add a new "Basic Auth" credential in n8n following the HTTP Request Authentication guide, and assign this credential to the "Create Task", "Check Task Status", "Get Raw Audit Data", and "Get Source URLs Data" nodes.
3. Configure Google Search Console credentials: Add a new "Google OAuth2 API" credential following the Google OAuth guide, ensure your Google account has access to the Google Search Console property you want to analyze, and assign this credential to the "Query GSC API" node.
4. Update the "Set Fields" node with:
   - dfs_domain: The website domain you want to audit
   - dfs_max_crawl_pages: Maximum number of pages to crawl (default: 1000)
   - dfs_enable_javascript: Whether to enable JavaScript rendering (default: false)
   - company_name: Your company name for the report branding
   - company_website: Your company website URL
   - company_logo_url: URL to your company logo
   - brand_primary_color: Your primary brand color (hex code)
   - brand_secondary_color: Your secondary brand color (hex code)
   - gsc_property_type: Set to "domain" or "url" depending on your Google Search Console property type
5. Run the workflow: Click "Start" and wait for it to complete (approximately 20 minutes for 500 pages).
6. Download the report: Once complete, download the HTML file from the "Download Report" node.

How to customize this workflow to your needs

This workflow can be adapted in several ways to better suit your specific requirements:
- Adjust crawl parameters: Modify the "Set Fields" node to change the maximum number of pages to crawl (dfs_max_crawl_pages; this workflow supports up to 1,000 pages) and whether to enable JavaScript rendering for JavaScript-heavy sites (dfs_enable_javascript).
- Customize issue detection thresholds: In the "Build Report Structure" code node, you can modify the word count threshold for thin content detection (currently 1500 words), the click depth threshold (currently flags pages deeper than 4 clicks), title and description length parameters (currently 40-60 characters for titles, 70-155 for descriptions), and readability score thresholds (currently flags Flesch-Kincaid scores below 55). A sketch of these checks follows this list.
- Modify the report design: In the "Generate HTML Report" code node, you can adjust the HTML/CSS to change the report layout and styling, add or remove sections from the report, change the recommendations logic, and modify the health score calculation algorithm.
- Add additional data sources: You could extend the workflow by adding PageSpeed Insights data for performance metrics, incorporating backlink data from other APIs, or adding keyword ranking data from rank tracking APIs.
- Implement automated delivery: Add nodes after the "Download Report" node to send the report directly to clients via email, upload it to cloud storage, or create a PDF version of the report.
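The threshold checks listed above can be pictured as a simple per-page function. This is an illustrative sketch, not the actual "Build Report Structure" code; the field names are assumptions to adapt to your crawl data:

```js
// Illustrative sketch of per-page issue checks using the thresholds described above.
// Field names (wordCount, clickDepth, titleLength, descriptionLength, fleschScore)
// are assumptions, not the node's actual variable names.
function findIssues(page) {
  const issues = [];

  if (page.wordCount < 1500) issues.push('thin_content');
  if (page.clickDepth > 4) issues.push('excessive_click_depth');
  if (page.titleLength < 40 || page.titleLength > 60) issues.push('title_length');
  if (page.descriptionLength < 70 || page.descriptionLength > 155) issues.push('description_length');
  if (page.fleschScore < 55) issues.push('low_readability');

  return issues;
}

// Example usage inside an n8n Code node:
return $input.all().map(item => ({
  json: { ...item.json, issues: findIssues(item.json) },
}));
```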
by Ranjan Dailata
Who this is for?

The Brand Content Extract, Summarization & Sentiment Analysis workflow is designed for professionals and teams who need to monitor, understand, and act on public brand perception at scale. It is ideal for:
- Brand Managers - looking to track how their brand is portrayed online.
- Marketing Analysts - seeking insights from competitor and industry content.
- PR & Communications Teams - evaluating media tone and potential reputation risks.
- Data Scientists & AI Developers - automating content intelligence pipelines.
- Growth Hackers - performing large-scale web listening for campaign optimization.

What problem is this workflow solving?

Manually tracking and interpreting how your brand is mentioned across blogs, news sites, or product reviews is labor-intensive and unscalable. Traditional scraping tools return raw data but lack insights like summarization and sentiment analysis. This workflow addresses:
- Scalable extraction of brand-related content using Bright Data's infrastructure.
- Textual data extraction for easy decision-making or alerting.
- Automated summarization of verbose or multi-paragraph articles using Gemini.
- Sentiment analysis of how a brand is being portrayed.

What this workflow does
- Receives input: a brand URL for data extraction and analysis.
- Uses Bright Data's Web Unlocker to extract content from relevant sites (see the request sketch below).
- Cleans and preprocesses the scraped content for readability.
- Sends the content to Google Gemini to produce enriched results, including the cleaned content, a summary, and sentiment analysis.
- Sends the response to a target system via a webhook notification.
- Persists the response to disk.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node to set the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- **Update Source**: Change the workflow input to read from Google Sheets or Airtable to dynamically track multiple brands or topics.
- **AI Prompt Customization**: Tailor the Gemini prompts for summary length (brief vs. detailed), detailed sentiment with a custom structured data format, and brand-specific tone detection (e.g., trust, excitement, dissatisfaction).
- **Output Destinations**: Configure the output node to send the responses to various platforms, such as Slack, CRM systems, or databases.
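For orientation, the Web Unlocker call made by the workflow's HTTP Request node looks roughly like the plain Node.js sketch below. The endpoint and body fields follow Bright Data's documented request API, but verify them against your account and replace the placeholders with your own zone name, token, and target URL:

```js
// Plain Node.js sketch of a Bright Data Web Unlocker request.
// ZONE, TOKEN, and the target URL are placeholders to replace with your values.
const TOKEN = 'your-web-unlocker-token';

const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',       // the zone created in setup step 2
    url: 'https://example.com/brand-article',
    format: 'raw',                        // return the page body as-is
  }),
});

const html = await res.text();
console.log(html.slice(0, 500)); // raw page content, ready for cleaning
```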
by Oneclick AI Squad
In this guide, we'll walk you through setting up a smart workflow that triggers on new restaurant orders, extracts and formats customer and dish details from Google Sheets, uses Gemini AI to recommend dishes or offers, and sends suggestions via Telegram. Ready to automate your order processing and enhance customer experience? Let's dive in!

What's the Goal?
- Automatically trigger the workflow when a new order is placed.
- Extract and format customer information and order details from Google Sheets.
- Use Gemini AI to analyze orders and recommend dishes or offers.
- Send personalized suggestions to customers via Telegram.
- Enable real-time order processing and customer engagement.

By the end, you'll have a smart system that processes orders and suggests items effortlessly.

Why Does It Matter?

Manual order processing and suggestion generation are inefficient and miss opportunities. Here's why this workflow is a game changer:
- **Real-Time Efficiency**: Instantly process orders and suggest items.
- **Personalized Engagement**: AI-driven suggestions enhance customer satisfaction.
- **Time-Saving Automation**: Reduce manual effort in order management.
- **Improved Sales**: Targeted recommendations can boost order value.

Think of it as your intelligent assistant for orders and customer delight.

How It Works

Here's the step-by-step magic behind the automation:
1. New Order Trigger: Trigger the workflow when a new order is detected (e.g., via a form submission).
2. Extract & Format Order: Extract and format dish ordering details from the customer order details sheet for further processing.
3. Save Customer Info: Save customer information (e.g., ID, name, mobile number) from the customer details sheet.
4. Save Dish Info: Save dish details (e.g., name, quantity, price) from the customer order details sheet.
5. Prepare Dish Details for AI: Prepare the dish details for AI analysis to generate recommendations.
6. Clean Data for Input to Improve AI Understanding: Clean and structure the data to enhance AI comprehension (a sketch of this step follows below).
7. Use Gemini AI to Recommend Dishes or Offers: Use Gemini AI (via the Google Chat Model and Think Tool) to recommend dishes or offers based on order data.
8. Format AI Suggestions: Format the AI-generated suggestions into a Telegram-friendly message.
9. Send Suggestions via Telegram: Send the formatted suggestions directly to the customer via Telegram.

How to Use the Workflow?

Importing a workflow in n8n is a straightforward process that lets you use pre-built workflows to save time. Below is a step-by-step guide to importing the Smart Restaurant Order & Suggestion System workflow in n8n.

Steps to Import a Workflow in n8n
1. Obtain the workflow JSON: Workflows are shared as JSON files or code snippets, e.g., from the n8n community, a colleague, or exported from another n8n instance. Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or copied text.
2. Access the n8n workflow editor: Log in to n8n (via n8n Cloud or a self-hosted instance), navigate to the Workflows tab in the n8n dashboard, and click Add Workflow to create a blank workflow.
3. Import the workflow:
   - Option 1 - Import via JSON code (clipboard): Click the three dots (⋯) in the top-right corner to open the menu, select Import from Clipboard, paste the JSON code into the text box, and click Import to load the workflow.
   - Option 2 - Import via JSON file: Click the three dots (⋯) in the top-right corner, select Import from File, choose the .json file from your computer, and click Open to import.

Setup Notes
- **Google Sheet columns**:
  - Customer Details Sheet: Customer id, Customer name, Customer mobile number (e.g., CUST-JW4Z8Y, ajay, 9898989898; CUST-VEITPW, akash, 9898976898).
  - Customer Order Details Sheet: Customer id, Dish name, Dish quantity, Per unit price, Actual price (e.g., CUST-JW4Z8Y, Tandoori Chicken, 1, 250, 250; CUST-VEITPW, Masala Dosa, 1, 150, 150).
- **Google Sheets credentials**: Configure OAuth2 settings in the extract and save nodes with your Google Sheet ID and credentials.
- **Gemini AI**: Set up the Gemini AI node with Google Chat Model and Think Tool credentials.
- **Telegram integration**: Authorize the Send Suggestions node with Telegram API credentials and the customer's chat ID or mobile number.
- **Trigger setup**: Configure the New Order Trigger node to detect new orders (e.g., via a form or webhook).
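Step 6 (cleaning the data for AI input) can be sketched as a small Code node that merges the customer and order rows into one compact prompt. The field names mirror the sheet columns in the Setup Notes, but treat the whole snippet as illustrative rather than the template's actual code:

```js
// Hypothetical "Clean Data for Input" step: build one compact prompt for Gemini
// from the customer and dish rows. Field names follow the sheet columns above.
const items = $input.all().map(i => i.json);

const customerId = items[0]?.['Customer id'] ?? 'unknown';
const customerName = items[0]?.['Customer name'] ?? 'customer';

const orderLines = items.map(
  row => `- ${row['Dish name']} x${row['Dish quantity']} @ ${row['Per unit price']} = ${row['Actual price']}`,
);

const prompt = [
  `Customer ${customerName} (${customerId}) just ordered:`,
  ...orderLines,
  'Suggest one complementary dish or offer in a short, friendly Telegram message.',
].join('\n');

return [{ json: { customerId, customerName, prompt } }];
```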
by Belgacem Dhiflaoui
What Problem Does This Solve?

This workflow automates the end-to-end process of capturing company information from Google Drive, storing it semantically in Pinecone, and interacting with users via an intelligent AI chatbot. It eliminates the need for manual customer service, lead tracking, and company information retrieval, offering a fully automated, intelligent engagement system.

Perfect for teams that need to:
- Maintain accurate, AI-readable company knowledge bases
- Answer customer inquiries 24/7 using AI
- Automatically collect and log lead information
- Embed a chatbot into their website to assist potential customers

Target audience: sales teams, business owners, marketing departments, customer support reps, startup founders, or anyone looking to automate AI-powered lead generation and customer engagement.

What Does It Do?

Part One - Knowledge Ingestion
- **Monitors** a Google Drive folder for new .txt or document uploads.
- **Downloads** the document and splits the content into manageable chunks using a recursive character splitter (see the sketch below).
- **Generates** embeddings via OpenAI.
- **Stores** the embeddings in a Pinecone vector database under the Q&A namespace.
- **Purpose:** This knowledge base is later used to answer business-related questions through AI.

Part Two - AI Chatbot Engagement
- **Listens** for incoming chat messages using n8n's chatTrigger node.
- **Activates an AI agent** (powered by GPT-4o) to respond to inquiries about business hours, services, products, or general company info.
- **Retrieves knowledge** using a vector search tool connected to Pinecone (newCompany_q).
- **Captures leads:** If a user shows interest, the AI collects their name, email, phone number, and specific interest and stores them in a connected Google Sheet automatically.

Key Features
- 🔄 Google Drive integration for real-time file processing
- 🧠 OpenAI embedding + Pinecone vector store for semantic memory
- 🤖 LangChain agent with tool-based reasoning
- 🗃️ Google Sheets integration for dynamic lead storage
- 💬 GPT-4o model for accurate, human-like conversation
- ⚙️ Modular design to expand into CRM, Notion, or email workflows
- 🌐 Website-ready chatbot endpoint

🧰 Setup Instructions

Prerequisites:
- n8n instance (cloud or self-hosted)
- Google Drive account (for uploading company data)
- Pinecone account (for vector storage)
- OpenAI API key
- Google Sheets access with OAuth2 credentials

📦 Installation Steps
1. Import the workflow: Upload the JSON files into your n8n instance.
2. Configure credentials: In n8n > Credentials, connect Google Drive, OpenAI, Pinecone, and Google Sheets.
3. Set the Pinecone index and namespace. Example: Index: companyName, Namespace: Q&A.
4. Test the flow: Upload a sample .txt or PDF file to the monitored Drive folder, send a message to the chatbot (e.g., "What are your opening hours?"), and check the Google Sheet for collected user info.

How It Works (Behind the Scenes)

Part 1 - Data Preparation:
- Company files are uploaded to Google Drive.
- Each file is detected, downloaded, and chunked.
- Embeddings are created using OpenAI.
- Data is stored in Pinecone for semantic retrieval.

Part 2 - Chat Interaction:
- A chat message triggers the workflow via webhook.
- The AI agent interprets the intent and accesses company data via newCompany_q.
- If lead data is gathered, it is appended to a Google Sheet using the AI-parsed values.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
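To illustrate what the recursive character splitter does conceptually, here is a simplified, stand-alone chunking sketch. The chunk size and overlap are illustrative defaults, not the template's configured values:

```js
// Simplified sketch of the chunking idea: split on paragraph boundaries first,
// then fall back to fixed-size slices with a small overlap so context isn't lost
// between chunks.
function chunkText(text, chunkSize = 1000, overlap = 100) {
  const paragraphs = text.split(/\n\s*\n/);
  const chunks = [];
  let current = '';

  for (const para of paragraphs) {
    if ((current + '\n\n' + para).length <= chunkSize) {
      current = current ? current + '\n\n' + para : para;
    } else {
      if (current) chunks.push(current);
      if (para.length > chunkSize) {
        // Oversized paragraph: hard-split it into fixed-size slices with overlap.
        for (let i = 0; i < para.length; i += chunkSize - overlap) {
          chunks.push(para.slice(i, i + chunkSize));
        }
        current = '';
      } else {
        current = para;
      }
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

return chunkText($input.first().json.text ?? '').map(c => ({ json: { chunk: c } }));
```

Each chunk is then embedded with OpenAI and upserted into the Pinecone namespace for later retrieval.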
by Saswat Saubhagya Rout
📝 Use Case

This n8n workflow automates the creation and publication of technical blog posts based on a list of topics stored in Google Sheets. It fetches context using Tavily and Wikipedia, generates Markdown-formatted content with Gemini AI, commits it to a GitHub repository, and updates a Jekyll-powered blog, all without manual intervention. It is ideal for developers, bloggers, or content teams who want to streamline technical content creation and publishing.

⚙️ Setup Instructions

🔑 Prerequisites
- n8n (cloud or self-hosted)
- Tavily API key
- Google Sheets with blog topics
- Gemini (Google Palm) API key
- GitHub repository (Jekyll enabled)
- GitHub OAuth2 credentials
- Google OAuth2 credentials

🧩 Setup Steps
1. Import the workflow JSON into your n8n instance.
2. Set up the following credentials in n8n: Tavily API, Google Sheets OAuth2, Google Palm/Gemini AI, and GitHub OAuth2.
3. Prepare your Google Sheet with the columns Title, status, and row_number, and leave status blank for topics to be picked up.
4. Configure the GitHub repo and _posts/ path and the Jekyll setup (front matter, _config.yml, GitHub Pages), and adjust prompt/custom parameters if needed.
5. Enable and deploy the workflow. Schedule it daily or trigger it manually.

🔄 Workflow Details

| Node | Function |
|------|----------|
| Schedule Trigger | Triggers the flow at a set interval |
| Google Sheets (Get Topic) | Fetches the next incomplete blog topic |
| Extract Topic | Parses topic text from the sheet |
| Tavily Search | Gathers up-to-date content related to the topic |
| Wikipedia Tool | Optionally adds more context or images |
| Summarize Results | Formats the context for the AI |
| Gemini AI Agent (LangChain) | Generates a Markdown blog post with YAML front matter |
| Set File Parameters | Prepares the filename, content, and commit message (see the sketch below) |
| GitHub Commit | Uploads the .md file to the _posts/ directory |
| Update Google Sheet | Marks the topic as done after a successful commit |

🛠️ Customization Options
- Change the LLM prompt (e.g. tone, depth, format).
- Use OpenAI instead of Gemini by switching nodes.
- Modify the filename pattern or GitHub repo path.
- Add Slack/Discord notifications after publishing.
- Extend the flow to upload images or embed YouTube links.

⚠️ Community Nodes Used

This workflow uses the following community nodes:
- @tavily/n8n-nodes-tavily.tavily – for deep search

> ⚠️ Ensure these are installed and enabled in your n8n instance.

💡 Pro Tips
- Use GitHub Actions to trigger an automatic Jekyll build post-commit.
- Structure blog posts with front matter, headings, and a table of contents for SEO.
- Set the Schedule Trigger to a fixed daily time to keep content flowing.
- Enhance formatting in the AI output using code blocks, images, and lists.

✅ Example Output

title: "How LLMs Are Changing Web Development"
date: "2025-07-25"
categories: [webdev, AI]
tags: [LLM, Gemini, n8n, automation]
excerpt: "Learn how LLMs like Gemini are transforming how we generate and deploy developer content."
author: "Saswat Saubhagya"

Table of Contents
- Introduction
- Understanding LLMs
- Use Cases in Web Development
- Challenges
- Conclusion
...
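The "Set File Parameters" step essentially turns the topic title into a Jekyll-compatible path and hands the content and commit message to the GitHub node. A hypothetical sketch, with input field names assumed from the sheet columns and the AI output:

```js
// Hypothetical "Set File Parameters" step: derive a Jekyll-style filename
// (_posts/YYYY-MM-DD-slug.md) and pass the Markdown content and a commit
// message on to the GitHub node. "Title" and "content" are assumed field names.
const title = $json.Title ?? 'untitled-post';
const markdown = $json.content ?? '';

const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/^-+|-+$/g, '');

return [{
  json: {
    filePath: `_posts/${date}-${slug}.md`,
    fileContent: markdown,
    commitMessage: `Add blog post: ${title}`,
  },
}];
```

Jekyll only publishes files whose names follow this date-slug pattern, which is why the filename is derived here rather than taken verbatim from the sheet.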
by Joseph
YouTube Transcript Extraction Workflow

This n8n workflow extracts and processes transcripts from YouTube videos using the YouTube Transcript API on RapidAPI. It allows users to retrieve subtitles from YouTube videos, clean them up, and return structured transcript data for further processing.

Table of Contents
- Problem Statement & Target Audience
- Pre-conditions & API Requirements
- Step-by-Step Workflow Explanation
- Customization Guide
- How to Set Up This Workflow

Problem Statement & Target Audience

Who is this for? This workflow is ideal for content creators, researchers, and developers who need to:
- **Extract** subtitles from YouTube videos automatically.
- **Format and clean** transcript data for readability.
- **Use** transcripts for summarization, content repurposing, or language analysis.

Pre-conditions & API Requirements

API required:
- **YouTube Transcript API** (RapidAPI)

n8n setup prerequisites:
- A running n8n instance (Installation Guide)
- A RapidAPI account to access the YouTube Transcript API
- An API key from RapidAPI to authenticate requests

Step-by-Step Workflow Explanation

1. Input YouTube Video URL (Trigger): Provides a simple input form where users enter a YouTube video URL.
2. HTTP Request Node (Retrieve Transcript Data): Makes a POST request to the YouTube Transcript API via RapidAPI, passes the video URL received from the input form, and uses an environment variable to store the API key securely.
3. Function Node (Process Transcript): Receives the API response containing the raw transcript, then processes and cleans it by removing unwanted characters and formatting the text for readability. It handles errors when no transcript is available and outputs both the raw and cleaned transcript for further use (see the sketch below).
4. Set Field Node (Response Formatting): Structures the processed transcript data into a user-friendly format and returns the final transcript data to the client.

Customization Guide

1. Modify transcript cleaning rules: Update the Function node to apply custom text processing, such as removing timestamps or changing the output format (e.g., JSON, plain text).
2. Store transcripts in a database: Add a database node (e.g., MySQL, PostgreSQL, or Firebase) to save transcripts.
3. Generate summaries from transcripts: Integrate AI services (e.g., OpenAI, Google Gemini) to summarize transcripts.
4. Convert transcripts into speech: Use the ElevenLabs API to generate an AI-powered voiceover from transcripts.

How to Set Up This Workflow

Step 1: Import the workflow into n8n. Download or copy the workflow JSON file and import it into your n8n instance.
Step 2: Set up the API key. Sign up for the YouTube Transcript API, subscribe to the API, and paste your API key where the "your_api_key" placeholder is.
Step 3: Activate the workflow. Start the workflow in n8n, enter a YouTube video URL in the input form, and the workflow will return a cleaned transcript.

This workflow ensures seamless YouTube transcript extraction and processing with minimal manual effort. 🚀
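The transcript-cleaning logic in step 3 might look like the following sketch. The response shape (an array of caption segments with a `text` field) is an assumption; adjust the property names to whatever the RapidAPI endpoint actually returns:

```js
// Illustrative sketch of the transcript-cleaning Function node. The "transcript"
// array and its "text" field are assumed response properties, not guaranteed.
const response = $input.first().json;
const segments = Array.isArray(response.transcript) ? response.transcript : [];

if (segments.length === 0) {
  // No transcript available (e.g., subtitles disabled on the video).
  return [{ json: { error: 'No transcript available for this video.' } }];
}

const rawTranscript = segments.map(s => s.text ?? '').join(' ');

const cleanedTranscript = rawTranscript
  .replace(/\[.*?\]/g, '')       // drop cues like [Music] or [Applause]
  .replace(/&amp;/g, '&')        // decode a common HTML entity
  .replace(/\s+/g, ' ')          // collapse whitespace
  .trim();

return [{ json: { rawTranscript, cleanedTranscript } }];
```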