by Mario
## Purpose

This workflow creates a versioned backup of an entire Clockify workspace, split into monthly reports.

## How it works

- This backup routine runs daily by default.
- The Clockify reports API endpoint is used to fetch all time-entry data from the workspace.
- A report file is retrieved for every month, starting with the current one and going back 3 months in total by default.
- If any report changed during the day, the corresponding file is updated in GitHub.

## Prerequisites

- Create a private GitHub repository.
- Create credentials for both Clockify and GitHub (make sure to grant permissions for read and write operations).

## Setup

1. Clone the workflow and assign the corresponding credentials.
2. Follow the instructions given in the yellow sticky notes.
3. Activate the workflow.
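For reference, the per-month fetch roughly corresponds to a request like the sketch below. The endpoint shape follows Clockify's public Reports API; the helper name, the `CLOCKIFY_API_KEY` environment variable, and the paging values are illustrative assumptions, not copied from the workflow.

```typescript
// Sketch: fetch one month's detailed report from Clockify's Reports API.
const fetchMonthlyReport = async (workspaceId: string, monthsBack: number) => {
  const start = new Date();
  start.setUTCMonth(start.getUTCMonth() - monthsBack, 1); // first day of target month
  start.setUTCHours(0, 0, 0, 0);
  const end = new Date(start);
  end.setUTCMonth(end.getUTCMonth() + 1); // exclusive upper bound: first day of next month

  const res = await fetch(
    `https://reports.api.clockify.me/v1/workspaces/${workspaceId}/reports/detailed`,
    {
      method: "POST",
      headers: {
        "X-Api-Key": process.env.CLOCKIFY_API_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        dateRangeStart: start.toISOString(),
        dateRangeEnd: end.toISOString(),
        detailedFilter: { page: 1, pageSize: 1000 },
      }),
    }
  );
  return res.json(); // one report object per month, ready to serialize and commit to GitHub
};
```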
by AlexAy
## Who is this workflow template for?

This workflow template is perfect for freelancers, small business owners, accounting teams, or anyone responsible for managing and recording invoices regularly. If you deal with multiple invoices and spend considerable time manually entering invoice data into a database, this automation will significantly simplify your daily operations and reduce potential errors.

## What this workflow does

The workflow automates the entire invoice logging process. It continuously monitors a designated Google Drive folder every minute for new PDF invoice uploads. Once a new invoice is detected, it is automatically converted from PDF to an image format using the ILovePDF API. After conversion, Google's Gemini AI analyzes the image, intelligently extracting essential details such as vendor name, item description, invoice amount, invoice date, payment date, and bank reference numbers. Finally, this structured data is automatically recorded in an Airtable database (or optionally in a Google Sheet), ensuring organized, accessible records.

## Detailed Workflow Explanation

- **Step 1: Invoice Detection** – Monitors Google Drive for newly uploaded PDF invoices.
- **Step 2: PDF to Image Conversion** – Converts PDFs into images using ILovePDF.
- **Step 3: Data Extraction via Gemini AI** – Uses Gemini AI to analyze the invoice image, extracting data such as Vendor, Description, Amount, Invoice Date, Paid Date, and Bank Reference. Provides clear descriptions even when original invoice descriptions are vague or missing by analyzing vendor context.
- **Step 4: Structured Data Storage** – Automatically sends extracted data to Airtable or Google Sheets.
- **Step 5: File Management** – Moves processed PDF files into a separate "Done" folder to clearly differentiate between processed and unprocessed invoices.

## Step-by-Step Setup Instructions

1. **Set Up Google Drive:** Log in to Google Drive and create two folders: one named *Invoices* (for incoming PDF files) and one named *Processed* (for processed files).
2. **Obtain API Credentials:**
   - **ILovePDF API:** Sign up at ILovePDF Developers and retrieve your API key from your account dashboard.
   - **Google Gemini AI API:** Register at Google AI and generate an API key.
3. **Airtable Database Preparation:** Create an Airtable base with the following columns: Vendor (Text), Description (Text), Amount (Number or Text), Invoice Date (Date), Paid Date (Date), Bank Reference (Text).
4. **Import and Configure Workflow in n8n:** Import the provided workflow JSON file into your n8n instance. Connect your Google Drive, ILovePDF, Google Gemini AI, and Airtable accounts by entering your credentials in their respective nodes.
5. **Adjust Workflow Settings:** In the Google Drive nodes, ensure your newly created *Invoices* and *Processed* folders are correctly selected. Update the ILovePDF public key in the appropriate HTTP Request node. Customize the Gemini AI prompt to refine or expand data extraction according to your specific needs.
6. **Testing Your Setup:** Upload a sample PDF invoice into the *Invoices* folder. Execute the workflow by clicking *Test Workflow* in n8n and verify that data extraction and Airtable logging operate correctly.
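To illustrate what the Gemini extraction step does conceptually, here is a hedged sketch using Gemini's public `generateContent` REST endpoint. The model name, prompt wording, and response handling are assumptions that mirror the workflow's intent; the actual workflow performs this through n8n nodes rather than raw code.

```typescript
// Illustrative sketch of invoice-field extraction with the Gemini REST API.
const extractInvoiceFields = async (imageBase64: string) => {
  const res = await fetch(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent" +
      `?key=${process.env.GEMINI_API_KEY}`, // assumed env var for your API key
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{
          parts: [
            { text: "Extract Vendor, Description, Amount, Invoice Date, Paid Date and Bank Reference from this invoice. Respond with JSON only." },
            { inline_data: { mime_type: "image/jpeg", data: imageBase64 } },
          ],
        }],
      }),
    }
  );
  const body = await res.json();
  // The model's text reply is expected to be a JSON object with the six fields.
  return JSON.parse(body.candidates[0].content.parts[0].text);
};
```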
## Airtable Column Specifications

Ensure your Airtable includes the following structure:

- **Vendor**: Single Line Text
- **Description**: Single Line Text
- **Amount**: Currency or Single Line Text
- **Invoice Date**: Date (formatted as YYYY-MM-DD)
- **Paid Date**: Date (formatted as YYYY-MM-DD)
- **Bank Reference**: Single Line Text

## How to Customize the Workflow

- **System Prompt:** Adjust the AI instructions by modifying the prompt text to focus on additional or fewer invoice details.
- **Structured Output Parser:** Modify the JSON schema in the parser node to match the structure and data points your project specifically requires (an example schema is shown at the end of this section).

By following these instructions, you'll have a fully automated, reliable system for handling and logging invoice data, significantly enhancing your productivity.
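As a starting point for the Structured Output Parser node, a schema matching the Airtable columns above could look like the following. The property names and the `required` list are assumptions derived from this template's column list, not the exact schema shipped with the workflow.

```typescript
// A possible JSON schema for the Structured Output Parser node.
const invoiceSchema = {
  type: "object",
  properties: {
    vendor: { type: "string" },
    description: { type: "string" },
    amount: { type: "string" },
    invoice_date: { type: "string", description: "YYYY-MM-DD" },
    paid_date: { type: "string", description: "YYYY-MM-DD" },
    bank_reference: { type: "string" },
  },
  required: ["vendor", "amount", "invoice_date"],
};
```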
by Robert Breen
## Extract Local Business Contacts with Google Sheets, SerpAPI & GPT‑4o

Status: Ready for Use ✅

Disclaimer: This workflow relies on community nodes that are not part of n8n's core package. Install the following from n8n → Community Nodes before running:

- **n8n-nodes-langchain**
- **n8n-nodes-openai** (Structured Output Parser)
- **n8n-nodes-apify**

### 📝 Description

This n8n workflow automates discovery of local-business contact details by search term and location, then enriches the results with publicly listed email addresses using GPT‑4o.

### 🔑 Key Features

**🔗 Google Sheets Integration**
- Reads search terms and locations from a Google Sheet.
- Processes only rows that are not marked Complete, preventing duplicates.

**🗺️ Google Maps Search via SerpAPI**
- Queries Google Maps through SerpAPI for every search-term-and-location pair.
- Retrieves the following fields: business name, website, street address, and phone number.

**🧠 Website Scraping & Email Extraction**
- Scrapes the business homepage content with Apify's Fast Website Content Crawler.
- Sends the scraped HTML to a GPT‑4o AI Agent.
- Extracts any publicly listed email address.
- Returns a clean, structured JSON object for downstream use.

**💾 Data Storage & Tracking**
- Writes every result to a Results tab in the same Google Sheet.
- Marks the corresponding row in the Searches tab as Complete once finished.

**🧱 Extensible Design**
The workflow uses modular sub-workflows and AI agents. You can easily extend it to add:
- Phone-number verification with Twilio
- Social-media enrichment with Clearbit
- Exports to HubSpot, Salesforce, Airtable, PostgreSQL, or CSV files

### 📄 Google Sheet Setup

Create a Searches tab with these exact columns (one header row):

Search | Area | Area Name | Complete

Create a Results tab with these columns:

title | website | address | phone | Search | Search Name | Area | email (Manual Entry)

### ⚙️ Prerequisites

- Google Cloud Project with Google Sheets API and Google Drive API enabled
- SerpAPI account (free trial or paid) – obtain an API key
- Apify account (free trial or paid) with the Fast Website Content Crawler actor installed
- OpenAI account with an API key that can access GPT‑4o models

### 🚀 Setup Instructions

1. **Copy the Google Sheet.** Make a personal copy of the template sheet and ensure the tab names are Searches and Results: https://docs.google.com/spreadsheets/d/1QgcVMlXRlM_5ZFFUHr6bVK-93Tzia9XseTX03ZYnowI/edit?usp=sharing
2. **Configure Google Sheets nodes in n8n.** Open the workflow, update the nodes Extract Search Terms and Save Emails to Sheet to point at your copied sheet, and authenticate using Google OAuth2 credentials that have access to the sheet.
3. **Add SerpAPI credentials.** Sign in at <https://serpapi.com> and copy your API key. In the Search Google Maps node, create a new credential and paste the key.
4. **Set up Apify.** Sign up at <https://apify.com> and add the Fast Website Content Crawler actor to your account. In the Scrape Web Page HTTP node, append ?token=YOUR_API_KEY to the actor URL.
5. **Add your OpenAI API key.** Go to <https://platform.openai.com>, generate an API key, and add it to the AI Agent and OpenAI Chat Model node credentials.

### ✅ Running the Workflow

Click Execute Workflow in n8n. For each unprocessed row in the Searches tab, the automation will:

1. Retrieve business information from Google Maps via SerpAPI (see the request sketch below).
2. Scrape the business website using Apify.
3. Use GPT‑4o to extract a public email address.
4. Write all collected data to the Results tab.
5. Mark the original row as Complete.
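For orientation, the Search Google Maps node's lookup is roughly equivalent to this sketch using SerpAPI's `google_maps` engine. The helper name, the `SERPAPI_KEY` environment variable, and the specific result fields pulled out are illustrative.

```typescript
// Sketch: query SerpAPI's Google Maps engine for one search-term/area pair.
const searchGoogleMaps = async (search: string, area: string) => {
  const params = new URLSearchParams({
    engine: "google_maps",
    q: `${search} ${area}`,
    api_key: process.env.SERPAPI_KEY!,
  });
  const res = await fetch(`https://serpapi.com/search.json?${params}`);
  const data = await res.json();
  // Map SerpAPI's local results into the Results-tab columns.
  return (data.local_results ?? []).map((r: any) => ({
    title: r.title,
    website: r.website,
    address: r.address,
    phone: r.phone,
  }));
};
```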
### 🧩 Example Use Cases

- Build highly targeted lead lists for sales and marketing outreach.
- Compile local business directories for regional websites or apps.
- Automate contact-information collection for lead-generation campaigns and reduce manual data entry.

### 🤝 Connect with Me

I'm Robert Breen, founder of Ynteractive — a consulting firm that helps businesses automate operations using n8n, AI agents, and custom workflows. I've helped clients build everything from intelligent chatbots to complex sales automations, and I'm always excited to collaborate or support new projects. If you found this workflow helpful or want to talk through an idea, I'd love to hear from you.

Links
- 🌐 Website: https://www.ynteractive.com
- 📺 YouTube: @ynteractivetraining
- 💼 LinkedIn: https://www.linkedin.com/in/robert-breen
- 📬 Email: rbreen@ynteractive.com
by Audun
## Who is this for?

- Security professionals
- Developers
- Individuals interested in data breach awareness

## Use Case

- Automated monitoring for new breaches
- Proactive identity protection
- Demonstration of a simple cache mechanism

## What this workflow does

- Checks the Have I Been Pwned API every 15 minutes for the latest breaches.
- Compares new breach data against previously notified breaches.
- Demonstrates a simple cache mechanism to track previously seen breaches.

## How the Cache Functionality Works

- **Read from Cache**: Retrieves the last known breach from cache.json to avoid redundant alerts for the same breach.
- **Compare Against Current Breach**: The workflow checks if the latest fetched breach differs from the cached one.
- **Update the Cache**: If a new breach is detected, it updates cache.json with the latest breach data.

A minimal code sketch of this read–compare–update loop appears at the end of this section.

## Setup instructions

- The endpoint used in this workflow does not require an API key.
- Add your desired alert mechanism in the red box attached to the New breach node.

## How to customize this workflow to your needs

- **Modify Notification Settings**: Tailor where alerts are sent (email, Slack, etc.) by adding the desired node after the New breach node. That node receives all the data from the breach, so it is easily available. You can choose from a variety of n8n nodes to send alerts when a new breach is detected. Below are a few common options you might consider adding after the New breach node:
  - **Email Node** – What it does: Sends an email notification to one or more recipients. Use case: Great for simple alerts to your inbox or a team distribution list. Customization: You can include breach details in the subject or body of the email, using data from the New breach node.
  - **Slack Node** – What it does: Sends a message to a Slack channel or user. Use case: Perfect for real-time alerts to your team in Slack. Customization: You can post breach details directly in a channel or DM. You can also format the message (bold, code blocks, etc.).
  - **Microsoft Teams Node** – What it does: Sends a message to a Teams channel. Use case: For organizations that use Microsoft Teams for communication. Customization: Similar to Slack, you can customize the message content and include all relevant breach information.
  - **Discord Node** – What it does: Sends an alert message to a Discord channel. Use case: Useful for teams or communities that coordinate via Discord. Customization: Add formatted messages with breach details for easy viewing.
  - **Telegram Node** – What it does: Sends messages to a Telegram chat or group. Use case: Good for mobile notifications and fast alerts. Customization: You can include breach summaries or detailed information, and even use bots to automate this.
  - **Webhook Node (as a sender)** – What it does: Sends breach data to another service via a webhook. Use case: If you have an external system or app that handles alerts, you can push the data directly to it. Customization: Send JSON payloads with detailed breach information to trigger actions in other systems.
  - **SMS Nodes (like Twilio)** – What it does: Sends an SMS notification to one or more phone numbers. Use case: For urgent alerts that need to be seen immediately. Customization: Keep messages concise, including key breach details like the time, type of breach, and affected system.
- **Adjust Check Frequency**: Change the interval in the Schedule Trigger node (e.g., hourly or daily).
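For readers who want the cache mechanism spelled out, here is a minimal sketch in plain TypeScript/Node, assuming HIBP's keyless latest-breach endpoint. The cache path mirrors the workflow's cache.json; treat the response field names as assumptions against HIBP's breach model.

```typescript
// Minimal sketch of the cache-based "new breach" check.
import { readFileSync, writeFileSync, existsSync } from "node:fs";

const CACHE_PATH = "cache.json";

const checkForNewBreach = async () => {
  const res = await fetch("https://haveibeenpwned.com/api/v3/latestbreach");
  const latest = await res.json(); // e.g. { Name, Title, BreachDate, ... }

  // Read from cache: the last breach we already alerted on, if any.
  const cached = existsSync(CACHE_PATH)
    ? JSON.parse(readFileSync(CACHE_PATH, "utf8"))
    : null;

  // Compare against current breach, then update the cache on change.
  if (cached?.Name !== latest.Name) {
    writeFileSync(CACHE_PATH, JSON.stringify(latest));
    return latest; // new breach: hand off to your alert node of choice
  }
  return null; // already notified, do nothing
};
```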
by Jimleuk
If you have a shared or personal drive location with a high frequency of files created by humans, it can become difficult to organise. This may not matter... until you need to search for something! This n8n workflow works with the local filesystem to target the messy folder and categorise as well as organise its files into subdirectories automatically.

## Disclaimer

Unfortunately, due to the intended use-case, this workflow will not work on n8n Cloud and a self-hosted version of n8n is required.

## How it works

- Uses the local file trigger to activate once a new file is introduced to the directory.
- The new file's filename and filetype are analysed using AI to determine the best location to move this file.
- The AI assesses the current subdirectories so as not to create duplicates. If a relevant subdirectory is not found, a new subdirectory is suggested.
- Finally, an Execute Command node uses the AI's suggestions to move the new file into the correct location.

## Requirements

- Self-hosted version of n8n. The nodes used in this workflow only work in the self-hosted version. If you are using Docker, you must create a bind mount to a host directory.
- Mistral.ai account for the LLM model.

## Customise this workflow

- If the frequency of file-created events is high enough, you may not want the trigger to activate on every new file. Switch to a timer to avoid concurrency issues.
- Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/1iqJ_zCGussXpfaUBYGrN5opziEFAEQMu/view?usp=sharing
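For illustration, the final move step is equivalent to the following plain Node.js sketch. In the workflow this is an Execute Command shell invocation, and the shape of the AI's suggestion (a subdirectory name) is an assumption.

```typescript
// Sketch of the "move file into AI-suggested subdirectory" step.
import { mkdirSync, renameSync } from "node:fs";
import { join } from "node:path";

const moveFile = (watchedDir: string, fileName: string, suggestedSubdir: string) => {
  const targetDir = join(watchedDir, suggestedSubdir);
  mkdirSync(targetDir, { recursive: true }); // create the subdirectory if the AI suggested a new one
  renameSync(join(watchedDir, fileName), join(targetDir, fileName));
};

// Example (hypothetical paths):
// moveFile("/data/inbox", "invoice-march.pdf", "Finance/Invoices");
```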
by Sarfaraz Muhammad Sajib
## 📧 Email Validation Workflow Using APILayer API

This n8n workflow enables users to validate email addresses in real time using the APILayer Email Verification API. It's particularly useful for preventing invalid email submissions during lead generation, user registration, or newsletter sign-ups, ultimately improving data quality and reducing bounce rates.

### ⚙️ Step-by-Step Setup Instructions

1. **Trigger the Workflow Manually:** The workflow starts with the Manual Trigger node, allowing you to test it on demand from the n8n editor.
2. **Set Required Fields:** The Set Email & Access Key node allows you to enter:
   - email: The target email address to validate.
   - access_key: Your personal API key from apilayer.net.
3. **Make the API Call:** The HTTP Request node dynamically constructs the URL https://apilayer.net/api/check?access_key={{ $json.access_key }}&email={{ $json.email }} and sends a GET request to the APILayer endpoint, which returns a detailed response about the email's validity (a standalone sketch of this call appears at the end of this listing).
4. **(Optional):** You can add additional nodes to filter, store, or react to the results depending on your needs.

### 🔧 How to Customize

- Replace the manual trigger with a webhook or schedule trigger to automate validations.
- Dynamically map the email and access_key values from previous nodes or external data sources.
- Add conditional logic to filter out invalid emails, log them into a database, or send alerts via Slack or Email.

### 💡 Use Case & Benefits

Email validation is crucial in maintaining a clean and functional mailing list. This workflow is especially valuable in:

- Sign-up forms where real-time email checks prevent fake or disposable emails.
- CRM systems to ensure user-entered emails are valid before saving them.
- Marketing pipelines to minimize email bounce rates and increase campaign deliverability.

Using APILayer's trusted validation service, you can verify whether an email exists, check if it's a role-based address (like info@ or support@), and identify disposable email services, all with a simple workflow.

Keywords: email validation, n8n workflow, APILayer API, verify email, real-time email check, clean email list, reduce bounce rate, data accuracy, API integration, no-code automation
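To show what you can branch on after the API call, here is the same check outside n8n. The response fields shown are typical of APILayer's email check service, but treat the exact field set as an assumption and confirm against your plan's documentation.

```typescript
// Sketch: validate one email via the APILayer check endpoint.
const checkEmail = async (email: string, accessKey: string) => {
  const url = `https://apilayer.net/api/check?access_key=${accessKey}&email=${encodeURIComponent(email)}`;
  const res = await fetch(url);
  const data = await res.json();
  // Assumed response shape, e.g.:
  // { format_valid: true, mx_found: true, smtp_check: true,
  //   role: false, disposable: false, score: 0.96, ... }
  return data.format_valid && data.mx_found && !data.disposable;
};
```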
by Lucas Peyrin
## How it works

This template is an interactive, hands-on tutorial designed to demystify what an API is and how it works, right inside your n8n canvas. It uses a simple restaurant analogy to explain the core concepts:

- **You** are the "Client" (an **HTTP Request** node).
- **The Kitchen** is the "Server" (a **Webhook** node).
- **The API** is the Menu and the Waiter—the set of rules for how you can ask for things and get a response.

The workflow is a series of self-contained lessons. Each lesson pairs an HTTP Request node (the customer placing an order) with a Webhook node (the kitchen receiving and responding to the order) to demonstrate a key concept:

1. **The Basics:** Making a simple GET request to a URL.
2. **Customizing:** Using Query Parameters to filter or modify your request.
3. **Sending Data:** Using the POST method and a Body to send information to the server.
4. **Identification:** Using Headers and simple Authentication to prove who you are.
5. **Handling Delays:** Understanding how Timeouts prevent your workflow from getting stuck.

## Set up steps

Setup time: < 1 minute

This workflow is a self-contained tutorial and requires no external services or credentials. You may want to check the Base URL.

1. Click "Execute Workflow" to run the entire tutorial.
2. Follow the flow from top to bottom, exploring each "Lesson".
3. For each lesson, click on the HTTP Request node and its corresponding Webhook node to see how they are configured and what they do.
4. Read the sticky notes next to each lesson—they contain the core explanations!

That's it! Explore and have fun learning the fundamentals of APIs in an interactive way.
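If you prefer to see the five lesson concepts as raw code, the sketch below condenses them into plain fetch calls. The base URL, paths, and payloads are placeholders, not the workflow's actual webhook routes.

```typescript
// The tutorial's five ideas as plain HTTP calls (illustrative only).
const BASE = "https://your-n8n-instance/webhook"; // placeholder: check the Base URL in the workflow

// Lessons 1–2: a GET request, then a GET with query parameters
await fetch(`${BASE}/menu`);
await fetch(`${BASE}/menu?dish=pizza&size=large`);

// Lessons 3–4: a POST request with a JSON body and identifying headers
await fetch(`${BASE}/order`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer my-secret-token", // simple authentication
  },
  body: JSON.stringify({ dish: "pizza", quantity: 2 }),
});

// Lesson 5: a timeout so a slow "kitchen" can't hang the client forever
await fetch(`${BASE}/slow-order`, { signal: AbortSignal.timeout(5_000) });
```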
by Davide
This workflow simulates an AI-powered phone agent with two main functions:

- 📅 **Appointment Booking** – It can schedule appointments directly into Google Calendar.
- 🧠 **RAG-based Information Retrieval** – It provides answers using a Retrieval-Augmented Generation (RAG) system. For example, it can respond to questions such as store opening hours, return policies, or product details.

The guide also explains how to purchase a dedicated phone number (with a +1 prefix) and link it to the AI agent. This setup is cost-effective, as it uses a FREE $10 credit to operate without additional charges in the beginning.

## ✨ Advantages

- 🕐 **24/7 Availability** – The AI agent can answer calls and assist customers at any time.
- 🤖 **Automation** – It reduces the workload on human staff by handling repetitive tasks like appointment scheduling and FAQ responses.
- 🔌 **Easy Integration** – Built with n8n, it's flexible and customizable for various platforms and tools.
- 💸 **Low-cost Setup** – Using the free credit, businesses can get started without an upfront investment.

## 📦 Use Cases

- 🛍 **E-commerce** – Answer common product questions or order inquiries.
- 🏬 **Retail Stores** – Provide store hours, address info, and return policies.
- 🍽 **Restaurants** – Take reservations or share menu information.
- 💼 **Service Providers** – Book appointments or consultations.
- 📞 **Any Local Business** – Offer phone support without needing a live operator.

## How It Works

This workflow simulates an AI-powered phone agent with two primary functions:

**Appointment Booking**
- The workflow captures call events (e.g., call_ended or call_analyzed) and extracts key details (transcript, caller info, duration, etc.); a payload-handling sketch appears at the end of this section.
- Using OpenAI, it summarizes the conversation and parses structured data (e.g., names, contact info, dates).
- For scheduling, it converts user-provided dates into Google Calendar-compatible formats and creates events automatically.

**RAG-Based Information Retrieval**
- When a query is received (e.g., store hours, product details), the workflow retrieves relevant information from a Qdrant vector store.
- An AI agent processes the query using the retrieved data and responds via a webhook, ensuring accurate, context-aware answers.

## Set Up Steps

1. **Prepare Qdrant Vector Store**
   - Create/refresh a Qdrant collection (via HTTP requests).
   - Upload and vectorize documents (e.g., from Google Drive) using OpenAI embeddings.
2. **Configure RetellAI Agent**
   - Sign up for RetellAI, create an agent, and set the webhook URLs (n8n_call for call events, n8n_rag_function for RAG queries).
   - Purchase a Twilio phone number and link it to the agent.
3. **n8n Workflow Setup**
   - Connect OpenAI, Qdrant, Google Calendar, and Telegram nodes with credentials.
   - Customize prompts for summarization, date parsing, and RAG responses.
   - Test the workflow to ensure data flows from call events → processing → actions (e.g., calendar bookings, Telegram alerts).
4. **Deploy**
   - Trigger the workflow via RetellAI webhooks during calls.
   - Monitor outputs (e.g., call summaries in Telegram, calendar events).

Note: Replace placeholders (e.g., QDRANTURL, COLLECTION, CHAT_ID) with actual values.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
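As a rough illustration of the call-event handling step, consider the sketch below. The payload shape (an event name plus a call object with transcript, caller number, and duration) is an assumption about RetellAI's webhook body; consult RetellAI's documentation for the real schema before relying on these field names.

```typescript
// Hypothetical shape of a RetellAI call-event payload (assumed, not verified).
type RetellEvent = {
  event: "call_started" | "call_ended" | "call_analyzed";
  call?: { transcript?: string; from_number?: string; duration_ms?: number };
};

// Sketch of the first processing step: keep only finished calls and
// pull out the details the rest of the workflow needs.
const handleCallEvent = (payload: RetellEvent) => {
  if (payload.event !== "call_ended" && payload.event !== "call_analyzed") return null;
  return {
    transcript: payload.call?.transcript ?? "",
    caller: payload.call?.from_number ?? "unknown",
    durationMs: payload.call?.duration_ms ?? 0,
    // downstream: summarize with OpenAI, parse dates,
    // create the Google Calendar event, send a Telegram summary
  };
};
```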
by Jimleuk
This n8n workflow demonstrates creating a recipe recommendation chatbot using the Qdrant vector store recommendation API. Use this example to build recommendation features in your AI Agents for your users.

## How it works

- For our recipes, we'll use HelloFresh's weekly courses and recipes as our data, scraping the website to collect it.
- Each recipe is split, vectorised and inserted into a Qdrant collection using Mistral embeddings. Additionally, the whole recipe is stored in a SQLite database for later retrieval.
- Our AI Agent is set up to recommend recipes from our Qdrant vector store. However, instead of the default similarity search, we'll use the Recommendation API instead.
- Qdrant's Recommendation API allows you to provide a negative prompt; in our case, the user can specify recipes or ingredients to avoid (a raw request sketch appears at the end of this section).
- The AI Agent is now able to suggest a recipe recommendation better suited to the user and increase customer satisfaction.

## Requirements

- Qdrant vector store instance to save the recipes
- Mistral.ai account for embeddings and LLM agent

## Customising the workflow

This workflow can work for a variety of different audiences. Try different sets of data such as clothes, sports shoes, vehicles or even holidays.
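For reference, a raw call to Qdrant's recommendation endpoint looks roughly like this. The collection name, host, and point IDs are placeholders; the endpoint and body fields follow Qdrant's REST API.

```typescript
// Sketch: ask Qdrant for points similar to liked recipes but unlike disliked ones.
const recommendRecipes = async (likedIds: number[], dislikedIds: number[]) => {
  const res = await fetch(
    "http://localhost:6333/collections/recipes/points/recommend",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        positive: likedIds,    // points similar to these are boosted
        negative: dislikedIds, // points similar to these are penalised
        limit: 5,
        with_payload: true,    // return the stored recipe metadata too
      }),
    }
  );
  const { result } = await res.json();
  return result; // top-5 recipe points best suited to the user's preferences
};
```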
by Keith Rumjahn
## Case Study

I'm too lazy to record every transaction for my expense tracking. Since all my expenses are digital, I just extract the transactions from bank PDF statements and screenshots into CSV to import into my budgeting software.

Read more -> How I used A.I. to track all my expenses

## What this workflow does

1. Upload your PDF or screenshots into Google Drive.
2. It then passes the PDF/image to Vertex Gemini to do some A.I. image recognition.
3. It then sends the transactions as CSV and stores them in another Google Drive folder.

## Setup

1. Set up 2 Google Drive folders: 1 for uploading and 1 for the output.
2. Input your Google Drive credentials.
3. Input your Vertex Gemini credentials.

## How to adjust it to your needs

- You can upload other types of documents for information extraction.
- You can extract any text data from any image or PDF.
- You can adjust the A.I. prompt to do different things.
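To make "adjust the A.I. prompt" concrete, here is an illustrative extraction prompt of the kind this workflow sends to Gemini. The actual prompt shipped with the template may differ; the column names and rules below are assumptions.

```typescript
// An example extraction prompt (illustrative, not the template's exact text).
const prompt = `
You are an expense extraction assistant. Read the attached bank statement
(PDF page or screenshot) and output every transaction as CSV with the header:
date,description,amount,currency
- Use ISO dates (YYYY-MM-DD).
- Use negative amounts for debits, positive for credits.
- Output the CSV only, with no commentary.
`;
```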
by Davide
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'test_chatbot_elevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies.
by Ventsislav Minev
## Google Drive Duplicate File Manager 🧹📁

**Purpose:** Automate the process of finding and managing duplicate files in your Google Drive.

### Who's it for?

- Individuals and teams aiming to streamline their Google Drive.
- Anyone tired of manual duplicate file cleanup.

### What it Solves

- Saves storage space 💾.
- Reduces file confusion 😕➡️🙂.
- Automates tedious cleanup tasks 🤖.

### How it works

1. **Trigger**: Monitors a Google Drive folder for new files.
2. **Configuration**: Sets rules for keeping and handling duplicates.
3. **Find Duplicates**: Identifies duplicate files based on their content (MD5 checksum); see the grouping sketch at the end of this section.
4. **Action**: Either moves duplicates to trash or renames them.

### Setup Guide

1. **Google Drive Trigger ⏰**
   - Set up the trigger to watch a specific folder or your entire drive (use caution with the root folder! ⚠️).
   - Configure the polling interval (default: every 15 minutes).
2. **Config Node ⚙️**
   - keep: Choose whether to keep the "first" or "last" uploaded file (default: "last").
   - action: Select "trash" to delete duplicates or "flag" to rename them with "DUPLICATE-" (default: "flag").
   - owner & folder: Taken from the trigger. Only change if needed.

### Key Considerations

- **Google Drive API limits:** Be mindful of API usage.
- **Folder scope:** The workflow handles one folder depth by default. (WARNING: if configured to work with the root folder /, all files in all subdirectories are processed, so USE THIS OPTION WITH CAUTION; the workflow might trash or rename important files.)
- **Google Apps:** Google Docs are ignored, since they are not actual binary files and their content can't be compared.

Enjoy your clean Google Drive! ✨
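To show the duplicate-detection idea concretely, here is a sketch that groups files by their md5Checksum (a real field on Google Drive API v3 file resources for binary files) and selects which copies to trash or flag. The simplified file type and helper name are illustrative.

```typescript
// Sketch: group Drive files by content checksum and pick the duplicates to handle.
type DriveFile = { id: string; name: string; md5Checksum?: string; createdTime: string };

const findDuplicates = (files: DriveFile[], keep: "first" | "last") => {
  const groups = new Map<string, DriveFile[]>();
  for (const f of files) {
    if (!f.md5Checksum) continue; // Google Docs have no checksum -> ignored
    groups.set(f.md5Checksum, [...(groups.get(f.md5Checksum) ?? []), f]);
  }

  const toHandle: DriveFile[] = [];
  for (const group of groups.values()) {
    if (group.length < 2) continue; // no duplicates in this group
    group.sort((a, b) => a.createdTime.localeCompare(b.createdTime)); // oldest first
    // Keep the first or last upload; the rest get trashed or flagged.
    toHandle.push(...(keep === "first" ? group.slice(1) : group.slice(0, -1)));
  }
  return toHandle;
};
```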