by Angel Menendez
## Who is this for?

This workflow is for professionals and teams who want to automate LinkedIn message replies with intelligent, human-like responses — without losing control over tone or accuracy. It is ideal for founders, sales teams, DevRel, and community managers handling high-volume inbound messages.

## What problem is this workflow solving?

Responding to every LinkedIn message manually is slow and inconsistent, and basic AI bots generate replies without context or nuance. This subworkflow solves both problems by using structured message routing from Notion and profile insights from UniPile to craft smart, context-aware responses.

## What this workflow does

This workflow takes the sender's message and profile (from LinkedIn Auto Message Router with Request Detection) and references your centralized Notion database of message types. It uses that to either match the message to a known response or generate a new one using OpenAI's GPT model — all while following professional tone guidelines.

This is the third workflow in a 3-part automation system:

- Receives data from LinkedIn Auto Message Router with Request Detection
- Uses the UniPile LinkedIn Profile Lookup Subworkflow to enrich responses based on follower count or org data

### Example Use Case

If a message comes from someone with low reach (e.g., under 1,000 followers), the AI politely deflects a meeting request. If an influencer reaches out, the AI immediately offers a booking link (see the sketch below). Your team controls this logic by updating the Notion database — no edits to the workflow required.

## Setup

1. Connect this workflow as a subworkflow in your router or Slack approval flow.
2. Store your Notion API key and database ID in n8n.
3. Provide the following parent inputs:
   - `message` – the LinkedIn message text
   - `sender` – name of the sender
   - `chatid` – session ID (optional, for memory)
   - `linkedinprofile` – enriched array with LinkedIn context (follower count, connection info, etc.)
4. Add your preferred AI model credentials (supports OpenAI, Gemini, or Ollama).
5. Optional: customize the system prompt to better match your brand voice.

## How to customize this workflow to your needs

- Update the Notion schema to include industry-specific categories or actions.
- Change the AI tone (e.g., humorous, more corporate).
- Add conditional logic for auto-sending messages without Slack approval.
- Extend to support multiple platforms (e.g., email, X/Twitter, Instagram DMs).
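The reach-based gate from the example use case could look like this in an n8n Code node. This is a minimal sketch, assuming the `linkedinprofile` enrichment exposes a `followerCount` field and using the 1,000-follower threshold from the example:

```typescript
// Minimal sketch of the reach-based routing logic (field names assumed).
interface LinkedInProfile {
  followerCount: number; // assumed to come from the UniPile enrichment
}

function chooseResponseStrategy(profile: LinkedInProfile): string {
  const FOLLOWER_THRESHOLD = 1000; // from the example use case; control this via Notion

  if (profile.followerCount < FOLLOWER_THRESHOLD) {
    // Low-reach sender: politely deflect the meeting request.
    return "deflect_meeting";
  }
  // High-reach sender: offer the booking link immediately.
  return "offer_booking_link";
}

// Example usage:
console.log(chooseResponseStrategy({ followerCount: 250 }));   // "deflect_meeting"
console.log(chooseResponseStrategy({ followerCount: 15000 })); // "offer_booking_link"
```

In the actual template this decision lives in the Notion database rather than in code, which is what lets non-technical teammates change the rules.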
by Zain Ali
# 🧠 Email Real-Time RAG Assistant with Gmail, OpenAI & PGVector

## 📌 Who's it for

This workflow is ideal for:

- Professionals
- Project managers
- Sales and support teams
- Anyone managing high volumes of Gmail messages

It enables fast and intelligent search through your email inbox using natural language queries.

## ⚙️ How it works / What it does

- Continuously monitors your Gmail inbox for new emails.
- Extracts email content and metadata (subject, body, sender, date).
- Converts email content into vector embeddings using OpenAI.
- Stores embeddings in a PostgreSQL database with PGVector.
- A conversational AI agent performs semantic search on your stored email history.
- Supports time-sensitive and context-aware responses via the OpenAI Chat model.

## 🚀 How to set up

1. Connect your Gmail account to the Gmail Trigger node (with API access enabled).
2. Configure OpenAI credentials for the Embedding and Chat nodes.
3. Set up a PostgreSQL database with the PGVector extension enabled.
4. Import the workflow into your n8n instance (Cloud or self-hosted).
5. Customize parameters like polling frequency, embedding settings, or vector query depth.

## 📋 Requirements

- ✅ n8n instance (self-hosted or Cloud)
- ✅ Gmail account with API access
- ✅ OpenAI API key
- ✅ PostgreSQL database with the PGVector extension installed

## 🛠️ How to customize the workflow

- **Email Filtering**: Change filters in the Gmail Trigger to watch specific labels or senders.
- **Text Splitting Granularity**: Adjust `chunkSize` and `chunkOverlap` in the text splitter node.
- **Query Depth**: Modify `topK` in the vector search node to retrieve more or fewer similar results (see the query sketch below).
- **Prompt Tuning**: Customize the system message or agent instructions in the RAG node.
- **Workflow Extensions**: Add notifications, error logging, Slack/Telegram alerts, or data exports.
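Under the hood, the `topK` semantic search amounts to a distance-ordered query against the PGVector table. A minimal sketch using the `pg` client — the table name `n8n_vectors` and column names are assumptions; match them to whatever the PGVector vector-store node actually created:

```typescript
import { Client } from "pg";

// Assumed schema: an `embedding vector(1536)` column plus a `text` column
// holding the email chunk that was embedded.
async function searchEmails(queryEmbedding: number[], topK = 4): Promise<string[]> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // `<=>` is pgvector's cosine-distance operator; smaller = more similar.
    const { rows } = await client.query(
      `SELECT text
         FROM n8n_vectors
        ORDER BY embedding <=> $1::vector
        LIMIT $2`,
      [JSON.stringify(queryEmbedding), topK],
    );
    return rows.map((r) => r.text);
  } finally {
    await client.end();
  }
}
```

Raising `topK` gives the agent more context at the cost of a longer prompt; the workflow's vector search node exposes the same trade-off.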
by Airtop
# Recursive Web Scraping

## Use Case

Automating web scraping with recursive depth is ideal for collecting content across multiple linked pages—perfect for content aggregation, lead generation, or research projects.

## What This Automation Does

This automation reads a list of URLs from a Google Sheet, scrapes each page, stores the content in a document, and adds newly discovered links back to the sheet. It continues this process for a specified number of iterations based on the defined scraping depth.

Input parameters:

- **Seed URL**: The starting URL to begin the scraping process. Example: `https://example.com/`
- **Links must contain**: Restricts followed links to those that contain this string. Example: `https://example.com/`
- **Depth**: The number of iterations (layers of links) to scrape beyond the initial set. Example: `3`

## How It Works

1. Starts by reading the Seed URL from the Google Sheet.
2. Scrapes each page and saves its content to the specified document.
3. Extracts new links from each page that match the "Links must contain" string and appends them to the Google Sheet (see the filtering sketch below).
4. Repeats steps 2–3 the number of times specified by Depth − 1.

## Setup Requirements

- Airtop API Key — free to generate.
- Credentials set up for Google Docs (requires creating a project in the Google Cloud Console). Read how to.
- Credentials set up for Google Sheets.

## Next Steps

- **Add Filtering Rules**: Filter which links to follow based on domain, path, or content type.
- **Combine with Scheduler**: Run this automation on a schedule to continuously explore newly discovered pages.
- **Export Structured Data**: Extend the process to store extracted data in a CSV or database for analysis.

Read more about website scraping for LLMs.
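Step 3's link filtering and de-duplication boils down to logic like the following. This is an illustrative sketch — the actual link extraction is done by the Airtop node, and the function and parameter names here are hypothetical:

```typescript
// Keep only links that match the "Links must contain" filter and that
// aren't already queued in the sheet, so each page is scraped once.
function newLinksToQueue(
  extractedLinks: string[],
  mustContain: string,
  alreadyQueued: Set<string>,
): string[] {
  return [...new Set(extractedLinks)] // drop duplicates within this page
    .filter((url) => url.includes(mustContain))
    .filter((url) => !alreadyQueued.has(url));
}

// Example: only matching, unseen links are appended for the next iteration.
const queued = new Set(["https://example.com/"]);
console.log(
  newLinksToQueue(
    ["https://example.com/about", "https://other.site/x", "https://example.com/"],
    "https://example.com/",
    queued,
  ),
); // ["https://example.com/about"]
```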
by David Roberts
# AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

## How it works

This template shows how to calculate a workflow evaluation metric: whether a generated category matches the expected one. The workflow takes support tickets and generates a category and priority, which are then compared with the correct answers in the dataset.

1. We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one. More info
2. Once the category is generated by the agent, we check whether it matches the expected one in the dataset (see the sketch below).
3. Finally, we pass this information back to n8n as a metric.
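The match check itself is small. In an n8n Code node it might look like this minimal sketch — the item field names are assumptions, so adapt them to your dataset columns:

```typescript
// Compare the agent's category against the expected one from the dataset
// and emit a 0/1 score that n8n records as the evaluation metric.
interface EvalItem {
  generatedCategory: string; // produced by the agent (assumed field name)
  expectedCategory: string;  // ground truth from the evaluation dataset
}

function categoryMatchMetric(item: EvalItem): { categoryMatch: number } {
  const match =
    item.generatedCategory.trim().toLowerCase() ===
    item.expectedCategory.trim().toLowerCase();
  return { categoryMatch: match ? 1 : 0 };
}

console.log(categoryMatchMetric({ generatedCategory: "Billing", expectedCategory: "billing" }));
// { categoryMatch: 1 }
```

Averaged over the dataset, this 0/1 score becomes the workflow's category accuracy.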
by InfraNodus
# Set Up an ElevenLabs Voice Chat Agent Using Graph RAG Knowledge Graphs as Experts

This workflow creates an AI voice chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided through InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows. We use ElevenLabs to set up a voice agent that can be embedded into any website or used via their API.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed) and to update with new knowledge
- A knowledge graph has a holistic overview of your knowledge base
- Better retrieval of relations between the document chunks = higher-quality responses
- Ability to reuse in other n8n workflows

## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. The user's prompt is received from the ElevenLabs conversational AI agent via an n8n Webhook, which also takes care of the voice interaction. The response from n8n is then sent back to the webhook, which is polled by the ElevenLabs voice agent. This agent processes the response and provides the final answer.

Step by step:

1. The user submits a question using the ElevenLabs voice interface.
2. The question is sent via the `knowledge_base` tool in ElevenLabs to the n8n Webhook, with a POST request containing the user's prompt and a session ID for the Chat Memory node in n8n (see the payload sketch below).
3. The n8n AI Agent node checks the list of tools it has access to. Each tool has a description of the knowledge auto-generated by InfraNodus (we call each tool an "expert").
4. The n8n AI agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
5. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert.
6. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
7. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
8. The final answer is sent back to the webhook endpoint.
9. The ElevenLabs conversational AI agent picks up the response arriving from the `knowledge_base` tool via the webhook, condenses it into conversational format, and transforms the text into voice.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body `name` field. Keep the other settings intact, or learn more about them on the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.
6. You will also need an ElevenLabs account with a conversational AI agent set up there.
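For reference, the POST body that the ElevenLabs `knowledge_base` tool sends to the n8n Webhook might look like the sketch below. The URL and field names are assumptions — match them to your webhook path and tool configuration in ElevenLabs:

```typescript
// Hypothetical payload the ElevenLabs tool posts to the n8n webhook.
const payload = {
  prompt: "What does the onboarding guide say about API limits?", // user's question
  sessionId: "elevenlabs-conv-42", // keys the Chat Memory node to this conversation
};

const response = await fetch("https://your-n8n-instance/webhook/knowledge-base", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});

// ElevenLabs polls for this response, condenses it, and voices the answer.
const { answer } = await response.json();
console.log(answer);
```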
See the Post note in the n8n workflow for a complete step-by-step description, or our support article on setting up an ElevenLabs AI voice agent.

Once the voice AI agent is ready, you might want to combine it with a text AI chatbot workflow so your users have a choice between text and voice interaction. In that case, you may be interested in our free open-source website popup chat widget, popupchat.dev, where you can create an embed code to add to your blog or website and let the user choose between text and voice interaction.

## Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- An ElevenLabs account

## FAQ

### 1. How many "experts" should I aim for?

We recommend aiming for the optimal number of people in a team, which is usually 2–7. If you add more experts, your orchestrating AI agent will have trouble choosing the most suitable "expert" tool for the user's query. You can mitigate this by specifying in the AI agent description that it can choose a maximum of 3–7 experts to provide a response.

### 2. Why use InfraNodus GraphRAG and not a standard vector store for knowledge?

First, vector stores are complex to set up and to update. You'd need a separate workflow for that, would have to decide on the vector dimensions, add metadata to your knowledge, etc. With InfraNodus, you have a complete RAG / GraphRAG solution under the hood that is easy to set up and provides high-quality responses that take the overall structure and the relations between your ideas into account.

### 3. Why not use ElevenLabs' own knowledge base?

One reason is that you want your knowledge base to be in one place so you can reuse it in other n8n workflows. Another is that you will not have as clean a separation between the "experts" when you converse with the agent: the answers you get will be based on top matches from all the books / articles you upload, while with the InfraNodus GraphRAG setup you can better control which graphs are consulted as experts and have an explicit way to display this data.

## Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. Many more customizations are available in our GitHub repo for n8n workflows.

Check out the complete setup guide for this workflow at https://support.noduslabs.com/hc/en-us/articles/20318967066396-How-to-Build-a-Text-Voice-AI-Agent-Chatbot-with-n8n-Elevenlabs-and-InfraNodus

Also check out the video tutorial with a demo.
by Nick Saraev
# AI Upwork Application Agent with OpenAI & Google Docs

**Categories**: AI Agents, Freelance Automation, Proposal Generation

This workflow creates an intelligent AI agent that automates Upwork job applications by generating highly personalized proposals, professional Google Doc presentations, and visual workflow diagrams. Built by someone who earned over $500,000 on Upwork, this system demonstrates the exact templates and strategies that achieve superior response rates through perceived customization and value demonstration.

## Benefits

- **Complete Application Automation** - Transform job descriptions into custom proposals, documents, and diagrams in minutes
- **Proven Templates** - Based on $500K+ in Upwork earnings, using exact strategies for high-converting applications
- **Intelligent Personalization** - AI analyzes job requirements and customizes responses with relevant social proof
- **Professional Asset Generation** - Creates Google Doc proposals and Mermaid workflow diagrams for enhanced perceived value
- **Modular Architecture** - Three specialized sub-workflows handle different aspects of proposal generation
- **High Response Rates** - Focuses on perceived customization and value demonstration over generic applications

## How It Works

**AI Agent Orchestration:**

- Receives Upwork job descriptions through a chat interface
- Maintains conversation context with window buffer memory
- Coordinates three specialized sub-workflows for comprehensive proposal generation
- Automatically integrates generated assets into cohesive application packages

**Application Copy Generation:**

- Uses proven templates based on $500K+ of Upwork success
- Follows the structure: "Hi, I do [thing] all the time. So confident I created a demo: [link]"
- Incorporates personal social proof and achievements automatically
- Generates concise, spartan-toned applications that avoid generic AI language

**Google Doc Proposal Creation:**

- Copies a professional proposal template from Google Drive
- Generates structured content including system title, explanation, scope, and timeline
- Uses find-and-replace to populate the template with AI-generated, personalized content (see the API sketch below)
- Creates shareable documents with proper permissions for immediate client access

**Mermaid Diagram Visualization:**

- Analyzes job requirements to create relevant workflow diagrams
- Generates Mermaid.js code for professional flowchart visualization
- Provides a visual representation of proposed solutions
- Enhances perceived value through custom diagram creation

**Smart Template Integration:**

- Automatically replaces placeholder text with generated Google Doc links
- Maintains consistent messaging across all generated assets
- Ensures cohesive presentation of the application, proposal, and supporting materials

## Required Setup Configuration

**Personal Information Setup:**

Update the "aboutMe" variable in both Set Variable nodes with your credentials:

- Professional background and specializations
- Notable client achievements with specific revenue numbers
- Social proof elements (community size, subscriber count, etc.)
- Relevant project examples with quantified results

**Google Services Integration:**

- **Google Drive API Setup**:
  - Enable the Google Drive API in the Google Cloud Console
  - Create OAuth2 credentials (Client ID and Client Secret)
  - Connect n8n to Google Drive with proper permissions
- **Google Docs Template**:
  - Copy the provided Google Docs proposal template to your Drive
  - Update the template ID in the Google Drive node
  - Customize the template with your branding and standard language
- **Google Docs API**:
  - Ensure the Google Docs API is enabled in your Google Cloud project
  - Test document creation and sharing permissions

**OpenAI API Configuration:**

- Set up OpenAI API credentials across all OpenAI nodes
- Configure appropriate models (GPT-4o-mini recommended for speed)
- Set temperature to 0.7 for an optimal personalization balance
- Monitor API usage to control costs

**Template Customization:**

- **Application Template**: Modify the proposal structure in the OpenAI prompts to match your services
- **Google Doc Template**: Update the document template with your standard proposal format
- **Personal Details**: Replace all placeholder information with your actual achievements and social proof

## Business Use Cases

- **Freelance Professionals** - Automate high-quality Upwork applications across multiple job categories
- **Automation Specialists** - Demonstrate capabilities through the automated proposal generation itself
- **Service Providers** - Scale application volume while maintaining personalization quality
- **Agency Owners** - Offer proposal automation services to freelance clients
- **Consultants** - Streamline business development with automated custom proposals
- **Content Creators** - Generate professional project proposals with visual workflow representations

## Revenue Potential

This system transforms freelance business development:

- **10x Application Speed**: Generate comprehensive proposals in minutes vs. hours
- **Higher Response Rates**: Perceived customization and value demonstration increase client engagement
- **Scalable Outreach**: Apply to more jobs with maintained quality through automation
- **Professional Positioning**: Visual diagrams and structured proposals demonstrate expertise
- **Competitive Advantage**: Deliver proposals faster than competitors through intelligent automation

**Difficulty Level**: Advanced
**Estimated Build Time**: 3–4 hours
**Monthly Operating Cost**: ~$30 (OpenAI + Google APIs)

## Watch My Complete Live Build

Want to see me build this entire system from scratch? I walk through every component live, including the AI agent setup, prompt engineering strategies, Google Docs integration, and all the debugging that goes into creating a production-ready freelance automation system.

🎥 **See My Live Build Process**: "I Built An AI Agent That Automates Upwork ($500K+ Earned)"

This comprehensive tutorial shows the real development process, including advanced prompt engineering, modular workflow design, and the exact business strategies that generated $500K+ in Upwork revenue.
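The find-and-replace population step maps onto the Google Docs API's `documents.batchUpdate` method with `replaceAllText` requests. A minimal sketch follows — the placeholder markers such as `{{system_title}}` are assumptions; use whatever markers your copied template contains:

```typescript
// Populate a copied proposal template by replacing placeholder markers.
async function fillProposalTemplate(
  documentId: string,
  accessToken: string,
  values: Record<string, string>, // e.g. { "{{system_title}}": "Lead-Gen Automation" }
): Promise<void> {
  const requests = Object.entries(values).map(([marker, replacement]) => ({
    replaceAllText: {
      containsText: { text: marker, matchCase: true },
      replaceText: replacement,
    },
  }));

  await fetch(`https://docs.googleapis.com/v1/documents/${documentId}:batchUpdate`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ requests }),
  });
}
```

In the workflow itself the Google Docs node performs these replacements; the sketch shows why a template full of unique markers makes that step reliable.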
## Set Up Steps

1. **AI Agent Foundation**:
   - Configure the chat trigger and AI Agent node with OpenAI integration
   - Set up window buffer memory for conversation context
   - Define a system message with clear agent instructions and behavior rules
2. **Sub-Workflow Creation**:
   - Build three specialized workflows: Application Copy, Google Doc Proposal, and Mermaid Code
   - Configure Execute Workflow triggers for each sub-workflow
   - Set up proper data passing between the agent and sub-workflows
3. **Google Services Configuration**:
   - Create a Google Cloud Console project with the Drive and Docs APIs enabled
   - Set up OAuth2 credentials and connect them to n8n
   - Copy and customize the proposal template document
4. **Personalization Setup**:
   - Update all "aboutMe" variables with your specific achievements and social proof
   - Customize prompt templates to match your service offerings and communication style
   - Test individual sub-workflows with sample job descriptions
5. **Agent Tool Integration**:
   - Connect the sub-workflows as tools in the main AI agent
   - Configure proper tool descriptions and response property names
   - Test complete agent functionality with realistic job posting scenarios
6. **Template Optimization**:
   - Refine proposal templates based on your specific service offerings
   - Adjust AI prompts for optimal personalization and response quality
   - Test with various job types to ensure consistently high-quality output

## Advanced Optimizations

Scale the system with:

- **Job Scraping Integration**: Automatically discover and apply to relevant Upwork jobs
- **Response Tracking**: Monitor application success rates and optimize templates
- **Multi-Platform Support**: Extend to other freelance platforms (Fiverr, Freelancer, etc.)
- **Client Communication**: Automate follow-up sequences for proposal responses
- **Portfolio Integration**: Automatically include relevant portfolio pieces based on job requirements

## Important Considerations

- **Template Authenticity**: Customize templates significantly to avoid detection as automated
- **Upwork Compliance**: Ensure applications meet platform guidelines and quality standards
- **Personal Branding**: Maintain a consistent voice and positioning across all generated content
- **Response Management**: Be prepared to handle increased application volume and client responses
- **Quality Control**: Regularly review and refine generated content for accuracy and relevance

## Why This System Works

The competitive advantage lies in proven strategies:

- **Perceived Customization**: AI generates content that appears manually crafted for each job
- **Value Demonstration**: Visual diagrams and structured proposals show immediate value
- **Speed Advantage**: Deliver comprehensive proposals before competitors finish reading job posts
- **Professional Presentation**: Consistent quality and formatting across all applications
- **Scalable Personalization**: Maintain individual attention at volume through intelligent automation

## Check Out My Channel

For more advanced automation systems and proven freelance business strategies that generate real revenue, explore my YouTube channel, where I share the exact methodologies used to build successful automation agencies and scale to $72K+ monthly revenue.
by Yang
## Who is this for?

This workflow is perfect for eCommerce teams, market researchers, and product analysts who want to track or extract product information from websites that restrict scraping tools. It's also useful for virtual assistants handling product comparison tasks.

## What problem is this workflow solving?

Many eCommerce and retail sites use dynamic content or anti-bot protections that make traditional scraping methods unreliable. This workflow bypasses those issues by taking a screenshot of the full page, using OCR to extract the visible text, and summarizing product information with GPT-4o — all fully automated.

## What this workflow does

This workflow monitors a Google Sheet for new URLs. Once a new link is added, it performs the following steps:

1. **Trigger on New URL in Sheet** – Watches for new rows added to a Google Sheet.
2. **Screenshot URL via Dumpling AI** – Sends the URL to Dumpling AI's screenshot endpoint to capture a full-page image of the product webpage (see the request sketch below).
3. **Save Screenshot to Drive Folder** – Uploads the screenshot to a specific Google Drive folder for reference or logging.
4. **Extract Text from Screenshot with Dumpling AI** – Uses Dumpling AI's image-to-text endpoint to pull all visible content from the screenshot.
5. **Extract Product Info from Screenshot Text with GPT-4o** – Sends the extracted raw text to GPT-4o, prompting it to identify structured product information such as product name, price, ratings, deals, and purchase options.
6. **Split Each Product Entry** – Splits the GPT response (an array of product objects) so each product becomes an individual item for saving.
7. **Save Products Info to Google Sheet** – Appends each product's structured details to a separate sheet in the same spreadsheet.

## Setup

**Google Sheets**

- Create a Google Sheet with at least two sheets:
  - Sheet1 should contain a header row with a column labeled `URL`.
  - Sheet2 should contain the headers: `Product Name`, `price`, `purchased`, `ratings`, `deal`, `buyingOptions`.
- Connect your Google account in both the trigger and the final write-back node.

**Dumpling AI**

- Sign up at Dumpling AI.
- Create an API key and use it for both HTTP modules: Screenshot URL via Dumpling AI and Extract Text from Screenshot with Dumpling AI.
- The screenshot endpoint used is `https://app.dumplingai.com/api/v1/screenshot`.

**Google Drive**

- Create a folder for storing screenshots.
- In the Save Screenshot to Drive Folder node, select the correct folder or provide the folder ID.
- Make sure permissions allow uploading from n8n.

**OpenAI**

- Provide an API key for GPT-4o in the Extract Product Info from Screenshot Text with GPT-4o node.
- The prompt is structured to return product listings in JSON format.

**Split & Save**

- Split Each Product Entry takes the array of product objects from GPT and makes each one a separate execution.
- Save Products Info to Google Sheet writes the structured fields into Sheet2 under: `Product Name`, `price`, `purchased`, `ratings`, `deal`, `buyingOptions`.

## How to customize this workflow

- Adjust the GPT prompt to return different product fields (e.g., shipping info, product categories).
- Use a Filter node to limit which types of products get written to the final sheet.
- Add sentiment analysis to analyze review content, if available.
- Replace Google Drive with Dropbox or another file storage app.

## Notes

- Monitor your API usage on both Dumpling AI and OpenAI to avoid rate limits.
- This setup is great for snapshot-based extraction where scraping is blocked or unreliable.
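The screenshot request in step 2 is a plain authenticated POST. A hypothetical sketch follows — the endpoint URL comes from the setup notes above, but the body fields and response shape are assumptions, so verify them against Dumpling AI's API docs:

```typescript
// Hypothetical call to Dumpling AI's screenshot endpoint (body fields assumed).
async function screenshotPage(url: string, apiKey: string): Promise<unknown> {
  const res = await fetch("https://app.dumplingai.com/api/v1/screenshot", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url }), // assumed: the product page to capture
  });
  if (!res.ok) throw new Error(`Screenshot failed: ${res.status}`);
  return res.json(); // assumed to include the captured image or a link to it
}
```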
by assert
## Who this template is for

This template is for every engineer who wants to automate their code reviews or just get a second opinion on their merge requests.

## How it works

This workflow automatically reviews the changes in a GitLab merge request using the power of AI. It triggers whenever you comment "+0" on a merge request, fetches the code changes, analyzes them with GPT, and replies to the MR discussion (the filter sketch below shows the trigger check).

## Set up steps

1. Set up a webhook for note_events in your GitLab repository (see here for how to do it).
2. Configure ChatGPT credentials.
3. Comment "+0" on a merge request to trigger an automatic review by ChatGPT.
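Inside the workflow, the note_events payload can be filtered so only "+0" comments on merge requests start a review. A minimal sketch of that check, using a subset of GitLab's documented note-event fields:

```typescript
// GitLab note_events payload (subset of the documented fields).
interface GitLabNoteEvent {
  object_kind: string; // "note" for comment events
  object_attributes: {
    note: string;          // the comment body
    noteable_type: string; // "MergeRequest" when the comment is on an MR
  };
}

// Only "+0" comments on merge requests should trigger the AI review.
function shouldTriggerReview(event: GitLabNoteEvent): boolean {
  return (
    event.object_kind === "note" &&
    event.object_attributes.noteable_type === "MergeRequest" &&
    event.object_attributes.note.trim() === "+0"
  );
}
```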
by Yaron Been
# 🧨 VIP Radar: Instantly Spot & Summarize High-Value Shopify Orders with AI + Slack Alerts

Automatically detect when a new Shopify order exceeds $200, fetch the customer's purchase history, generate an AI-powered summary, and alert your team in Slack—so no VIP goes unnoticed.

## 🛠️ Workflow Overview

| Feature | Description |
|---------|-------------|
| Trigger | Shopify "New Order" webhook |
| Conditional Check | Filters for orders > $200 |
| Data Enrichment | Pulls the customer's full order history from Shopify |
| AI Summary | Uses OpenAI to summarize buying behavior |
| Notification | Sends a detailed alert to Slack with name, order total, and customer insights |
| Fallback | Ignores low-value orders and ends the flow |

## 📘 What This Workflow Does

This automation monitors your Shopify store and reacts to any high-value order (over $200). When triggered, it:

- fetches all past orders of that customer,
- summarizes the history using OpenAI,
- sends a full alert with context to your Slack channel.

No more guessing who's worth a closer look. Your team gets instant insights, and your VIPs get the attention they deserve.

## 🧩 Node-by-Node Breakdown

### 🔔 1. Trigger: New Shopify Order

- **Type**: Shopify Trigger
- **Event**: `orders/create`
- **Purpose**: Starts the workflow on a new order
- **Pulls**: Order total, customer ID, name, etc.

### 🔣 2. Set: Convert Order Total to Number

Ensures `total_price` is treated as a number for comparison.

### ❓ 3. If: Is Order > $200?

- **Condition**: `$json.total_price > 200`
- **Yes** → continue
- **No** → end workflow

### 🔗 4. HTTP: Fetch Customer Order History

Uses the Shopify Admin API to retrieve all orders from this customer (see the request sketch below). Requires your Shopify access token.

### 🧾 5. Set: Convert Orders Array to String

Formats the order data so it's prompt-friendly for OpenAI.

### 🧠 6. LangChain Agent: Summarize Order History

- **Prompt**: "Summarize the customer's order history for Slack. Here is their order data: {{ $json.orders }}"
- **Model**: GPT-4o mini (customizable)

### 📨 7. Slack: Send VIP Alert

Sends a rich message to a Slack channel, including:

- Customer name
- Order value
- Summary of past behavior

### 🧱 8. No-Op (Optional)

Safely ends the workflow if the order is not high-value.

## 🔧 How to Customize

| What | How |
|------|-----|
| Order threshold | Change `200` in the If node |
| Slack channel | Update `channelId` in the Slack node |
| AI prompt style | Edit `text` in the LangChain Agent node |
| Shopify auth token | Replace `shpat_abc123xyz...` with your actual private token |

## 🚀 Setup Instructions

1. Open the n8n editor.
2. Go to Workflows → Import → Paste JSON.
3. Paste this workflow JSON.
4. Replace your Shopify token and Slack credentials.
5. Save and activate.
6. Place a test order in Shopify to watch it work.

## 💡 Real-World Use Cases

- 🎯 Notify the sales team when a potential VIP buys
- 🛎️ Prep support reps with customer history
- 📈 Detect repeat buyers and upsell opportunities

## 🔗 Resources & Support

- 👨‍💻 Creator: Yaron Been
- 📺 YouTube: NoFluff with Yaron Been
- 🌐 Website: https://nofluff.online
- 📩 Contact: Yaron@nofluff.online

## 🏷️ Tags

#shopify, #openai, #slack, #vip-customers, #automation, #n8n, #workflow, #ecommerce, #customer-insights, #ai-summaries, #gpt4o
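Node 4's history lookup uses the Shopify Admin REST API's per-customer orders endpoint. A minimal sketch — the store domain and API version are placeholders, and `status=any` is included so closed orders are counted too:

```typescript
// Fetch a customer's full order history from the Shopify Admin API.
async function fetchOrderHistory(
  customerId: number,
  accessToken: string, // your private app token, e.g. "shpat_..."
): Promise<unknown[]> {
  const shop = "your-store.myshopify.com"; // placeholder store domain
  const url =
    `https://${shop}/admin/api/2024-01/customers/${customerId}/orders.json?status=any`;

  const res = await fetch(url, {
    headers: { "X-Shopify-Access-Token": accessToken },
  });
  if (!res.ok) throw new Error(`Shopify API error: ${res.status}`);

  const { orders } = await res.json();
  return orders; // stringified by the Set node, then summarized by OpenAI
}
```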
by phil
This workflow automates voice reminders for upcoming appointments by generating a professional audio message and sending it to clients via email with the voice file attached. It integrates Google Calendar to track appointments, ElevenLabs to generate high-quality voice messages, and Gmail to deliver them efficiently.

## Who Needs Automated Voice Appointment Reminders?

This automated voice appointment reminder system is ideal for businesses that rely on scheduled appointments. It helps reduce no-shows, improve client engagement, and streamline communication.

- **Medical Offices & Clinics** – Ensure patients receive timely appointment reminders.
- **Real Estate Agencies** – Keep potential buyers and renters informed about property visits.
- **Service-Based Businesses** – Perfect for salons, consultants, therapists, and coaches.
- **Legal & Financial Services** – Help clients remember important meetings and consultations.

If your business depends on scheduled appointments, this workflow saves time and enhances client satisfaction.

## 🚀 Why Use This Workflow?

- Ensures clients receive timely reminders.
- Reduces appointment no-shows and scheduling issues.
- Automates the process with a personalized voice message.

## Step-by-Step: How This Workflow Automates Voice Reminders

1. **Trigger the Workflow** – The system runs manually or on a schedule to check upcoming appointments in Google Calendar.
2. **Retrieve Appointment Data** – It fetches event details (client name, time, and location) from Google Calendar. The workflow uses the `summary`, `start.dateTime`, `location`, and `attendees[0].email` fields to personalize and send the voice reminders (see the sketch below).
3. **Generate a Voice Reminder** – Using ElevenLabs, the workflow converts the appointment details into a natural-sounding voice message.
4. **Send via Email** – The generated audio file is attached to an email and sent to the client as a reminder.

## Customization: Tailor the Workflow to Your Business Needs

- **Adjust Trigger Frequency** – Modify the scheduling to run daily, hourly, or at specific intervals.
- **Customize Voice Message Format** – Change the script structure and voice tone to match your business needs.
- **Change Notification Method** – Instead of email, integrate SMS or WhatsApp for delivery.

## 🔑 Prerequisites

- **Google Calendar Access** – Ensure you have access to the calendar with scheduled appointments.
- **ElevenLabs API Key** – Required for generating voice messages (you can start for free).
- **Gmail API Access** – Needed for sending reminder emails.
- **n8n Setup** – The workflow runs on an n8n instance (self-hosted or cloud).

## 🚀 Step-by-Step Installation & Setup

1. **Set Up the Google Calendar API**
   - Go to the Google Cloud Console.
   - Create a new project and enable the Google Calendar API.
   - Generate OAuth 2.0 credentials and save them for n8n.
2. **Get an ElevenLabs API Key**
   - Sign up at ElevenLabs.
   - Retrieve your API key from the dashboard.
3. **Configure the Gmail API**
   - Enable the Gmail API in the Google Cloud Console.
   - Create OAuth credentials and authorize your email address for sending.
4. **Deploy n8n & Install the Workflow**
   - Install n8n (Installation Guide).
   - Add the required Google Calendar, ElevenLabs, and Gmail nodes.
   - Import or build the workflow with the correct credentials.
   - Test and fine-tune as needed.

⚠ **Important**: The LangChain Community node used in this workflow only works on self-hosted n8n instances. It is not compatible with n8n Cloud. Please ensure you are running a self-hosted instance before using this workflow.

## Summary

This workflow ensures a professional and seamless experience for your clients, keeping them informed and engaged. 🚀🔊

Phil | Inforeole
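Step 2's personalization boils down to templating the spoken script from those four calendar fields. A minimal sketch — the wording of the script is an assumption, so adapt it to your brand voice:

```typescript
// Build the text that ElevenLabs will turn into speech, using the
// Google Calendar fields the workflow reads.
interface CalendarEvent {
  summary: string;                      // event title, e.g. "Dental check-up"
  start: { dateTime: string };          // ISO timestamp
  location?: string;
  attendees: Array<{ email: string }>;  // attendees[0].email is the recipient
}

function buildReminderScript(event: CalendarEvent): { to: string; script: string } {
  const when = new Date(event.start.dateTime).toLocaleString("en-US", {
    weekday: "long", hour: "numeric", minute: "2-digit",
  });
  const where = event.location ? ` at ${event.location}` : "";
  return {
    to: event.attendees[0].email,
    script: `Hello! This is a friendly reminder of your appointment "${event.summary}" on ${when}${where}. We look forward to seeing you.`,
  };
}
```

The resulting `script` string is what gets sent to the ElevenLabs text-to-speech step, and `to` addresses the Gmail node.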
by Hendriekus
# Find OAuth URIs with AI (Llama)

## Overview

The AI agent identifies:

- Authorization URI
- Token URI
- Audience

**Methodology**: Confidence scoring is used to assess the trustworthiness of the extracted data:

- Score range: 0 < x ≤ 1
- Score granularity: 0.01 increments

**Model details**: Leverages the Wayfarer Large 70B Llama 3.3 model.

## How it works

This template is designed to help users obtain OAuth2 settings using AI-powered insights. It is ideal for developers, IT professionals, or anyone working with APIs that require OAuth2 authentication. By leveraging the AI agent, users can simplify the process of extracting and validating key details such as the `authorization_url`, `token_url`, and `audience`.

## Set up instructions

### 1. Configuration nodes

- **Structured Output node**: Parses the AI model's output using a predefined JSON schema. This ensures the data is structured for downstream processing.
- **Code node**: If the AI model's output does not match the required format, use the Code node to re-arrange and transform the data. An example code snippet for a common scenario is provided below.

### 2. AI model prompt

The prompt for the AI model includes:

- A detailed structure and the objectives of the query.
- Flexibility for the model to improvise when accurate results cannot be determined.

### 3. Confidence scoring

The AI model assigns a confidence score (0 < x ≤ 1) to indicate the reliability of the extracted data. Scores are provided in increments of 0.01 for granularity.

## Adaptability

Customize this template:

- Update the AI model prompt with details specific to your API or OAuth2 setup.
- Adjust the JSON schema in the Structured Output node to match the data format.
- Modify the Code node logic to suit your application's requirements.
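For example, if the model returns the three URIs under different key names or nested one level deep, a Code-node transform like the following can normalize the output to the expected schema. This is a hypothetical sketch; the alternate key names are assumptions, so adjust them to what your model actually emits:

```typescript
// Normalize a loosely structured model response into the schema the
// Structured Output node expects.
interface OAuthSettings {
  authorization_url: string;
  token_url: string;
  audience: string;
  confidence: number; // 0 < x <= 1, in 0.01 increments
}

function normalize(raw: Record<string, any>): OAuthSettings {
  const src = raw.result ?? raw; // some models nest the payload (assumed)
  return {
    authorization_url: src.authorization_url ?? src.authorizationUri ?? "",
    token_url: src.token_url ?? src.tokenUri ?? "",
    audience: src.audience ?? "",
    // Clamp the score into (0, 1] and round it to 0.01 granularity.
    confidence: Math.min(1, Math.max(0.01, Math.round((src.confidence ?? 0.5) * 100) / 100)),
  };
}
```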
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

## How It Works

The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from the Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events (see the request sketch below). It then analyzes this data to create timeline reports showing what happened to VMs during the specified period, defaulting to the last 90 days unless the user specifies otherwise.

## Prerequisites

To use this template, you'll need:

- An n8n instance (cloud or self-hosted)
- An Azure subscription with virtual machines
- Microsoft Azure Monitor OAuth2 API credentials
- Google Gemini API credentials
- Proper Azure permissions to read VM data and activity logs

## Setup Instructions

1. Import the template into n8n.
2. Configure credentials:
   - Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs.
   - Add Google Gemini API credentials.
3. Update workflow parameters:
   - Open the "Set Common Variables" node.
   - Replace `<your azure subscription id here>` with your actual Azure subscription ID.
4. Configure triggers:
   - The chat trigger automatically generates a webhook URL for receiving chat messages.
   - No additional trigger configuration is needed.
5. Test the setup to ensure it works.

## Security Considerations

Use the minimum required Azure permissions (Reader role on the subscription or resource groups). Store API credentials securely in the n8n credential store. The Azure Monitor API has rate limits, so avoid excessive concurrent requests. Chat sessions use session-based memory that persists during a conversation but doesn't retain data between separate chat sessions.

## Extending the Template

You can add more Azure monitoring tools, such as disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
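As an illustration, the metrics tool boils down to a call like the one below against the Azure Monitor REST API. This is a sketch, not the workflow's exact HTTP node configuration — the resource identifiers are placeholders:

```typescript
// Query CPU metrics for one VM via the Azure Monitor REST API.
async function getVmCpuMetrics(
  subscriptionId: string,
  resourceGroup: string,
  vmName: string,
  bearerToken: string, // obtained via the Azure Monitor OAuth2 credential
): Promise<unknown> {
  const resourceId =
    `/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup}` +
    `/providers/Microsoft.Compute/virtualMachines/${vmName}`;

  const params = new URLSearchParams({
    "api-version": "2018-01-01",
    metricnames: "Percentage CPU",
    interval: "PT1H", // hourly aggregation buckets
    aggregation: "Average",
  });

  const res = await fetch(
    `https://management.azure.com${resourceId}/providers/Microsoft.Insights/metrics?${params}`,
    { headers: { Authorization: `Bearer ${bearerToken}` } },
  );
  if (!res.ok) throw new Error(`Azure Monitor error: ${res.status}`);
  return res.json();
}
```

The agent's other tools follow the same pattern against different Azure endpoints (resource groups, instance views, activity logs), which is why the Reader role is sufficient.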