by Davide
This workflow allows users to generate AI videos using Google Veo3, save them to Google Drive, generate optimized YouTube titles with GPT-4o, and automatically upload them to YouTube with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3, and YouTube.

Benefits of this Workflow

💡 **No Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
⚙️ **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
🧠 **AI-Powered Creativity**: Generates engaging YouTube titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
📁 **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
📈 **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
🧪 **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
🔒 **API-First**: Uses secure, API-based communication for all services.

How It Works

Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty).
Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet. The "Create Video" node sends a request to the Fal.run API (Google Veo3) to generate a video based on the prompt.
Status Check: The "Wait 60 sec." node pauses execution for 60 seconds, then the "Get status" node checks the video generation status. If the status is "COMPLETED", the workflow proceeds; otherwise, it waits again.
Video Processing: The "Get Url Video" node fetches the video URL. The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube title. The "Get File Video" node downloads the video file.
Upload & Update: The "Upload Video" node saves the video to Google Drive. The "HTTP Request" node uploads the video to YouTube via the Upload-Post API. The "Update Youtube URL" and "Update result" nodes write the video URL and YouTube link back to the Google Sheet.

Set Up Steps

Google Sheet Setup: Create a Google Sheet with the columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node.
API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (Header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key (for YouTube uploads) and configure the "HTTP Request" node (Header: Authorization: Apikey YOUR_API_KEY).
YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
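For orientation, the "Create Video" → "Wait 60 sec." → "Get status" steps follow a standard submit-then-poll pattern against Fal.run. Below is a minimal TypeScript sketch of that pattern; the queue endpoint paths, model slug (fal-ai/veo3), and response field names are assumptions inferred from the node descriptions above, so verify them against the Fal.run documentation before use.

```typescript
// Sketch of the submit-then-poll pattern behind "Create Video" / "Get status".
// Endpoint paths, model slug and response fields are assumptions; check the Fal.run docs.
const FAL_KEY = process.env.FAL_KEY!; // same key as the "Authorization: Key ..." header

async function generateVideo(prompt: string, durationSeconds: number): Promise<string> {
  // "Create Video": submit the generation request
  const submit = await fetch("https://queue.fal.run/fal-ai/veo3", {
    method: "POST",
    headers: { Authorization: `Key ${FAL_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, duration: durationSeconds }),
  });
  const { request_id } = await submit.json();

  // "Wait 60 sec." + "Get status": poll until the job reports COMPLETED
  let state = "";
  while (state !== "COMPLETED") {
    await new Promise((r) => setTimeout(r, 60_000));
    const status = await fetch(
      `https://queue.fal.run/fal-ai/veo3/requests/${request_id}/status`,
      { headers: { Authorization: `Key ${FAL_KEY}` } },
    );
    state = (await status.json()).status;
  }

  // "Get Url Video": fetch the result payload containing the video URL
  const result = await fetch(`https://queue.fal.run/fal-ai/veo3/requests/${request_id}`, {
    headers: { Authorization: `Key ${FAL_KEY}` },
  });
  return (await result.json()).video.url; // assumed response shape
}
```

In the workflow these steps are separate n8n nodes; the loop above corresponds to the Wait → Get status cycle.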
by Amjid Ali
Proxmox AI Agent with n8n and Generative AI Integration

This template automates IT operations on a Proxmox Virtual Environment (VE) using an AI-powered conversational agent built with n8n. By integrating Proxmox APIs and generative AI models (e.g., Google Gemini), the workflow converts natural language commands into API calls, enabling seamless management of your Proxmox nodes, VMs, and clusters.

Buy My Book: Mastering n8n on Amazon
Full Courses & Tutorials: http://lms.syncbricks.com
Watch Video on YouTube

How It Works

Trigger Mechanism: The workflow can be triggered through multiple channels, such as chat (Telegram, email, or n8n's built-in chat). Interact with the AI agent conversationally.
AI-Powered Parsing: A connected AI model (Google Gemini or other compatible models like OpenAI or Claude) processes your natural language input to determine the required Proxmox API operation.
API Call Generation: The AI parses the input and generates structured JSON output, which includes:
response_type: the HTTP method (GET, POST, PUT, DELETE)
url: the Proxmox API endpoint to execute
details: any required payload parameters for the API call
Proxmox API Execution: The structured output is used to make HTTP requests to the Proxmox VE API. The workflow supports various operations, such as retrieving cluster or node information; creating, deleting, starting, or stopping VMs; migrating VMs between nodes; and updating or resizing VM configurations.
Response Formatting: The workflow formats API responses into a user-friendly summary, for example success messages for operations (e.g., "VM started successfully") or error messages with missing parameter details.
Extensibility: You can enhance the workflow by connecting additional triggers, external services, or AI models. It supports Telegram/Slack integration for real-time notifications, backup and restore workflows, and cloud monitoring extensions.

Key Features

**Multi-Channel Input**: Use chat, email, or custom triggers to communicate with the AI agent.
**Low-Code Automation**: Easily customize the workflow to suit your Proxmox environment.
**Generative AI Integration**: Supports advanced AI models for precise command interpretation.
**Proxmox API Compatibility**: Fully adheres to Proxmox API specifications for secure and reliable operations.
**Error Handling**: Detects and informs you of missing or invalid parameters in your requests.

Example Use Cases

Create a Virtual Machine
Input: "Create a VM with 4 cores, 8GB RAM, and 50GB disk on psb1."
Action: Sends a POST request to Proxmox to create the VM with the specified configuration.

Start a VM
Input: "Start VM 105 on node psb2."
Action: Executes a POST request to start the specified VM.

Retrieve Node Details
Input: "Show the memory usage of psb3."
Action: Sends a GET request and returns the node's resource utilization.

Migrate a VM
Input: "Migrate VM 202 from psb1 to psb3."
Action: Executes a POST request to move the VM, with optional online migration.

Pre-Requisites

Proxmox API Configuration: Enable the Proxmox API and generate API keys in the Proxmox Data Center. Use the Authorization header with the format: PVEAPIToken=<user>@<realm>!<token-id>=<token-value>
n8n Setup: Add Proxmox API credentials in n8n using Header Auth, and connect a generative AI model (e.g., Google Gemini) via the relevant credential type.
Access the Workflow: Import this template into your n8n instance and replace the placeholder credentials with your Proxmox and AI service details.
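To make the response_type/url/details contract concrete, here is a minimal sketch of executing one of the agent's structured outputs against the Proxmox API, using the Authorization header format above. The host, token, and node/VM identifiers are placeholders, and error handling is omitted:

```typescript
// Sketch: executing one of the agent's structured outputs against Proxmox.
// The JSON shape follows the response_type/url/details contract above;
// host, token, node name and VM id are placeholders.
const action = {
  response_type: "POST",
  url: "/api2/json/nodes/psb2/qemu/105/status/start", // "Start VM 105 on node psb2"
  details: {} as Record<string, unknown>,
};

const PROXMOX_HOST = "https://proxmox.example.com:8006"; // placeholder VE host
const TOKEN = "user@pam!n8n=xxxxxxxx"; // <user>@<realm>!<token-id>=<token-value>

const res = await fetch(`${PROXMOX_HOST}${action.url}`, {
  method: action.response_type,
  headers: {
    Authorization: `PVEAPIToken=${TOKEN}`, // header format from the pre-requisites
    "Content-Type": "application/json",
  },
  body: action.response_type === "GET" ? undefined : JSON.stringify(action.details),
});
console.log(res.status, await res.json());
```

The /api2/json prefix is Proxmox's standard REST entry point; the agent only has to produce the method, endpoint, and payload.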
Additional Notes

This template is designed for Proxmox 7.x and above. For advanced features like backup, VM snapshots, and detailed node monitoring, you can extend this workflow. Always test with a non-production Proxmox environment before deploying to live systems.

Resources: Start with n8n | Learn n8n with Amjid | Get n8n Book | What is Proxmox
by Hybroht
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

AI Arena - Debate of AI Agents to Optimize Answers and Simulate Diverse Scenarios

Overview

Version: 1.0

The AI Arena Workflow is designed to facilitate a refined answer generation process by enabling a structured debate among multiple AI agents. This workflow allows diverse perspectives to be considered before arriving at a final output, enhancing the quality and depth of the generated responses.

✨ Features

**Multi-Agent Debate Simulation**: Engage multiple AI agents in a debate to generate nuanced responses.
**Configurable Rounds and Agents**: Easily adjust the number of debate rounds and participating agents to fit your needs.
**Contextualized AI Responses**: Each agent operates based on predefined roles and characteristics, ensuring relevant and focused discussions.
**JSON Output**: The final output is structured in JSON format, making it easy to integrate with other systems or workflows.

👤 Who is this for?

This workflow is ideal for developers, data scientists, content creators, and businesses looking to leverage AI for decision-making, content generation, or any scenario requiring diverse viewpoints. It is particularly useful for those who need to synthesize information from multiple personalities or perspectives.

💡 What problem does this solve?

The workflow addresses the challenge of generating nuanced responses by simulating a debate among AI agents. This approach ensures that multiple perspectives are considered, reducing bias and enhancing the overall quality of the output.

Use-case examples:
🗓️ Meeting/Interview Simulation
✔️ Quality Assurance
📖 Storywriter Test Environment
🏛️ Forum/Conference/Symposium Simulation

🔍 What this workflow does

The workflow orchestrates a debate among AI agents, allowing them to discuss, critique, and suggest rewrites for a given input based on their roles and predefined characteristics. This collaborative process leads to a more refined and comprehensive final output.

🔄 Workflow Steps

Input & Setup: The initial input is provided, and the AI environment is configured with the necessary parameters.
Round Execution: AI agents execute their roles, providing replies and actions based on the input and their individual characteristics.
Round Results: The results of each round are aggregated, and a summary is created to capture the key points discussed by the agents.
Continue to Next Round: If more rounds are defined, the process repeats until the specified number of rounds is completed.
Final Output: The final output is generated based on the agents' discussions and suggestions, providing a cohesive response.

⚡ How to Use/Setup

🔐 Credentials

Obtain an API key for the Mistral API or another LLM API. This key is necessary for the AI agents to function properly.

🔧 Configuration

Set up the workflow in n8n, ensuring all nodes are correctly configured according to the workflow requirements. This includes setting the appropriate input parameters and defining the roles of each AI agent. This workflow uses a custom node for Global Variables called 'n8n-nodes-globals.' Alternatively, you can use the 'Edit Field (Set)' node to achieve the same functionality.

✏️ Customizing this workflow

To customize the workflow, adjust the AI agent parameters in the JSON configuration. This includes defining their roles, personalities, and preferences, which will influence how they interact during the debate.
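For illustration, a configuration of agents and environment might look like the sketch below. The key names here are assumptions chosen for readability only; the ready-to-use example mentioned next is the authoritative schema for this workflow.

```typescript
// Illustrative agents-and-environment configuration for the Global Variables node.
// Key names are assumptions; the ready-to-use example in the workflow's note
// is the authoritative schema.
const arenaConfig = {
  environment: { rounds: 3, topic: "Proposed product launch plan" },
  agents: [
    { name: "Optimist", role: "Argues for the input as written", personality: "enthusiastic, big-picture" },
    { name: "Skeptic", role: "Stress-tests claims and finds gaps", personality: "critical, detail-oriented" },
    { name: "Moderator", role: "Summarizes each round and drafts the rewrite", personality: "neutral, concise" },
  ],
};
console.log(JSON.stringify(arenaConfig, null, 2));
```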
One of the notes includes a ready-to-use example of how to customize the agents and the environment. You can simply edit it and insert it as your credential in the Global Variables node.

📌 Example

An example with both input and final output is provided in a note within the workflow.

🛠️ Tools Used

n8n: A workflow automation tool that allows users to connect various applications and services.
Mistral API: A powerful language model API used for generating AI responses. (You can replace it with any LLM API of your choice.)
Podman: A container management tool that allows users to create, manage, and run containers without requiring a daemon. (It serves as an alternative to Docker for container orchestration.)

⚙️ n8n Setup Used

**n8n Version**: 1.100.1
**n8n-nodes-globals**: 1.1.0
**Running n8n via**: Podman 4.3.1
**Operating System**: Linux

⚠️ Notes, Assumptions & Warnings

Ensure that the AI agents are configured with clear roles to maximize the effectiveness of the debate. Each agent's characteristics should align with the overall goals of the workflow.
The workflow can be adapted for various use cases, including meeting simulations, content generation, and brainstorming sessions.
This workflow assumes that users have a basic understanding of n8n and JSON configuration.
This workflow assumes that users have access to the necessary API keys and permissions to utilize the Mistral API or other LLM APIs.
Ensure that the input provided to the AI agents is clear and concise to avoid confusion in the debate process. Ambiguous inputs may lead to unclear or irrelevant outputs.
Monitor the output for relevance and accuracy, as AI-generated content may require human oversight to ensure it meets standards and expectations before being used in production.

ℹ️ About Us

This workflow was developed by the Hybroht team of AI enthusiasts and developers dedicated to enhancing the capabilities of AI through collaborative processes. Our goal is to create tools that harness the possibilities of AI technology and more.
by InfraNodus
Automated Gmail Labeling and Brainstorming

This template can be used to automatically label your incoming Gmail messages with AI and to build a knowledge graph from the emails tagged with a specific label to brainstorm new ideas based on them. You can also get notified about the emails with the most important labels via Telegram, as well as receive new ideas as you build a knowledge graph of incoming messages.

The idea generation is based on the InfraNodus knowledge graph content gap detection algorithm, which builds a network from your content, finds a blind spot, and uses AI to generate an interesting research question or idea that can bridge this gap.

Why does it work so well? Think of all the business emails you receive that bypass the spam filters. They are probably already personalized to you. Now imagine building a knowledge graph from them over a month: you will then have an ideation device based on your interests and marketing profile. If you identify the gaps inside and generate interesting research questions based on them, you will come up with ideas that are relevant (because they touch on the topics that matter to you) but novel, because they bridge those topics in new ways.

What is it useful for?

**Automate Gmail incoming message labeling** with the new Classifier n8n node, which is much more advanced than the default Gmail labeling rules.
Get notified via Telegram (or a messenger of your choice) about the most important messages and be sure not to miss anything important.
Keep the messages with a certain label saved into a knowledge graph for brainstorming and ideation. Every time a new message of this category comes in, it's added to the graph, changing its structure, and a new idea is generated. So instead of looking at each specific offer, you use them to generate insights for you.

How it works

Step 1: This template is triggered automatically when a new Gmail message arrives. Note: you need to connect your Gmail account in this node.
Step 2: We use the new n8n AI Classifier node to classify your email based on its content. You might need to update to n8n version 1.94 to make it work. Note: we like to use Gemini AI for this classifier as it's from the same company as Gmail, so it should be safe with your data.
Step 3: After classifying the message, we label it with the appropriate label. Note: you need to create the labels in your Gmail account beforehand.
Step 4: For a certain category (e.g., "Business"), you format the message and save it into your InfraNodus graph. *Note: specify your InfraNodus API key here and choose the name of the graph. It will use the InfraNodus HTTP graphAndEntries endpoint and save your data to an InfraNodus graph (see the request sketch below). By default, we save the text knowledge graph using the contextSettings parameters (it will only build a text graph of the content), but you can take an alternative setting from this InfraNodus HTTP node's settings and create a social knowledge graph that will also show email senders in the graph itself.*
Step 5 (optional): Generate an interesting insight question with the graphAndAdvice endpoint of InfraNodus.
Step 6 (optional): Send this insight via Telegram to a chat.
Step 7 (optional): Link some important labels to the second Telegram notification node, so you receive important messages for specified labels.
Step 8 (optional): Send a Telegram notification. We use Telegram because it takes only 30 seconds to set up a bot with an API (send /newbot to @botfather), unlike Discord or Slack, which are long and cumbersome to set up. You can also attach a Gmail send node and generate an email instead.

How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.
Create an InfraNodus account or log in. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes. Add this Authorization code in Steps 4 and 5 of the workflow.
Come up with the name of the graph and change it in the HTTP InfraNodus nodes in Steps 4 and 5, and also in the Telegram node in Step 6 that sends a link to the graph.
For additional text processing / idea generation settings you can use in the HTTP InfraNodus nodes, see the InfraNodus access points page. For example, in Step 4 you can change the text processing settings to build a social knowledge graph (settings are available in the node's Notes section), and in Step 5 you can change the requestMode from question to idea to receive business ideas instead.
Authorize your Gmail account for the Gmail nodes in Steps 2, 3, 7 and 8. The easiest way to set this up is to open a free Google Console API account and create an OAuth access point for n8n. You can then reuse it with other Google services like Google Sheets, Drive, etc., so it's a useful thing to have in general.
Set up the Gemini AI API key using the instructions in the Step 2 Gemini AI classification node.
Set up the Telegram bot for Step 8. It takes only 30 seconds: just go to @botfather and type /newbot and you'll have an API key ready. To get the conversation ID, follow the n8n / Telegram instructions in the node itself.
Once everything is ready, run the default automated workflow to test that everything works well.

Requirements

An InfraNodus account and API key
A Google Cloud API OAuth client and key for Gmail access
A Gemini AI API key
A Telegram bot API key
n8n version 1.94 or higher (for the Text Classification AI node to work)

Customizing this workflow

Check our other n8n workflows at https://n8n.io/creators/infranodus/ for useful content gap analysis, expert panel, marketing, and research workflows that utilize GraphRAG for better AI generation. Finally, check out https://infranodus.com to learn more about our network analysis technology used to build knowledge graphs from text.

For support, please contact https://support.noduslabs.com
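As referenced in Step 4, the save-to-graph call is a single HTTP POST. The sketch below is an assumption-heavy illustration: the base URL, path, and payload keys are inferred from the endpoint names above, so copy the exact URL and settings from the workflow's HTTP node rather than from this snippet.

```typescript
// Sketch of the Step 4 call that saves an email into an InfraNodus graph.
// Base URL, path, and payload keys are assumptions inferred from the endpoint
// names above; copy the exact settings from the workflow's HTTP node.
const INFRANODUS_KEY = process.env.INFRANODUS_KEY!; // Bearer key from infranodus.com/api-access

await fetch("https://infranodus.com/api/v1/graphAndEntries", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${INFRANODUS_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "gmail-business", // your graph name (used in Steps 4 and 5)
    text: "formatted email body goes here", // the message saved into the graph
    contextSettings: {}, // text-graph settings; see the node's Notes section
  }),
});
```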
by HoangSP
SEO Blog Generator with GPT-4o, Perplexity, and Telegram Integration

This workflow helps you automatically generate SEO-optimized blog posts using Perplexity.ai, OpenAI GPT-4o, and optionally Telegram for interaction.

🚀 Features

🧠 Topic research via a Perplexity sub-workflow
✍️ AI-written blog post generated with GPT-4o
📊 Structured output with metadata: title, slug, meta description
📩 Integration with Telegram to trigger workflows or receive outputs (optional)

⚙️ Requirements

✅ OpenAI API Key (GPT-4o or GPT-3.5)
✅ Perplexity API Key (with access to /chat/completions)
✅ (Optional) Telegram Bot Token and webhook setup

🛠 Setup Instructions

Credentials: Add your OpenAI credentials (openAiApi). Add your Perplexity credentials under httpHeaderAuth. Optional: set up Telegram credentials under telegramApi.
Inputs: Use the Form Trigger or Telegram input node to send a Research Query.
Subworkflow: Make sure to import and activate the subworkflow Perplexity_Searcher to fetch recent search results.
Customization: Edit the prompt texts inside the Blog Content Generator and Metadata Generator to change the writing style or target industry. Add or remove output nodes like Google Sheets, Notion, etc.

📦 Output Format

The final blog post includes:
✅ Blog content (1500-2000 words)
✅ Metadata: title, slug, and meta description
✅ Extracted summary in JSON
✅ Delivered to Telegram (if connected)

Need help? Reach out on the n8n community forum.
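For reference, the Perplexity_Searcher sub-workflow wraps a call like the following. The /chat/completions path comes from the requirements above; the model name is an assumption, so pick one available on your Perplexity plan:

```typescript
// Sketch of the Perplexity call wrapped by the Perplexity_Searcher sub-workflow.
// The /chat/completions path is from the requirements; model name is an assumption.
const res = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "sonar", // assumption: pick a model available on your plan
    messages: [{ role: "user", content: "Recent developments in <your research query>" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content); // research context passed on to GPT-4o
```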
by Jesse Davids
SSL Expiry Alert System

Who is this for?

This workflow is ideal for administrators or IT professionals responsible for monitoring the SSL certificates of multiple websites to ensure they do not expire unexpectedly.

Problem

SSL certificates play a crucial role in ensuring secure communication over the internet. However, if not monitored closely, they can expire, leading to potential security risks and service disruption. This workflow helps you proactively monitor SSL certificate expiry dates.

Functionality

Pulls URLs to monitor from a Google Sheet.
Checks SSL certificates using SSL-Checker.io.
Updates the Google Sheet with SSL details such as expiry date and certificate status.
Sends email alerts for SSL certificates nearing expiry (<30 days) or invalid certificates.

Setup

Clone the provided Google Sheet and update the Google Sheet URL in the "URLs to Monitor" node.
Set up Google Sheets and Gmail credentials in n8n.
Configure the "Weekly Trigger" node for weekly monitoring.
Customize the email/Telegram/ntfy alert settings as needed.

Customization

Modify the frequency of monitoring by adjusting the trigger interval in the "Weekly Trigger" node.
Customize the email content and recipients in the "Send Alert Email" node.
Extend functionality by adding additional checks or actions based on the SSL certificate status.

Note

Ensure proper authentication and authorization for accessing the Google Sheets, SSL-Checker.io, and Gmail accounts within the workflow.
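The alert condition itself is simple date arithmetic. Here is a minimal sketch, assuming a simplified shape for the SSL-Checker.io response (the real field names may differ):

```typescript
// Sketch of the alert condition: flag certificates that are invalid or expire
// within 30 days. The SSL-Checker.io response field names are assumptions.
interface SslCheck {
  valid: boolean; // assumed field: certificate validity
  expiryDateIso: string; // assumed field: expiry date as an ISO string
}

function needsAlert(check: SslCheck, now: Date = new Date()): boolean {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysLeft = (new Date(check.expiryDateIso).getTime() - now.getTime()) / msPerDay;
  return !check.valid || daysLeft < 30; // 30-day threshold from the description
}

console.log(needsAlert({ valid: true, expiryDateIso: "2030-01-01" })); // false until ~2029
```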
by Jimleuk
This n8n template shows you how to create an MCP server out of your existing n8n workflows. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools.

Designing agent tools for outcome rather than utility has long been a recommended practice of mine, and it applies well to building MCP servers; in gist, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers! This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage and run them indirectly.

How it works

An MCP trigger is used and attaches 4 custom workflow tools to discover and manage existing workflows, and 1 custom workflow tool to execute them.
We introduce the idea of "available" workflows, which the agent is allowed to use. Limiting the set this way avoids issues that arise when every workflow is exposed, such as tool clashes or non-production workflows being run.
The n8n node is a core node which taps into your n8n instance API and is able to retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp" and these are exposed through the tool "search workflows".
Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add workflow", "remove workflow" and "list workflows". The agent should be able to manage this autonomously.
Our approach to letting the agent execute workflows is to use the Subworkflow trigger. The tricky part is figuring out the input schema for each; this was eventually solved by pulling the schema out of the workflow's template JSON and adding it to the "available" workflow's description (see the sketch below). To pass parameters through the Subworkflow trigger, we use the passthrough method: incoming data is used when parameters are not explicitly set within the node.
When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows are preferred when answering queries or completing tasks.

How to use

First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can use all workflows or filter in other ways.
Next, ensure these workflows have Subworkflow triggers with an input schema set. This is how the MCP server will run them.
Set the MCP server to "active", which turns on production mode and makes it available at the production URL.
Use this production URL in your MCP client. For Claude Desktop, see the instructions here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop.
There is a small learning curve which will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, i.e. research and report, rather than just a collection of unrelated tools.

Requirements

n8n API key to filter for selected workflows.
n8n workflows with Subworkflow triggers!
Redis for memory and tracking the "available" workflows.
MCP client or agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow

If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the executeTool to use HTTP requests for webhooks.
Managing available workflows helps if you have many workflows where some may be too similar for the agent. If this isn't a problem for you however, feel free to remove the concept of "available" and let the agent discover and use all workflows!
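For the schema-extraction trick described above, a sketch of pulling the Subworkflow trigger's inputs out of a workflow's JSON is shown below. The node type string matches recent n8n versions, but treat the parameter keys as assumptions and inspect your own workflow export:

```typescript
// Sketch: pull the Subworkflow trigger's input schema out of a workflow's JSON
// so it can be appended to the "available" workflow's description.
// The node type string matches recent n8n versions; parameter keys are
// assumptions, so inspect your own workflow export.
interface N8nNode { type: string; parameters: Record<string, any> }
interface N8nWorkflow { name: string; nodes: N8nNode[] }

function describeInputs(wf: N8nWorkflow): string {
  const trigger = wf.nodes.find((n) => n.type === "n8n-nodes-base.executeWorkflowTrigger");
  if (!trigger) return `${wf.name}: no Subworkflow trigger found`;
  const fields: any[] = trigger.parameters.workflowInputs?.values ?? []; // assumed key
  const schema = fields.map((f) => `${f.name}: ${f.type ?? "string"}`).join(", ");
  return `${wf.name} - inputs: {${schema}}`;
}
```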
by Juan Carlos Cavero Gracia
Description

This automation template is designed for content creators, social media managers, and influencers who want to streamline their video publishing workflow. It automatically detects new videos uploaded to a specific Google Drive folder, generates AI-powered descriptions based on the video's audio content, and simultaneously publishes them across Instagram, TikTok, and YouTube while tracking everything in Airtable.

*Note: This workflow uses the upload-post.com API (free trial, no credit card required) for multi-platform video distribution and requires API tokens for each service. The AI-generated descriptions are created using OpenAI's transcription and chat models to analyze the video's audio content.*

Who Is This For?

**Content Creators & Influencers:** Automatically publish your videos across all major social platforms without manual work.
**Social Media Managers:** Maintain consistent posting schedules across multiple platforms with AI-generated, platform-optimized descriptions.
**Marketing Teams:** Scale video content distribution with automated workflows that include tracking and status monitoring.
**Video Producers:** Focus on creating content while the system handles the tedious task of multi-platform publishing and description generation.

What Problem Does This Workflow Solve?

Publishing the same video content across Instagram, TikTok, and YouTube is time-consuming and repetitive. You need to manually upload each video, write unique descriptions, and track publication status. This workflow addresses these challenges by:

**Automated Video Distribution:** Detects new videos in Google Drive and automatically uploads them to all three platforms simultaneously.
**AI-Powered Content Generation:** Uses OpenAI to transcribe the video audio and generate engaging, platform-appropriate descriptions automatically.
**Centralized Tracking:** Maintains detailed records in Airtable, including upload status, URLs, and metadata for each platform.
**Error Monitoring:** Provides real-time error notifications via Telegram to ensure you're always aware of any issues.

How It Works

Video Upload Detection: The workflow monitors a specific Google Drive folder for new video uploads using automated triggers.
Content Analysis: Downloads the video, extracts audio, and uses OpenAI to transcribe it and generate compelling descriptions.
Airtable Integration: Creates and updates records to track video metadata, descriptions, and publication status.
Multi-Platform Publishing: Simultaneously uploads the video to Instagram, TikTok, and YouTube using the upload-post.com API (see the request sketch below).
Status Tracking: Updates Airtable records with the publication status and platform-specific URLs for each successful upload.
Setup

Google Drive Configuration: Set up the Google Drive trigger to monitor your specific folder, and configure OAuth2 credentials for Google Drive access.
OpenAI Integration: Add your OpenAI API key to enable audio transcription and description generation.
Airtable Setup: Create an Airtable base with fields for Video Name, Description, Platform Status, URLs, and Upload Date. Add your Airtable API token and configure the base/table IDs in the "Set Variables" node.
Upload-Post.com Account: Create an account at upload-post.com to get your API token. Configure the token in the HTTP request nodes for each platform and set your user ID in the variables section.
Platform Accounts: Ensure your Instagram, TikTok, and YouTube accounts are connected to upload-post.com.
Error Notifications: (Optional) Configure Telegram bot credentials for error notifications.

Requirements

**Accounts:** Google Drive, OpenAI, Airtable, upload-post.com, Telegram (optional)
**API Keys & Credentials:** Google Drive OAuth2, OpenAI API Key, Airtable API Token, upload-post.com API Token
**Platform Setup:** Instagram, TikTok, and YouTube accounts connected to upload-post.com

Transform your video publishing workflow from hours of manual work to a fully automated system that handles everything from content analysis to multi-platform distribution and tracking.
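For reference, each platform publish in this workflow boils down to one upload-post.com request. A minimal sketch follows; the endpoint path and field names are assumptions, so mirror the workflow's HTTP Request nodes and the upload-post.com docs for the exact values:

```typescript
// Sketch of a single upload-post.com publish call. Endpoint path and field
// names are assumptions; mirror the workflow's HTTP Request nodes and the
// upload-post.com docs for exact values.
const form = new FormData();
form.append("user", "YOUR_USERNAME"); // your upload-post profile name
form.append("platform[]", "tiktok"); // repeated once per target platform
form.append("title", "AI-generated description ..."); // from the OpenAI step
form.append("video", new Blob([]), "video.mp4"); // video bytes from Google Drive

const res = await fetch("https://api.upload-post.com/api/upload", {
  method: "POST",
  headers: { Authorization: "Apikey YOUR_API_KEY" },
  body: form,
});
console.log(await res.json()); // per-platform status/URLs, written back to Airtable
```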
by Leonardo Grigorio
Want to see it in action? Watch the full breakdown here: 📺 Video Link

Template Description

This n8n workflow empowers you to query structured financial data from Google Sheets or CSV files using AI-generated SQL. Unlike traditional vector database solutions that falter with numerical queries, this template leverages PostgreSQL for efficient data storage and an AI agent that dynamically creates optimized SQL queries from natural language inputs.

What It Does

Retrieves data from Google Sheets or CSV files
Infers the data schema and builds a PostgreSQL table
Populates the table with your data
Uses an AI agent to translate natural language questions into SQL queries
Returns precise numerical results quickly and efficiently

Why Use This?

No SQL knowledge required; the AI generates queries for you
Bypasses the inefficiencies and costs of vector database approaches
Scales effortlessly without overwhelming the language model
Fully free and open-source

Setup Requirements

Pre-Conditions

**PostgreSQL Database**: A running PostgreSQL instance (no specific extensions required beyond a standard installation).
**Google Sheets Access**: A publicly accessible or shared Google Sheet URL with structured data (e.g., financial records). Need a starting point? Use this Sample Google Sheet Template.
**n8n Instance**: A working n8n setup with access to the Google Drive and PostgreSQL nodes.

Step-by-Step Instructions

Add Your Google Sheets URL: Open the "Google Drive Trigger" node, replace the placeholder URL with your Google Sheet's link, and verify that the sheet name matches your data source.
Configure PostgreSQL: Update the "PostgreSQL" nodes with your database credentials (host, database, user, password). The workflow automatically creates and populates the table based on your data schema.
Run the Workflow: Execute the workflow manually to set up the database. Once initialized, use the AI agent by asking questions like "How much did I sell last week?" or "What were the total sales for Product X in February?"
(Optional) Automate Updates: Add a "Schedule Trigger" node to sync your Google Sheets data with PostgreSQL on a regular basis.

How It Works

**Schema Detection**: The workflow analyzes your Google Sheets or CSV data to infer its structure and create an appropriate PostgreSQL table (a sketch of this step follows below).
**AI-Powered Queries**: An optimized AI agent converts your natural language questions into precise SQL queries, ensuring accurate results.
**Efficient Retrieval**: By using PostgreSQL instead of vector-based methods, this template avoids common pitfalls like slow performance or inaccurate numerical outputs.

Tips for Success

Ensure your Google Sheet or CSV has consistent column headers for smooth schema detection.
Test with simple questions first to verify the AI agent's query generation.
Check out the n8n Template Submission Guidelines for more best practices.
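To illustrate the schema detection step, here is a small sketch of how column types can be inferred from sample rows to build a CREATE TABLE statement. The actual workflow performs this inside n8n nodes; the heuristics and column names below are illustrative:

```typescript
// Sketch of schema inference: inspect sample rows and emit a CREATE TABLE
// statement. Heuristics and column names are illustrative, not the workflow's
// exact logic.
type Row = Record<string, string>;

function inferType(values: string[]): string {
  if (values.every((v) => /^-?\d+$/.test(v))) return "INTEGER";
  if (values.every((v) => v !== "" && !isNaN(Number(v)))) return "NUMERIC";
  if (values.every((v) => !isNaN(Date.parse(v)))) return "DATE";
  return "TEXT";
}

function createTableSql(table: string, rows: Row[]): string {
  const cols = Object.keys(rows[0]).map((col) => {
    const type = inferType(rows.map((r) => r[col]));
    return `"${col.toLowerCase().replace(/\s+/g, "_")}" ${type}`;
  });
  return `CREATE TABLE IF NOT EXISTS ${table} (${cols.join(", ")});`;
}

console.log(createTableSql("sales", [{ Product: "X", Amount: "120.50", Date: "2024-02-01" }]));
// CREATE TABLE IF NOT EXISTS sales ("product" TEXT, "amount" NUMERIC, "date" DATE);
```

Consistent headers matter because they become the column names the AI agent later references in its generated SQL.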
by Amit Mehta
How it Works

This workflow automates the collection and analysis of YouTube comments from a video and sends a summary report via email, using Google Sheets, the YouTube API, OpenAI (GPT-4o), and Gmail. Whether you're a content creator, brand manager, or social media analyst, this workflow helps you automate sentiment analysis and receive insights directly in your inbox, all triggered from a simple spreadsheet.

🎯 Use Case

Ideal for:
**YouTubers** monitoring audience sentiment
**Marketing teams** analyzing campaign feedback
**Community managers** summarizing engagement

Setup Instructions

1. Upload the Spreadsheet
File name: Youtube_Video
Sheet structure:

| ID | Video Title | YouTube Video ID | Status |

Add video IDs and set their Status to Pending.

2. Configure Google Sheets Nodes
Connect your Google account to:
Pick Video IDs from Google Sheet
Update Status on Google Sheet

3. Add API Credentials
**YouTube API Key** → for the comment + video scraping nodes
**OpenAI API Key** → for analyzing comments
**Gmail Account** → for sending the summary email

4. Activate the Workflow
Once live, the workflow will:
Watch for new or updated rows in the spreadsheet
Scrape comments using the YouTube API (see the request sketch below)
Analyze sentiment and key themes via GPT-4o
Send a formatted HTML email with the summary
Update the spreadsheet status to Mail sent

🔁 Workflow Logic

Trigger: New/updated row in Google Sheet
Retrieve: YouTube video metadata + comments
Analyze: Comments using GPT-4o
Email: Summary report via Gmail
Update: Spreadsheet status to Mail sent

🧩 Node Descriptions

| Node Name | Description |
|-----------|-------------|
| Pick Video IDs from Google Sheet | Watches the spreadsheet and retrieves pending video IDs |
| If | Checks whether the status is 'Pending' |
| Limit | Restricts the number of processed rows |
| Set Video Details | Prepares video info (e.g., title, channel) |
| Get YouTube Video Details | Fetches metadata (title, channel, etc.) |
| Get YouTube Video Comments | Pulls top-level comments using the YouTube API |
| Prepare Comments Data | Formats comment text for OpenAI |
| AI Agent | Summarizes comments using OpenAI's GPT-4o |
| Prepare HTML for Email | Converts the summary into HTML for the email body |
| Gmail Account Configuration | Sends the email report via Gmail |
| Update Status on Google Sheet | Marks the row as 'Mail sent' |

🛠️ Customization Tips

Change the AI prompt for tone, length, or custom metrics
Send results to Slack or Telegram instead of Gmail
Export summaries to Notion, Airtable, or PDF
Schedule it daily/weekly for recurring analysis

📒 Suggested Sticky Notes for Workflow

| Node/Section | Sticky Note Content |
|--------------|---------------------|
| Pick Video IDs from Google Sheet | "Triggers on new YouTube videos in your spreadsheet" |
| AI Agent | "Uses OpenAI to generate an analysis summary – customize the prompt as needed" |
| Gmail | "Sends the summary report – you can update the subject, recipients, or style" |
| Update Status | "Marks the video as processed to avoid duplicate runs" |

📎 Required Files

| File Name | Purpose |
|-----------|---------|
| Youtube_Video | Google Sheet holding YouTube video IDs and status |
| Youtube_Comment_Scraper.json | Main n8n workflow export for this automation |

🧪 Testing Tips

Add one test video with a valid YouTube video ID and status = Pending
Monitor the workflow logs to confirm API responses
Confirm summary delivery in your inbox
Verify that the status updates in the sheet

🏷 Suggested Tags & Categories

#YouTube #OpenAI #Automation #Marketing #Email #Analytics
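The comment scraping in "Get YouTube Video Comments" uses the YouTube Data API v3 commentThreads endpoint. A minimal sketch of that call (API key auth, top-level comments only, pagination omitted):

```typescript
// Sketch of the comment-scraping step using the YouTube Data API v3
// commentThreads endpoint. Pagination and error handling are omitted.
const API_KEY = process.env.YOUTUBE_API_KEY!;

async function fetchComments(videoId: string): Promise<string[]> {
  const url = new URL("https://www.googleapis.com/youtube/v3/commentThreads");
  url.searchParams.set("part", "snippet");
  url.searchParams.set("videoId", videoId);
  url.searchParams.set("maxResults", "100");
  url.searchParams.set("key", API_KEY);

  const res = await fetch(url);
  const data = await res.json();
  // Top-level comment text, as formatted by "Prepare Comments Data"
  return data.items.map(
    (i: any) => i.snippet.topLevelComment.snippet.textDisplay,
  );
}
```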
by Jimleuk
This n8n template lets you summarize individual team member activity on MS Teams for the past week and generates a report.

For remote teams, chat is a crucial communication tool for getting work done, but with so many conversations happening at once and in multiple threads, ideas, information and decisions usually live in the moment, get lost just as quickly, and are altogether forgotten by the weekend! Using this template, this doesn't have to be the case. Have AI crawl through last week's activity, summarize all messages and replies, and generate a casual and snappy report to bring the team back into focus for the current week. A project manager's dream!

How it works

A scheduled trigger is set to run every Monday at 6am to gather all team channel messages from the last week.
Messages are grouped by user (a short sketch of this step follows below).
AI analyses the raw messages and replies to pull out interesting observations and highlights. These are referred to as the individual reports.
All individual reports are then combined and summarized into what becomes the team weekly report. This allows understanding of group and similar activities.
Finally, the team weekly report is posted back to the channel. The timing is important, as it should be the first message of the week, ready for the team to glance over coffee.

How to use

This works best per project, where most of the comms happen on a single channel. Avoid combining channels; instead, duplicate this workflow for additional channels.
You may need to filter for specific team members if you want specific team updates.
Customise the report to suit your organisation, team or channel. You may prefer to be more formal if clients or external stakeholders are also present.

Requirements

MS Teams for the chat platform
OpenAI for the LLM

Customising this workflow

If the Teams channel is busy enough already, consider sending the final report by email.
Pull in project metrics to include in your report. As extra context, it may be interesting to tie the messages to production performance.
Use an AI Agent to query a knowledge base or tickets relevant to the messages. This may be useful for attaching links or references to add context.
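The group-by-user step ahead of the individual reports is plain bucketing. A minimal sketch, with an illustrative message shape:

```typescript
// Sketch of the "messages are grouped by user" step that precedes the
// per-person AI summaries. The message shape is illustrative.
interface TeamsMessage { from: string; text: string; createdAt: string }

function groupByUser(messages: TeamsMessage[]): Map<string, TeamsMessage[]> {
  const groups = new Map<string, TeamsMessage[]>();
  for (const m of messages) {
    const list = groups.get(m.from) ?? [];
    list.push(m);
    groups.set(m.from, list);
  }
  return groups; // each entry becomes one "individual report" prompt
}
```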
by Dhruv from Saleshandy
This n8n template captures every "Request a Demo" booking in Calendly, uses OpenAI to score and qualify leads in real time, routes them into the correct Saleshandy sequence, and logs all data in Google Sheets for full GTM visibility.

Use cases include:
Empowering SDR teams to focus on high-value demos
Providing growth marketers with reliable funnel metrics
Automating triage for B2B AE teams overwhelmed by demo requests

Good to know

OpenAI GPT-4 calls are billed by token usage; expect roughly 1,200 tokens per lead.
The Calendly API rate-limits at 180 requests/min; consider batching if volume spikes.
Google Sheets writes are single-threaded; high-volume users may opt for Airtable or BigQuery.

How it works

Capture – A Webhook node listens for every new "Request a Demo" form submission in Calendly.
Score – An AI Agent node sends the job title, company size, domain quality, and custom questions to OpenAI, which returns a 1–10 score plus a label (Qualified/Semi-qualified/Unqualified).
Verify meeting – An HTTP Request node confirms via the Calendly API that a slot was actually scheduled.
Route – A Switch node selects the appropriate Saleshandy sequence ID (Qualified, Nurture, Disqualify).
Send – HTTP Request nodes add each prospect to the chosen Saleshandy sequence.
Log – Google Sheets nodes write to three tabs (Qualified, Semi-qualified, Unqualified) with the lead data, score, routing path, and timestamp.

Prerequisites

n8n workspace
Accounts & API credentials for: Calendly, OpenAI (GPT-4 or GPT-3.5), Google Sheets, Saleshandy

Step-by-Step Setup

1. Import the n8n Template
Upload the JSON file into your n8n workspace.

2. Add Required Credentials
In n8n → Credentials, add:
Calendly: Personal Access Token (PAT)
OpenAI: API Key
Google Sheets: OAuth2 connection
Saleshandy: API Key

3. Calendly Setup
Go to the Calendly Webhook Docs.
Create a Routing Form in Calendly.
Generate your access token.
Use Postman or any API client to make a POST request that creates a webhook subscription: use your n8n webhook URL in the url field, add your Authorization token, and extract the Organization ID (a sketch of this request follows below).
Paste the webhook URL into the Calendly Routing Form.

4. Set Your Saleshandy Sequences
In n8n, locate the Set: Sequence IDs node. Replace the placeholder text with your actual Qualified, Semi-qualified, and Unqualified Saleshandy sequence step IDs.

5. Configure Google Sheets
Create a spreadsheet with the tabs Qualified, Semi-qualified, and Unqualified. In n8n, connect the three Google Sheets nodes to this file.

Customising this workflow

Adjust scoring logic – Modify the OpenAI prompt in the AI Agent node to weight ARR, industry, or headcount differently.
Refine thresholds – Change the Switch node rules for score ranges (e.g., Qualified ≥8, Semi-qualified 5–7).
Swap destinations – Edit the HTTP Request nodes to integrate with your CRM or email platform instead of Saleshandy.
Enhance logging – Replace Google Sheets with Airtable, BigQuery, or another analytics store.
Add notifications – Insert Slack or Microsoft Teams nodes after routing to alert reps instantly.
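The one-time webhook subscription from step 3 (normally done in Postman) looks roughly like this. The endpoint follows Calendly's v2 API; the event name and scope shown are assumptions, so check the Calendly webhook docs for your use case:

```typescript
// Sketch of the one-time Calendly webhook-subscription request from step 3.
// Endpoint follows Calendly's v2 API; event name and scope are assumptions.
await fetch("https://api.calendly.com/webhook_subscriptions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CALENDLY_PAT}`, // Personal Access Token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://your-n8n.example.com/webhook/demo-request", // n8n Webhook node URL
    events: ["invitee.created"], // assumed event for new bookings
    organization: "https://api.calendly.com/organizations/YOUR_ORG_ID",
    scope: "organization",
  }),
});
```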