by andsync
Who is this template for?
This template is for learners, researchers, students and professionals who want to quickly capture the essence of a YouTube video.

Steps in the workflow:
- Gets the transcript of any YouTube video through Supadata.
- Processes the Supadata result into one text (a merging sketch is shown at the end of this description).
- Processes the text with AI (any LLM of your choice).

Final result: a summary accompanied by the most important lessons and interesting facts mentioned in the video. The workflow automatically creates a new Google Doc with this output, in a folder of your choice on your Google Drive. (If you want to convert the markdown text to real markup after the Google Doc is created: select all text (Ctrl-A or Cmd-A), cut the text (Ctrl-X or Cmd-X) and then go to Edit > Paste from Markdown.)

Setup
- Edit your Supadata credentials in the second node (you can start for free).
- Choose your favourite LLM for AI processing.
- Edit your Google Drive credentials.

How to adjust it to your needs
- If you want a different outcome, edit the prompt in "Proces transcript to summary template".
- The file name is a combination of 'transcript ' and the date and time. You can change this to whatever you need in the Google Drive node.
- Supadata offers more details and options (or even translation) when working with transcripts. Check the options here: https://supadata.ai/documentation/youtube/get-transcript
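For the "process the result into one text" step, here is a minimal sketch of what a Code node could do, assuming Supadata returns the transcript as an array of segments with a `text` property (field names are assumptions; check the Supadata response you actually receive):

```js
// Hypothetical n8n Code node: merge Supadata transcript segments into one text.
// Field names (content, text) are assumptions; adjust to the actual Supadata response.
const segments = $input.first().json.content || [];
const fullText = segments.map(s => s.text).join(' ');
return [{ json: { transcript: fullText } }];
```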
by Marcelo Abreu
What this workflow does
- Runs automatically every Monday morning at 8 AM
- Collects your Meta Ads data from the last 7 days for a given account (date range is configurable)
- Formats the data, aggregating it at the campaign, ad set, and ad levels (see the aggregation sketch after this section)
- Generates AI-driven analysis and insights on your results, providing actionable recommendations
- Renders the report as a visually appealing PDF with charts and tables
- Sends the report via Slack (you can also add email or WhatsApp)

A sample of the first page of the report:

Setup Guide
- Create an account on pdforge and use the pre-made Meta Ads template.
- Connect Meta Ads, OpenAI and Slack to n8n.
- Set your Ad Account ID and date range (choose from 'last_7d', 'last_14d', 'last_30d').
- (optional) Customize the scheduling date and time.

Requirements
- Meta Ads (via Facebook Graph API): Documentation
- pdforge access: Integration guide
- AI API access (e.g. via OpenAI, Anthropic, Google or Ollama)
- Slack access (via OAuth2): Documentation

Feel free to contact me via LinkedIn if you have any questions! 👋🏻
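As a rough illustration of the campaign-level aggregation step, here is a hypothetical Code node that sums a few common Graph API insight fields per campaign; adapt the field names to the fields you actually request:

```js
// Hypothetical n8n Code node: aggregate Meta Ads insights rows at campaign level.
// Field names (campaign_name, spend, impressions, clicks) follow common Graph API
// insights fields, but treat this as a sketch; adjust to your own request.
const rows = $input.all().map(i => i.json);
const byCampaign = {};
for (const r of rows) {
  const key = r.campaign_name;
  byCampaign[key] = byCampaign[key] || { campaign_name: key, spend: 0, impressions: 0, clicks: 0 };
  byCampaign[key].spend += Number(r.spend || 0);
  byCampaign[key].impressions += Number(r.impressions || 0);
  byCampaign[key].clicks += Number(r.clicks || 0);
}
return Object.values(byCampaign).map(c => ({ json: c }));
```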
by Alex Kim
🎬 Google Veo 3 Prompt and Video Generator via Leonardo.ai + Claude 4
Transform text descriptions into cinematic videos using Google's Veo 3 model through Leonardo.ai's platform!

🚀 What This Workflow Does
This advanced automation pipeline takes your creative ideas and turns them into professional-quality videos using Google's powerful Veo 3 model (accessed via Leonardo.ai), enhanced by Claude 4's sophisticated prompt engineering.

✨ Key Features
- 🤖 **AI-Powered Prompt Enhancement**: Uses Claude 4 Sonnet with Wikipedia integration to craft optimal Google Veo 3 prompts
- 🎥 **Professional Video Generation**: Leverages Google's Veo 3 model through Leonardo.ai for high-quality text-to-video conversion
- ☁️ **Automatic Cloud Storage**: Videos are automatically saved to your Google Drive
- 📋 **Structured Prompting**: Follows Google Veo 3 best practices with 8 essential elements (Subject, Context, Action, Style, Camera Motion, Composition, Ambiance, Audio)
- ⚡ **Hands-Off Processing**: Set it and forget it - the workflow handles the entire pipeline

🔧 How It Works
1. Input Your Concept - Describe your video idea in the "Video Context" node
2. AI Enhancement - Claude 4 transforms your description into a cinematic Google Veo 3 prompt using advanced techniques
3. Video Generation - Google's Veo 3 model (via Leonardo.ai) creates your video (720p resolution, ~8 seconds)
4. Smart Waiting - A 4-minute processing buffer ensures completion
5. Auto-Download - Retrieves the finished video from Leonardo's servers
6. Cloud Storage - Uploads directly to your Google Drive folder

💡 Perfect For
- **Content Creators** looking to automate video production
- **Marketing Teams** needing quick promotional videos
- **Educators** creating engaging visual content
- **Social Media Managers** generating scroll-stopping content
- **Creative Professionals** exploring AI-assisted filmmaking

📋 Requirements
- Leonardo AI account with API access
- Anthropic API key (Claude 4 Sonnet)
- Google Drive integration
- n8n instance (cloud or self-hosted)

👨‍💻 About the Creator
Created by: AlexK1919 - AI-Native Workflow Automation Architect, n8n Ambassador and Verified Partner, Co-Founder @ WotAI
If you'd like to review more Google Veo 3 prompts organized by business category, check out over 9,000 free, pre-made prompts at: Google Veo 3 Prompts

📄 License
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license.
Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/

🎯 Example Output
Input: "Star Wars stormtrooper digging for uranium in desert, saying something funny"
The AI generates a structured prompt with:
- **Subject**: Detailed character description
- **Context**: Desert environment specifics
- **Action**: Dynamic digging movements
- **Style**: Cinematic vlog aesthetic
- **Camera**: Appropriate angles and movement
- **Audio**: Dialogue, sound effects, and music

⚙️ Setup Notes
- **Character Limit**: Prompts are optimized for Leonardo's 1,500-character API limit (a trimming sketch follows this section)
- **Processing Time**: Allow 4+ minutes for Google Veo 3 video generation
- **Quality**: 720p resolution with native audio generation
- **Consistency**: Uses advanced Google Veo 3 prompting for reliable results

🔄 Customization Options
- Modify the prompt engineering system message for different styles
- Adjust video resolution and model parameters
- Change the storage destination (Google Drive folder)
- Add post-processing steps or notifications

📈 Why This Workflow Rocks
Unlike simple text-to-video tools, this workflow:
- **Intelligently enhances** your prompts using AI for Google Veo 3
- **Follows industry best practices** for Google Veo 3 prompting
- **Automates the entire pipeline** from idea to stored video
- **Leverages multiple AI models** for superior results
- **Handles technical details** like API limits and timing

🚨 Pro Tips
- Be specific in your initial context - detail creates better videos
- The workflow includes comprehensive Google Veo 3 prompting guidelines
- Videos are typically 5-8 seconds - plan accordingly for longer content
- Experiment with different styles and camera movements optimized for Veo 3
- The AI can access Wikipedia for factual enhancement

Ready to revolutionize your video creation process? Import this workflow and start generating professional videos with just a text description! Perfect for anyone looking to harness the power of AI for content creation.

Tags: #veo3 #GoogleVeo3 #AI #VideoGeneration #Leonardo #Claude #Automation #ContentCreation #GoogleAI
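As a sketch of how the 1,500-character limit could be enforced before calling Leonardo, assuming the enhanced prompt arrives in a `prompt` field (an assumption, not necessarily the field name used in this workflow):

```js
// Hypothetical n8n Code node: keep the Claude-enhanced prompt within Leonardo's
// 1,500-character limit before sending the video generation request.
const MAX_CHARS = 1500;
let prompt = $input.first().json.prompt || '';
if (prompt.length > MAX_CHARS) {
  // Trim at the last sentence boundary that still fits, falling back to a hard cut.
  const cut = prompt.slice(0, MAX_CHARS);
  const lastStop = cut.lastIndexOf('.');
  prompt = lastStop > 0 ? cut.slice(0, lastStop + 1) : cut;
}
return [{ json: { prompt, length: prompt.length } }];
```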
by Aditya Gaur
Who is this template for?
This template can be used by any automator who wants to create a work item (incident, user story, or bug) in Azure DevOps whenever an alert is raised by a system.

How it works
- Each time an alert is raised in a system (for example, Elastic raises an alert for a missing host or domain), the workflow reads the alert and creates a work item in Azure DevOps (a request sketch follows this description).
- The workflow can be customized to send whatever information you need to Azure DevOps.

Setup Instructions
- **Azure DevOps Organization and Project:** Make sure you have access to an Azure DevOps organization and a project where the work item will be created.
- **Personal Access Token (PAT):** You need a Personal Access Token with permissions to create work items. You can generate a PAT from the Azure DevOps user settings.
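For reference, a minimal sketch of the work item creation call the HTTP Request node can make against the Azure DevOps REST API; the alert field names used here are assumptions and should be mapped from your own alert payload:

```js
// Hypothetical n8n Code node that builds the JSON Patch body for the HTTP Request node.
// Endpoint (replace {org}/{project} and the work item type as needed):
//   POST https://dev.azure.com/{org}/{project}/_apis/wit/workitems/$Bug?api-version=7.1
//   Header: Content-Type: application/json-patch+json
//   Auth:   Basic, with an empty username and your PAT as the password.
// The alert field names (alertName, message) are assumptions; map your own alert payload.
const alert = $input.first().json;
const body = [
  { op: 'add', path: '/fields/System.Title', value: `Alert: ${alert.alertName}` },
  { op: 'add', path: '/fields/System.Description', value: alert.message },
];
return [{ json: { body } }];
```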
by David Olusola
🎨 AI Image Editor with Form Upload + Telegram Delivery 🚀

Who's it for? 👥
This workflow is built for content creators, social media managers, designers, and agencies who need fast, AI-powered image editing without the hassle. Whether you're batch-editing for clients or spicing up personal projects, this tool gets it done — effortlessly.

What it does 🛠️
A seamless pipeline that:
- 📥 Accepts uploads + prompts via a clean form
- ☁️ Saves images to Google Drive automatically
- 🧠 Edits images with OpenAI's image API
- 📁 Converts results to downloadable PNGs
- 📬 Delivers the final image instantly via Telegram
Perfect for AI-enhanced workflows that need speed, structure, and simplicity.

How it works ⚙️
1. User Uploads: Fill a form with an image + editing prompt
2. Cloud Save: Auto-upload to your Google Drive folder
3. AI Editing: OpenAI processes the image with your prompt
4. Convert & Format: Image saved as PNG
5. Telegram Delivery: Final result sent straight to your chat 💬

You'll need ✅
- 🔑 OpenAI API key
- 📂 Google Drive OAuth2 setup
- 🤖 Telegram bot token & chat ID
- ⚙️ n8n instance (self-hosted or cloud)

Setup in 4 Easy Steps 🛠️
1. Connect APIs
   - Add OpenAI, Google Drive, and Telegram credentials to n8n
   - Store keys securely (avoid hardcoding!)
2. Configure Settings
   - Set the Google Drive folder ID
   - Add the Telegram chat ID
   - Tweak the image size (default: 1024×1024)
3. Deploy the Form
   - Add a Webhook Trigger node
   - Test with a sample image
   - Share the form link with users
4. 🎯 Fine-Tune Variables
   - In the Set node, customize: 📐 image size, 📁 folder path, 📲 delivery options, ⏱️ timeout duration

Want to customize more? 🎛️
🖼️ Image Settings
- Change the size (e.g. 512x512 or 2048x2048)
- Update the model (when new versions drop)
📂 Storage
- Auto-organize files by date/category
- Add dynamic file names using n8n expressions (see the sketch after this section)
📤 Delivery
- Swap Telegram with Slack, email, Discord
- Add multiple delivery channels
- Include the image prompt or metadata in messages
📝 Form Upgrades
- Add fields for advanced editing
- Validate file types (e.g. PNG/JPEG only)
- Show a progress bar for long edits
⚡ Advanced Features
- Add error handling or retry flows
- Support batch editing
- Include approvals or watermarking before delivery

⚠️ Notes & Best Practices
- ✅ Check your OpenAI credit balance
- 🖼️ Test with different image sizes/types
- ⏱️ Adjust timeout settings for larger files
- 🔐 Always secure your API keys
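A small, hypothetical Code node sketch for the dynamic file naming idea above (the `prompt` field name is an assumption; adjust to your form field):

```js
// Hypothetical n8n Code node: build a dynamic, date-based file name for the edited image.
const prompt = ($input.first().json.prompt || 'edit').replace(/[^a-z0-9]+/gi, '-').slice(0, 30);
const stamp = new Date().toISOString().slice(0, 10); // e.g. 2025-01-31
return [{ json: { fileName: `${stamp}-${prompt}.png` } }];
```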
by Bazhard
TOTP Validation with Function Node
This template allows you to verify whether a 6-digit TOTP code is valid using the corresponding TOTP secret. It can be used in an authentication system.

The inputs need to be:
- a base32 TOTP secret (String)
- a 6-digit code (String)

Important: The 6-digit code must be in text format. If the code starts with zeros and is treated as a number, it could cause validation issues.

The Function node generates a 6-digit code from the TOTP secret, then compares it with the provided code. If they match, it returns 1; otherwise, it returns 0 (a sketch of this check appears after this description).

Example usage: you retrieve the user's TOTP secret from a database, then you want to verify whether the 2FA code provided by the user is valid.

Setup Guidelines
You only need the TOTP VALIDATION node. You will need to modify lines 39 and 40 of the node with the correct values for your specific context.

Testing the Template
You can define a sample secret and code in the EXAMPLE FIELDS node of the template, then click "Test Workflow". If the code is valid for the provided secret, the flow will proceed to the true branch of the IF CODE IS VALID node. Otherwise, it will go to the false branch.
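For illustration only, here is a minimal sketch of what such a TOTP check can look like in an n8n Code/Function node using Node's built-in crypto (the built-in module must be allowed in your instance); the template's own node may differ in its exact implementation:

```js
// Minimal TOTP verification sketch (RFC 6238, HMAC-SHA1, 30-second steps).
const crypto = require('crypto');

function base32Decode(input) {
  const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
  let bits = '';
  for (const c of input.replace(/=+$/, '').toUpperCase()) {
    const val = alphabet.indexOf(c);
    if (val === -1) continue;
    bits += val.toString(2).padStart(5, '0');
  }
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) bytes.push(parseInt(bits.slice(i, i + 8), 2));
  return Buffer.from(bytes);
}

function totp(secret, timeStep = 30, digits = 6) {
  const counter = Math.floor(Date.now() / 1000 / timeStep);
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(counter));
  const hmac = crypto.createHmac('sha1', base32Decode(secret)).update(buf).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, '0');
}

// Field names `secret` and `code` are assumptions; adjust to your input data.
const { secret, code } = $input.first().json;
return [{ json: { valid: totp(secret) === String(code) ? 1 : 0 } }];
```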
by Jimleuk
This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro - a multimodal LLM which can accept and analyse images. This is a really simple example of how easy it is to build and leverage powerful AI models in your repetitive tasks.

How it works
- For this demo, we'll import a public image from a popular stock photography website, Pexels.com, into our workflow using the HTTP Request node.
- With multimodal LLMs, there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we'll resize the image using the Edit Image node to achieve faster processing.
- The image is used as an input to the Basic LLM node by defining a "user message" entry with the binary (data) type.
- The LLM node has the Gemini 1.5 Pro language model attached and we'll prompt it to generate a caption title and text appropriate for the image it sees.
- Once generated, the caption text is positioned over the original image to complete the task. We can calculate the positioning relative to the number of characters produced, using the Code node (see the sketch after this section).

An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/f_auto,q_auto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

Requirements
- Google Gemini API key
- Access to Google Drive

Customising the workflow
- Not using Google Gemini? n8n's Basic LLM node supports the standard syntax for image content for models that support it - try using GPT-4o, Claude or LLaVA (via Ollama).
- Google Drive is only used for demonstration purposes. Feel free to swap this out for other triggers such as webhooks to fit your use case.
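A rough sketch of how the Code node could translate caption length into a position; the numbers (characters per line, font size, image height) and field names are assumptions to tune for your image:

```js
// Hypothetical n8n Code node: derive caption placement from the generated text length.
const { caption } = $input.first().json;
const lineLength = 40;                              // assumed characters per line
const lines = Math.ceil(caption.length / lineLength);
const fontSize = 24;
const padding = 20;
const boxHeight = lines * fontSize * 1.4 + padding * 2;
// Position the text block near the bottom of a 768px-tall image (assumed height).
const y = 768 - boxHeight - padding;
return [{ json: { caption, x: padding, y: Math.round(y), fontSize } }];
```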
by Yulia
This workflow shows how to use a self-hosted Large Language Model (LLM) with n8n's LangChain integration to extract personal information from user input. This is particularly useful for enterprise environments where data privacy is crucial, as it allows sensitive information to be processed locally.

📖 For a detailed explanation and more insights on using open-source LLMs with n8n, take a look at our comprehensive guide on open-source LLMs.

🔑 Key Features
Local LLM
- Connect Ollama to run the Mistral NeMo LLM locally
- Provide a foundation for compliant data processing, keeping sensitive information on-premises
Data extraction
- Convert unstructured text to a consistent JSON format
- Adjust the JSON schema to meet your specific data extraction needs (a sample schema is sketched after this section)
Error handling
- Implement auto-fixing for LLM outputs
- Include error output for further processing

⚙️ Setup and configuration
Prerequisites
- n8n AI Starter Kit installed
Configuration steps
1. Add the Basic LLM Chain node with system prompts.
2. Set up the Ollama Chat Model with optimized parameters.
3. Define the JSON schema in the Structured Output Parser node.

🔍 Further resources
- Run LLMs locally with n8n
- Video tutorial on using local AI with n8n

Apply the power of self-hosted LLMs in your n8n workflows while maintaining control over your data processing pipeline!
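As an example of the kind of schema you might define in the Structured Output Parser node, here is a minimal sketch assuming you want to extract name, email and phone; the field names are assumptions, so adjust them to your own extraction needs:

```json
{
  "type": "object",
  "properties": {
    "name":  { "type": "string", "description": "Full name of the person" },
    "email": { "type": "string", "description": "Email address, if mentioned" },
    "phone": { "type": "string", "description": "Phone number, if mentioned" }
  },
  "required": ["name"]
}
```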
by Dustin
Are you a cord-cutter? Do you find yourself looking through the many titles of videos uploaded to YouTube, just to find the ones you want to watch? Even when you subscribe to the channels you like, do you find that you want to watch the news now and my tech/n8n videos later? Now you can have n8n grab the last 8 videos posted in the last 24 hours and put them in a playlist for the day; each day the old playlist is deleted. Tired of a channel filling your subscriptions with tons of videos a day? This workflow can be used for any channel, whether you are subscribed to it or not. It's a YouTube playlist automation.

How it works:
Create your list of preferred YouTube channels in a Google Sheet and the workflow will create a daily playlist for you; it will also delete the playlist created yesterday.

Instructions
To set this up, you need to create a Google Sheet with the following headings in line 1:
- Channel User Name
- Channel Name
- Channel Link
- Channel ID

Copy the 'Create your Channel List' into its own workflow and point the Google Sheets nodes to your new sheet.

To get the 'Create your Channel List' to work, visit the page of each channel you want included in your playlist, get the "@" name of the channel, and add it to the 'Channel User Name' column of your Google Sheet. For example: if you wanted to include this channel: Recruit Training Videos - Corporal Stock, you would search for the name and add this to the next available row of the 'Channel User Name' column: @CorporalStock

Once you add all Channel User Names, run the 'Create your Channel List' workflow and it will fill in the remaining details. Now the 'YT Playlist Creator' can be run.

Note: The first time the workflow is run, disconnect the 'Delete Yesterday's Playlist' leg, or the workflow will error and stop (because there is no 'Yesterday's Playlist').

Note: this was made to create a playlist every day, delete yesterday's playlist, and only get the last 8 videos posted within the last 24 hours. I chose to put the date (YYMMDD format) in front of the playlist name to ensure that it doesn't conflict with another playlist (see the sketch after this section). I also have it notify me in Telegram, so I know that the new playlist is posted.
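A hypothetical Code node sketch of the two conventions described above, the YYMMDD playlist prefix and the "last 8 videos posted within 24 hours" filter; the `publishedAt` and title fields are assumptions; match them to your YouTube node output:

```js
// Hypothetical n8n Code node: date-prefixed playlist title plus a 24-hour video filter.
const now = new Date();
const yymmdd = now.toISOString().slice(2, 10).replace(/-/g, ''); // e.g. 250131
const playlistTitle = `${yymmdd} Daily Playlist`;

const cutoff = now.getTime() - 24 * 60 * 60 * 1000;
const recentVideos = $input.all()
  .filter(item => new Date(item.json.publishedAt).getTime() >= cutoff)
  .slice(0, 8); // keep at most the last 8 videos

return [{ json: { playlistTitle, videoCount: recentVideos.length } }];
```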
by merfy
Use Case
Manually extracting images from PDF files for analysis is often slow and inefficient. Many users resort to taking screenshots of each page, uploading them to an AI tool like OpenAI for image analysis, and then manually copying the insights into a document. This manual process is time-consuming and prone to errors. This workflow streamlines the entire process by automatically extracting images from a PDF, analyzing them using the GPT-4o model, and saving the results in seconds, eliminating the need for manual effort.

What This Workflow Does
- Extracts all images from the uploaded PDF file automatically. The workflow scans each page of the PDF and identifies embedded images without manual intervention.
- Uses the GPT-4o model to analyze each extracted image. Each image is processed through GPT-4o to generate descriptive insights, summaries, or context-specific analysis depending on the use case.
- Saves the analysis results to a .txt file, including image URLs. The final output is a plain text file containing both the image URLs (e.g., hosted on cloud storage) and the corresponding GPT-4o analysis, ready for further use or sharing.

Setup
1. Set up your credentials when you first open the workflow. You'll need accounts for OpenAI, Convert API, and Google Drive.
2. Convert API does not rate-limit your requests, but you may occasionally receive a 503 Service Unavailable error. This doesn't mean that you cannot convert your file; it simply means that you should retry the conversion after a few seconds (see the retry sketch after this section).
3. Upload a PDF with images to Google Drive.
4. Remove unnecessary parts and retrieve image-related information.
5. Integrate image and image analysis information together.
6. Analyze each image using the OpenAI GPT-4o model.
7. Retrieve all image analysis content and image URLs.
8. Integrate multiple image URLs and analysis content.
9. Output content to a .txt file.
The template was created in n8n v1.83.2.

How to Customize
- Replace the manual trigger with a Google Drive trigger or other automation triggers
- Change the image analysis model (e.g., switch or fine-tune GPT-4o)
- Send the results to other platforms (e.g., Slack, Telegram, LINE, etc.) instead of saving to a .txt file
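A hedged sketch of retrying on a 503, if you make the Convert API call from a Code node (the endpoint URL below is a placeholder, not the exact Convert API call); the HTTP Request node's built-in "Retry On Fail" setting is an alternative:

```js
// Hypothetical n8n Code node: retry the conversion request when a 503 comes back.
const options = { method: 'POST', url: '<your Convert API endpoint>', json: true };
let response;
for (let attempt = 1; attempt <= 3; attempt++) {
  try {
    response = await this.helpers.httpRequest(options);
    break;
  } catch (err) {
    const status = err.httpCode || err.statusCode || (err.response && err.response.status);
    if (String(status) === '503' && attempt < 3) {
      await new Promise(res => setTimeout(res, 5000)); // wait a few seconds, then retry
    } else {
      throw err;
    }
  }
}
return [{ json: response }];
```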
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Summarization", which in this scenario measures the LLM's accuracy and faithfulness in producing summaries based on an incoming YouTube transcript. The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_summarization_quality

How it works
- This evaluation works best for AI summarization workflows.
- For our scoring, we simply compare the generated response to the original transcript (a judge-prompt sketch follows this description). A key factor is to look out for information in the response which is not mentioned in the source transcript.
- A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination.

Requirements
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
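A minimal sketch of how a judge prompt could be assembled before sending it to the evaluating LLM; field names and wording are assumptions, not the exact prompt used in this template:

```js
// Hypothetical n8n Code node: build an LLM-as-judge prompt that scores a summary
// against its source transcript, in the spirit of the pointwise summarization
// quality template linked above.
const { transcript, summary } = $input.first().json;
const judgePrompt = [
  'You are evaluating the quality of a summary against its source transcript.',
  'Score from 1 (poor) to 5 (excellent), penalising any claim in the summary',
  'that is not supported by the transcript. Reply with only the number.',
  '',
  `Transcript:\n${transcript}`,
  '',
  `Summary:\n${summary}`,
].join('\n');
return [{ json: { judgePrompt } }];
```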
by InfraNodus
Using knowledge graphs instead of RAG vector stores
This workflow creates a Telegram chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided through InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:
- Easy and quick to set up and update (no complex data import workflows or vector stores needed)
- A knowledge graph has a holistic view of your knowledge base and knows what it's about
- Better retrieval of relations between the document chunks = higher-quality responses

How it works
This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. Here's a step-by-step description:
1. The user submits a question using the Telegram bot, which is received in the n8n workflow via the Telegram trigger node.
2. The AI Agent node checks a list of tools it has access to. Each tool has a description of its knowledge, auto-generated by InfraNodus.
3. The AI agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
4. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert (a hedged request sketch is shown after this section).
5. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
6. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
7. The final answer is sent back to the Telegram bot, which delivers it to the private chat or a Telegram group.

How to use
You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact or learn more about them on the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node.
6. Create a Telegram bot (the instructions are in the workflow post note) — it takes 30 seconds. Get its API key and create the Telegram credentials to use in the Telegram nodes in this workflow.

Requirements
- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- A Telegram account

Customizing this workflow
You can use this same workflow with a standard AI chatbot via a URL that can also be embedded into any website. You can also use it with the ElevenLabs AI voice agent. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n
Also check out the video tutorial with a demo:

Support
If you have any questions, contact us via the support portal at https://support.noduslabs.com or via our Discord channel. More n8n workflows are available on our support portal: n8n x InfraNodus AI automation workflows.
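A heavily hedged sketch of the request each InfraNodus HTTP node sends: a Bearer-authorized call whose body carries the graph name (the "body name field" mentioned above) plus the user's query. The exact endpoint URL and any additional field names are assumptions, so follow the InfraNodus access points documentation for the real values:

```js
// Hypothetical n8n Code node preparing the body for the InfraNodus HTTP node.
// Only the `name` field and the Bearer header are taken from the description above;
// the `prompt` field is an assumed placeholder for the (possibly reformulated) query.
const body = {
  name: 'my-expert-graph',           // the knowledge graph acting as this expert
  prompt: $input.first().json.query, // assumed field carrying the user question
};
// HTTP node headers would include: { Authorization: 'Bearer <your InfraNodus API key>' }
return [{ json: { body } }];
```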