by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works

This template shows how to calculate a workflow evaluation metric: whether a category matches the expected one. The workflow takes support tickets and generates a category and priority, which are then compared with the correct answers in the dataset.

- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one. More info
- Once the category is generated by the agent, we check whether it matches the expected one in the dataset
- Finally, we pass this information back to n8n as a metric (a sketch of this step follows below)
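As a minimal sketch of that final step, the metric can be set with an expression comparing the generated category to the dataset's expected value. The parameter names, the `expected_category` field, and the trigger node's name are assumptions for illustration, not the template's exact schema:

```json
{
  "parameters": {
    "operation": "setMetrics",
    "metrics": {
      "assignments": [
        {
          "name": "category_match",
          "value": "={{ $json.category === $('Evaluation Trigger').item.json.expected_category ? 1 : 0 }}"
        }
      ]
    }
  }
}
```

A score of 1 means the generated category matched the dataset's answer; averaging this metric across the test dataset gives an overall accuracy figure.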
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Relevance", which in this scenario measures the relevance of the agent's response to the user's question. The scoring approach is adapted from the open-source evaluations project RAGAS; you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_relevance.py

How it works

This evaluation works best for Q&A agents. For our scoring, we analyse the agent's response and ask another AI to generate a question from it. This generated question is then compared to the original question using cosine similarity (see the formula sketch below). A high score indicates relevance and the agent's successful ability to answer the question, whereas a low score means the agent may have added too much irrelevant information, gone off script, or hallucinated.

Requirements

- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
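In RAGAS terms, if $E_{g_i}$ is the embedding of the $i$-th question generated from the answer and $E_o$ is the embedding of the original question, the relevance score is the mean cosine similarity over the generated questions. This is a sketch of the formula used by the linked RAGAS metric; the single-question approach described above is simply the case $N = 1$:

$$
\text{relevance} = \frac{1}{N}\sum_{i=1}^{N} \cos(E_{g_i}, E_o),
\qquad
\cos(a, b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert}
$$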
by Angel Menendez
Who is this for?

This workflow is for professionals and teams who want to automate LinkedIn message replies with intelligent, human-like responses — without losing control over tone or accuracy. Ideal for founders, sales teams, DevRel, or community managers handling high-volume inbound messages.

What problem is this workflow solving?

Responding to every LinkedIn message manually is slow and inconsistent. Basic AI bots generate replies without context or nuance. This subworkflow solves both problems by using structured message routing from Notion and profile insights from UniPile to craft smart, context-aware responses.

What this workflow does

This workflow takes the sender's message and profile (from LinkedIn Auto Message Router with Request Detection) and references your centralized Notion database of message types. It uses that to either match the message to a known response or generate a new one using OpenAI's GPT model — all while following professional tone guidelines.

This is the third workflow in a 3-part automation system:

- Receives data from LinkedIn Auto Message Router with Request Detection
- Uses UniPile LinkedIn Profile Lookup Subworkflow to enrich responses based on follower count or org data

Example Use Case

If a message comes from someone with low reach (e.g., under 1,000 followers), the AI politely deflects a meeting request. If an influencer reaches out, the AI immediately offers a booking link. Your team controls this logic by updating the Notion database — no edits to the workflow required.

Setup

- Connect this workflow as a subworkflow in your router or Slack approval flow
- Store your Notion API key and database ID in n8n
- Provide the following parent inputs (see the input sketch at the end of this description):
  - message – The LinkedIn message text
  - sender – Name of the sender
  - chatid – Session ID (optional, for memory)
  - linkedinprofile – Enriched array with LinkedIn context (follower count, connection info, etc.)
- Add your preferred AI model credentials (supports OpenAI, Gemini, or Ollama)
- Optional: Customize the system prompt to better match your brand voice

How to customize this workflow to your needs

- Update the Notion schema to include industry-specific categories or actions
- Change the AI tone (e.g., humorous, more corporate, etc.)
- Add conditional logic for auto-sending messages without Slack approval
- Extend to support multiple platforms (e.g., email, X/Twitter, Instagram DMs)
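For reference, a parent workflow might pass inputs shaped like this sketch. The four top-level keys are the documented parent inputs; the fields inside linkedinprofile are assumptions based on the description, not the exact UniPile schema:

```json
{
  "message": "Hi! Any chance we could grab 30 minutes to discuss your product?",
  "sender": "Jane Doe",
  "chatid": "li-thread-1042",
  "linkedinprofile": [
    {
      "follower_count": 850,
      "connection_degree": 2,
      "organization": "Acme Corp"
    }
  ]
}
```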
by Yang
🧾 What this workflow does

This workflow automatically generates avatar-style videos from the latest AI-related news using Dumpling AI and HeyGen. It runs every hour, scrapes trending articles, turns them into 30–60 second spoken scripts with GPT-4o, and produces short avatar videos with HeyGen. Finally, it logs the final video URL in a Google Sheet.

👤 Who is this for

- Newsletters and creators who want to automate AI trend updates
- Content marketers generating short-form video content
- Product teams experimenting with AI-generated summaries
- Automation enthusiasts combining LLMs + video + trending data

⚙️ How to set up

🔐 Requirements

- **Dumpling AI API Key** stored securely as an HTTP Header credential
- **HeyGen API Key** added as an HTTP Header credential
- **OpenAI API Key** for GPT-4o (can use GPT-4o-mini if preferred)
- **Google Sheets account** with one column: Video link

🛠 Step-by-step setup

1. Google Sheet Setup: Create a Google Sheet with a single column named Video link
2. Update Credentials: Use n8n's credential manager to add tokens for Dumpling AI, HeyGen, OpenAI, and Google Sheets
3. Optional Customizations:
   - In the "Dumpling AI: Search AI News" node, you can change "query": "AI Agent" to other trending keywords (e.g., "Generative AI", "Autonomous Agents", etc.) (see the sketch after this description)
   - Update the avatar_id and voice_id in the HeyGen request to match your preferred look/sound

🧠 How it works

1. The Schedule Trigger runs hourly.
2. Dumpling AI searches for fresh news related to "AI Agent."
3. The top 4 news links are scraped for full content.
4. Articles are merged and fed into GPT-4o via a LangChain Agent to produce a casual, conversational video script.
5. HeyGen creates a video using the script, avatar, and voice.
6. The workflow waits until the video rendering is complete.
7. Once done, the final video link is logged into Google Sheets.

🧪 Customization Ideas

- Change the interval (e.g., every 6 hours, daily)
- Swap avatar/voice in HeyGen to fit your brand
- Expand to post the video directly to social media
- Add image background or B-roll overlays using Creatomate

This is a fast, automated pipeline to create explainer-style AI news updates using real-time data and generative video tools.
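As a sketch of that customization, the search node's request body might look like the following after swapping the keyword. Only the query field is confirmed by the description above; the result-count field is illustrative:

```json
{
  "query": "Generative AI",
  "num_results": 4
}
```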
by n8n Team
This template quickly shows how to use RAG in n8n.

Who is this for?

This template is for everyone who wants to start giving knowledge to their Agents through RAG.

Requirements

Have a PDF with custom knowledge that you want to provide to your agent.

Setup

No setup required. Just hit Execute Workflow, upload your knowledge document, and then start chatting.

How to customize this to your needs

- Add custom instructions to your Agent by changing the prompts in it.
- Add a different way to load in knowledge to your vector store, e.g. by looking at some Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own vector store tools ready for production.
- Add a more sophisticated way to rank files found in the vector store.

For more information, read our docs on RAG in n8n.
by Oliver Bardenheier
🛠️ Setup Guide 'Get OVH Invoices to Google Sheets'

Author: Oliver Bardenheier

Who is this for?

This workflow is for all users who have services (Domains, BareMetal, VPS, Cloud, etc.) with the provider OVH.com (European API). It automatically retrieves invoice data and files and puts the data in a Google Spreadsheet for further processing.

What problem is this workflow solving? / use case

Currently, invoices from OVH do not come as a mail attachment; the mail only contains a link. So the receiver has to be logged in to the OVH account to download the file, and it is even more effort if one is using 2FA. This workflow retrieves all information through the OAuth2 token.

What this workflow does

This workflow automatically retrieves invoice data and files from your OVH.com account and puts the data in a Google Spreadsheet for further processing. It also saves the invoice PDF to a yearly folder in your Google Drive.

Setup

1. Make a copy of this Google Sheet Template.
2. Set the timeframe for the query to your liking in "Query Latest OVH Invoices". You could set an email trigger before and make the frame only one day.
3. Log into your OVH account and get your credentials here. Authentication uses oAuth2 Authorization Code "Login with OVHcloud SSO". You need to authorize the OVHcloud API console. If this worked fine, you'll see a green text: "Access Token Received".
4. Head over to the OVH API Console to get your token.
5. Set up Header Auth in the HTTP nodes (see the credential sketch after this guide):
   - Authentication = Generic Credential Type
   - Generic Auth Type = Header Auth
   - Header Auth = Your OVH Header Credentials:
     - a.) In every API call in the console you'll find a curl example; just take the data from the line including: -H "authorization: Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"
     - b.) Create a new credential in n8n for the header auth. Put authorization in the 'name' field, and copy your token including Bearer into the value field: 'Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......'

How to customize this workflow to your needs

- You can put in a mail trigger that activates on every incoming invoice mail from OVH.
- Adjust the timeframe to get invoices from a certain time period, or remove the time variables completely to get ALL invoices.
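Put together, the resulting n8n Header Auth credential holds exactly these two fields (the token value is the placeholder from the guide above; paste your own token from the OVH API Console):

```json
{
  "name": "authorization",
  "value": "Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"
}
```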
by Danielle Gomes
Automatically classify incoming leads based on the sentiment of their message using Google Gemini, store them in Supabase by category, and send tailored WhatsApp messages via the official WhatsApp Cloud API.

✅ Use Case:

This workflow is ideal for sales, onboarding, and customer support teams who want to:

- Understand the tone and urgency of each lead
- Prioritize hot leads instantly
- Send smart, automatic WhatsApp replies based on user sentiment

🧠 How it works:

1. Capture lead via a Typeform webhook
2. Clean and structure the data (name, email, message, etc.)
3. Run sentiment analysis using Google Gemini to classify the message (see the sketch at the end of this description) as:
   - Positive → Hot Lead
   - Neutral → Warm Lead
   - Negative → Cold Lead
4. Store lead data in Supabase under the corresponding category
5. Merge data to unify flow paths
6. Send WhatsApp message using the official WhatsApp Cloud API, with a custom reply for each sentiment result

🔧 Tools used:

- Typeform (incoming data)
- Google Gemini (AI-based sentiment classification)
- Supabase (database)
- WhatsApp Cloud API (response automation)

🏷 Tags:

AI, Sentiment Analysis, Lead Qualification, Supabase, WhatsApp, Gemini, Typeform, CRM, Automation, Customer Engagement
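To make the classification step concrete, the Gemini prompt can be instructed to return a small structured result that downstream nodes route on. This is a sketch; the field names are illustrative, not the template's actual schema:

```json
{
  "sentiment": "positive",
  "lead_category": "hot",
  "suggested_reply": "Thanks for reaching out! We'd love to get you started right away."
}
```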
by Alex Kim
🎬 Google Veo 3 Prompt and Video Generator via Leonardo.ai + Claude 4

Transform text descriptions into cinematic videos using Google's Veo 3 model through Leonardo.ai's platform!

🚀 What This Workflow Does

This advanced automation pipeline takes your creative ideas and turns them into professional-quality videos using Google's powerful Veo 3 model (accessed via Leonardo.ai), enhanced by Claude 4's sophisticated prompt engineering.

✨ Key Features

- 🤖 **AI-Powered Prompt Enhancement**: Uses Claude 4 Sonnet with Wikipedia integration to craft optimal Google Veo 3 prompts
- 🎥 **Professional Video Generation**: Leverages Google's Veo 3 model through Leonardo.ai for high-quality text-to-video conversion
- ☁️ **Automatic Cloud Storage**: Videos are automatically saved to your Google Drive
- 📋 **Structured Prompting**: Follows Google Veo 3 best practices with 8 essential elements (Subject, Context, Action, Style, Camera Motion, Composition, Ambiance, Audio)
- ⚡ **Hands-Off Processing**: Set it and forget it - the workflow handles the entire pipeline

🔧 How It Works

1. Input Your Concept - Describe your video idea in the "Video Context" node
2. AI Enhancement - Claude 4 transforms your description into a cinematic Google Veo 3 prompt using advanced techniques
3. Video Generation - Google's Veo 3 model (via Leonardo.ai) creates your video (720p resolution, ~8 seconds)
4. Smart Waiting - 4-minute processing buffer ensures completion
5. Auto-Download - Retrieves the finished video from Leonardo's servers
6. Cloud Storage - Uploads directly to your Google Drive folder

💡 Perfect For

- **Content Creators** looking to automate video production
- **Marketing Teams** needing quick promotional videos
- **Educators** creating engaging visual content
- **Social Media Managers** generating scroll-stopping content
- **Creative Professionals** exploring AI-assisted filmmaking

📋 Requirements

- Leonardo AI account with API access
- Anthropic API key (Claude 4 Sonnet)
- Google Drive integration
- n8n instance (cloud or self-hosted)

👨‍💻 About the Creator

Created by: AlexK1919 - AI-Native Workflow Automation Architect, n8n Ambassador and Verified Partner, Co-Founder @ WotAI

If you'd like to review more Google Veo 3 prompts organized by business category, check out 9,000+ free, pre-made prompts at: Google Veo 3 Prompts

📄 License

This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license.
Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/

🎯 Example Output

Input: "Star Wars stormtrooper digging for uranium in desert, saying something funny"

The AI generates a structured prompt with:

- **Subject**: Detailed character description
- **Context**: Desert environment specifics
- **Action**: Dynamic digging movements
- **Style**: Cinematic vlog aesthetic
- **Camera**: Appropriate angles and movement
- **Audio**: Dialogue, sound effects, and music

A sketch of what such a structured prompt might look like appears at the end of this description.

⚙️ Setup Notes

- **Character Limit**: Prompts are optimized for Leonardo's 1,500 character API limit
- **Processing Time**: Allow 4+ minutes for Google Veo 3 video generation
- **Quality**: 720p resolution with native audio generation
- **Consistency**: Uses advanced Google Veo 3 prompting for reliable results

🔄 Customization Options

- Modify the prompt engineering system message for different styles
- Adjust video resolution and model parameters
- Change storage destination (Google Drive folder)
- Add post-processing steps or notifications

📈 Why This Workflow Rocks

Unlike simple text-to-video tools, this workflow:

- **Intelligently enhances** your prompts using AI for Google Veo 3
- **Follows industry best practices** for Google Veo 3 prompting
- **Automates the entire pipeline** from idea to stored video
- **Leverages multiple AI models** for superior results
- **Handles technical details** like API limits and timing

🚨 Pro Tips

- Be specific in your initial context - detail creates better videos
- The workflow includes comprehensive Google Veo 3 prompting guidelines
- Videos are typically 5-8 seconds - plan accordingly for longer content
- Experiment with different styles and camera movements optimized for Veo 3
- The AI can access Wikipedia for factual enhancement

Ready to revolutionize your video creation process? Import this workflow and start generating professional videos with just a text description! Perfect for anyone looking to harness the power of AI for content creation.

Tags: #veo3 #GoogleVeo3 #AI #VideoGeneration #Leonardo #Claude #Automation #ContentCreation #GoogleAI
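To illustrate the Example Output above, the eight prompt elements for the stormtrooper input might be drafted like this before Claude flattens them into the final text prompt sent to Leonardo. All values are hypothetical, written only to show the structure:

```json
{
  "subject": "A white-armored stormtrooper, plating scuffed with sand",
  "context": "a vast desert canyon at golden hour",
  "action": "digging energetically with a shovel, pausing to deliver a deadpan one-liner",
  "style": "cinematic vlog aesthetic with shallow depth of field",
  "camera_motion": "slow handheld push-in",
  "composition": "medium shot, subject slightly off-center",
  "ambiance": "warm orange light, drifting dust",
  "audio": "dialogue, shovel scrapes, light comedic score"
}
```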
by Pavel Zamorev
This n8n template automates the transformation of raw meeting notes into structured tasks and documents using GPT (or another model), syncing them to Notion and TickTick via a Telegram bot.

Use Cases

- Automate note-taking and formatting for daily standups, brainstorming sessions, or client calls.
- Reduce cognitive load by eliminating manual tracking of ideas and tedious formatting.
- Convert discussions into actionable tasks instantly with TickTick and structured notes in Notion.

How It Works

1. Capture Notes: Send raw meeting notes to a Telegram bot.
2. AI Processing: The workflow sends the text to AI, which:
   - Removes duplicates and extracts key points.
   - Formats content into structured Markdown notes for Notion.
   - Identifies tasks with deadlines (e.g., "- Prepare presentation (Responsible: John, Deadline: Friday)").
3. Task Parsing: Extracts task titles, removing metadata like "Responsible" and "Deadline."
4. Review & Edit: The bot returns formatted notes and tasks for review in Telegram.
5. Sync & Publish: Notes are published to a Notion database. Tasks are exported to TickTick via API.
6. Confirmation: A Telegram reaction (e.g., 👌 emoji) confirms successful processing.

Setup Instructions

1. Set Up Telegram Bot: Create a Telegram bot via BotFather and obtain an API token. Add the token to the "Telegram Trigger" and "Send-Edited-Notes" nodes under credentials (telegramApi).
2. Configure OpenAI: Obtain an OpenAI API key and add it to the "Edit-Notes" node (openAiApi credentials). Ensure the model is set to gpt-4.1-mini in the node parameters.
3. Set Up Notion: Create a Notion database for notes (e.g., "Meetings"). Add the database ID to the "Create a Database Page" node (databaseId). Configure Notion API credentials (notionApi) in the node.
4. Set Up TickTick: Obtain a TickTick API key and add it to the "Create a Task" node (tickTickOAuth2Api credentials). Specify your TickTick project ID in the node (projectId).
5. Deploy Workflow: Ensure your n8n instance is self-hosted to support community nodes (TickTick, Notion). Activate the workflow in n8n.
6. Test: Send a test message to the Telegram bot (e.g., "Discussed project timeline. Tasks: - Prepare slides (Responsible: Alice, Deadline: Friday)"). Verify that notes appear in Notion, tasks in TickTick, and a 👌 reaction in Telegram.

Configuration Examples

Telegram Trigger:

```json
{
  "parameters": {
    "updates": ["message"],
    "additionalFields": {}
  },
  "credentials": {
    "telegramApi": {
      "id": "your-telegram-api-id",
      "name": "meeting notes"
    }
  }
}
```

OpenAI Prompt (in "Edit-Notes" node):

```
Analyze the quick meeting notes from {{ $json.message.text }}
Generate meeting notes and a task list in the following format:
Meeting Notes:
- [Note 1]
- [Note 2]

Tasks:
- [Task 1]
- [Task 2]
```

Notion Database Page:

```json
{
  "parameters": {
    "resource": "databasePage",
    "databaseId": "your-notion-database-id",
    "title": "MN {{ $now }}",
    "blockUi": {
      "blockValues": [
        {
          "textContent": "{{ $json.message.text }}"
        }
      ]
    }
  }
}
```

A sketch of the TickTick "Create a Task" node appears at the end of this description.

Requirements

- Requires an OpenAI API key (or another model).
- APIs: Pre-configured Notion and TickTick API credentials are required. The template includes setup guides.
- Setup: Uses community nodes, requiring a self-hosted n8n instance.

Customizing This Workflow

- Replace the Telegram bot with a webhook or form for alternative inputs (e.g., mobile apps).
- Modify the OpenAI prompt in the "Edit-Notes" node to customize note and task formats.
- Add filters in the "Split Notes and Tasks" node to prioritize tasks (e.g., ++#urgent++).
- Integrate Google Calendar via an additional HTTP Request node to auto-set deadlines based on text (e.g., "by Friday").
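The one node missing from the Configuration Examples above is the TickTick "Create a Task" node. A hedged sketch follows, assuming parameter names beyond the projectId and tickTickOAuth2Api credentials mentioned in the setup steps; the task title field is illustrative:

```json
{
  "parameters": {
    "operation": "create",
    "projectId": "your-ticktick-project-id",
    "title": "={{ $json.taskTitle }}"
  },
  "credentials": {
    "tickTickOAuth2Api": {
      "id": "your-ticktick-api-id",
      "name": "ticktick"
    }
  }
}
```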
by Adam Janes
How it works

- The automation loads rows from a Google Sheet of leads that you want to contact.
- It makes a Google search via Apify for LinkedIn links based on the First name / Last name / Company.
- Another Apify actor fetches the right LinkedIn profile based on the first profile which is returned.
- The same process is done for the company that the lead works for, giving extra context. If the lead has a current company listed on their LinkedIn, we use that URL to do the lookup, rather than doing a separate Google search.
- A call is made to OpenRouter to get an LLM to generate an email based on a prompt designed to do personalized outreach.
- An email is sent via a Gmail node.

Set up steps

1. Connect your Google Sheets + Gmail accounts to use these APIs.
2. Make an account with Apify and enter your credentials.
3. Set your details in the "Set My Data" node to customize the workflow to revolve around your company + value proposition (see the sketch after these steps).
4. I would recommend changing the prompt in the "Generate Personalized Email" node to match the tone of voice that you want your agent to have. You can change the guidelines to e.g. change whether the agent introduces itself, and give more examples in the style you want to make the output better.
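A sketch of what the "Set My Data" node might hold, using n8n's Set-node assignment structure; the field names and values are illustrative placeholders, not the template's exact keys:

```json
{
  "parameters": {
    "assignments": {
      "assignments": [
        { "name": "company_name", "value": "Acme Analytics", "type": "string" },
        { "name": "value_proposition", "value": "We cut weekly reporting time from days to minutes.", "type": "string" },
        { "name": "sender_name", "value": "Adam", "type": "string" }
      ]
    }
  }
}
```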
by Ranjan Dailata
Who is this for?

Extract & Summarize Yelp Business Reviews is an automated workflow that extracts Yelp business reviews using Bright Data Web Unlocker, processes and formats the raw data, summarizes it using Google Gemini's LLM, and forwards the concise summary with the review response to a specified webhook endpoint.

This workflow is tailored for:

- Local SEO Specialists who need structured insights from Yelp reviews to optimize listings.
- Business Owners wanting quick summaries of what customers love or complain about.
- Reputation Managers who monitor brand sentiment and identify customer pain points.
- Data Analysts & Researchers extracting Yelp review patterns at scale.
- AI Product Builders needing clean Yelp review data as input for their LLMs or recommender systems.

What problem is this workflow solving?

Yelp reviews are rich in customer sentiment but messy to work with manually. This workflow solves:

- The pain of scraping Yelp review content manually.
- The challenge of building structured data with the summary.
- The need for structured outputs suitable for analysis, reports, or AI input.

What this workflow does

This automated pipeline does the following:

- **Bright Data Integration**: Queries Yelp and scrapes business listing data using Bright Data's Web Unlocker (see the request sketch at the end of this description).
- **Structured Data Formatting**: Formats the Yelp review data into a structured JSON response.
- **Google Gemini Summarization**: Sends the cleaned reviews to Google Gemini to produce a concise summary.
- **Output Delivery**: Returns the structured response with the concise summary over the webhook endpoint.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
5. Update the Yelp Business Review URL with the Bright Data zone by navigating to the Set Yelp URL with the Bright Data Zone node.
6. Update the Webhook Notifier for the merged response node with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Target Specific Business Categories**: Update the Yelp Business Review input to scrape different businesses like gyms, salons, etc.
- **Limit Reviews**: Add filters by description, location, or page range to get the top reviews.
- **Tweak the Data Extraction Node**: Update the Structured Data Extractor node's Output Parser to build the JSON response with the appropriate fields or attributes.
- **Tweak the Summarization Prompt**: Modify the Gemini prompt to generate a more comprehensive summary.
- **Send Output to Other Destinations**: Replace the webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints.
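As a sketch of the Bright Data call, a Web Unlocker request typically posts a JSON body like the following, with the Bearer token from step 3 in the authorization header. The zone name and target URL here are placeholders to replace with your own:

```json
{
  "zone": "your_web_unlocker_zone",
  "url": "https://www.yelp.com/biz/example-business-san-francisco",
  "format": "raw"
}
```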
by Jimleuk
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document, especially when it comes to tables and complex layouts.

Multimodal parsing is better than traditional OCR because:

- It reduces complexity and overhead by avoiding the need to preprocess the document into a text format such as markdown before passing it to the LLM.
- It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
- It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!

How it works

You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing

- A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and mistake some deposits for withdrawals.
- Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this by using a tool such as Stirling PDF, which is self-hostable - handy for sensitive data such as bank statements (see the request sketch at the end of this description).
- Stirling PDF will return our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images and ensure they are ordered by using the Sort node.
- Next, we'll resize each page using the Edit Image node to ensure the right balance between resolution limits and processing speed.
- Each resized page image is then passed into the Basic LLM node, which will use our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data), which is how we add our image data as an input.
- Our prompt will instruct the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
- Finally, with our markdown version of all pages, we can pass this to another LLM node to extract required data such as deposit line items.

Requirements

- Google Gemini API for the multimodal LLM.
- Google Drive access for document storage.
- Stirling PDF instance for PDF-to-image conversion.

Customising the workflow

- At time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.
- If you don't need the markdown, simply asking what to extract directly in the LLM's prompt is also acceptable and would save a few extra steps.
- Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
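A sketch of the Stirling PDF conversion step as an n8n HTTP Request node. The endpoint path and form field names here are assumptions to verify against your Stirling PDF instance's API docs, and the host is a placeholder:

```json
{
  "parameters": {
    "method": "POST",
    "url": "http://your-stirling-pdf-host:8080/api/v1/convert/pdf/img",
    "contentType": "multipart-form-data",
    "sendBody": true,
    "bodyParameters": {
      "parameters": [
        {
          "parameterType": "formBinaryData",
          "name": "fileInput",
          "inputDataFieldName": "data"
        },
        {
          "name": "imageFormat",
          "value": "jpg"
        }
      ]
    }
  }
}
```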