by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Similarity" which in this scenario, measures the consistency of the agent. The scoring approach is adapted from the open-source evaluations project RAGAS and you can see the source here https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_similarity.py How it works This evaluation works best where questions are close-ended or about facts where the answer can have little to no deviation. For our scoring, we generate embeddings for both the AI's response and ground truth and calculate the cosine similarity between them. A high score indicates LLM consistency with expected results whereas a low score could signal model hallucination. Requirements n8n version 1.94+ Check out this Google Sheet for a sample data https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Summarization" which in this scenario, measures the LLM's accuracy and faithfulness in producing summaries which are based on an incoming Youtube transcript. The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_summarization_quality How it works This evaluation works best for an AI summarization workflows. For our scoring, we simple compare the generated response to the original transcript. A key factor is to look out information in the response which is not mentioned in the documents. A high score indicates LLM adherence and alignment whereas a low score could signal inadequate prompt or model hallucination. Requirements n8n version 1.94+ Check out this Google Sheet for a sample data https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
by Roninimous
This n8n workflow leverages a Telegram Message Trigger to activate an intelligent AI Agent capable of processing both text and voice messages. When a user sends a message as text or as a voice note, the workflow captures and, if necessary, transcribes it, then passes it to the AI Agent for understanding and response generation. To enhance the user experience, the bot also displays a typing indicator while processing requests, simulating a natural, human-like interaction.

Key Features
- Multi-Modal Input: Supports both text messages and voice notes from users.
- Real-Time Interaction: Shows a "typing…" action in Telegram while the AI processes the input.
- AI Agent Integration: Provides intelligent, context-aware, conversational responses.
- Seamless Feedback Loop: Replies are sent directly back to the user within Telegram for smooth interaction.

How It Works
The workflow triggers whenever a message or voice note is received on Telegram. If the input is a voice note, the workflow transcribes it into text. The text input is sent to the AI Agent for processing. While processing, the bot sends a typing indicator to the user (see the sketch after this description). Once the AI generates a response, the workflow sends it back to the user in Telegram.

Setup Instructions
- Create a Telegram Bot: Use @BotFather to create a bot and obtain your bot token.
- Configure n8n Credentials: Add Telegram API credentials in n8n with your bot token. Add credentials for any speech-to-text service used for voice transcription (e.g. OpenAI's Transcribe a Recording operation).
- Import the Workflow: Import this workflow into your n8n instance and update all credential nodes to use your Telegram and transcription service credentials.
- Set Webhook URLs: Ensure the Telegram webhook is set properly so your bot can receive messages, and make sure your n8n instance is publicly accessible for Telegram callbacks.
- Test the Workflow: Send text messages and voice notes to your Telegram bot and observe the AI responses.

Customization Guidance
- Add new message handlers: Extend the workflow to handle additional message types (images, documents, etc.).
- Improve transcription: Swap or add speech-to-text services for better accuracy or language support.
- Enhance the AI Agent: Customize prompts and context management to tailor the AI's personality and responses.
- AI Model Flexibility: Swap between different AI models (e.g. GPT-4, Claude, or custom LLMs) based on task type, cost, or performance preferences.
- Tool-Based Control: Add custom tools to the AI Agent such as calendar access, Notion, Google Sheets, web search, database queries, or custom APIs, allowing for dynamic, multi-functional agents.

Security and Implementation Notes
The Telegram node manages message reception and sending but does not directly handle AI processing. Voice transcription requires integration with external APIs; secure those credentials in n8n and monitor usage. To simulate typing, the workflow uses Telegram's "sendChatAction" API method, giving users feedback that the bot is processing. Ensure your AI API keys and Telegram tokens are securely stored in n8n credentials and not exposed in workflows or logs.

Benefits
- Handles natural conversational inputs as text or voice.
- Provides a smooth, engaging user experience via typing indicators.
- Easy integration of advanced AI conversational agents with Telegram.
- Flexible for personal assistants, helpdesks, or interactive chatbots.
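The typing indicator is a single Telegram Bot API call. A minimal sketch, assuming a plain HTTP request outside of n8n's Telegram node (the bot token and chat ID are placeholders):

```typescript
// Show "typing…" in a Telegram chat via the Bot API's sendChatAction method.
// botToken and chatId are placeholders; in the workflow a Telegram node performs this call.
async function sendTypingIndicator(botToken: string, chatId: number): Promise<void> {
  const res = await fetch(`https://api.telegram.org/bot${botToken}/sendChatAction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, action: "typing" }),
  });
  if (!res.ok) throw new Error(`sendChatAction failed: ${res.status}`);
}

// Telegram only shows the indicator for a few seconds, so re-send it while a long AI call is running.
```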
by Adam Janes
This workflow demonstrates a simple way to run evals on a set of test cases stored in a Google Sheet. The example comes from an info-extraction task dataset, where we tested 6 different LLMs on 18 different test cases. You can see our sample data in this spreadsheet to get started. Once you have this working for our dataset, you can plug in your own test cases matching different LLMs to see how it works with your own data.

How it works:
- It loads test cases from Google Sheets.
- For each row in the Google Sheet, it grabs the source document and converts it to text.
- Our "LLM judge" passes the input/output of each LLM to GPT-4.1 to evaluate each test case (Pass/Fail + Reason).
- It logs the outcome to a Google Sheet.
- A 0.5s pause between each request gets around OpenAI's API rate limits (see the sketch below).

Set up steps:
- Add your credentials for Google Sheets, Google Drive, and OpenRouter.
- Make a copy of the original data spreadsheet so that you can edit it yourself. You will need to plug your version into the Update Results node to see the spreadsheet update on each run of the loop.
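Conceptually, the loop is: judge one row, wait half a second, move on. A minimal sketch, assuming a hypothetical judgeTestCase helper in place of the workflow's LLM-judge node:

```typescript
// Sequential eval loop with a 0.5s pause between requests to stay under rate limits.
// judgeTestCase stands in for the workflow's LLM-judge step; it is an assumption for illustration.
interface TestCase { input: string; output: string; }
interface Verdict { pass: boolean; reason: string; }

declare function judgeTestCase(testCase: TestCase): Promise<Verdict>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function runEvals(testCases: TestCase[]): Promise<Verdict[]> {
  const results: Verdict[] = [];
  for (const testCase of testCases) {
    results.push(await judgeTestCase(testCase)); // Pass/Fail + reason per row
    await sleep(500);                            // 0.5s pause between requests
  }
  return results;
}
```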
by Ranjan Dailata
Who is this for?
Google SERP Tracker + Trends and Recommendations is an AI-powered n8n workflow that extracts Google search results via Bright Data, parses them into structured JSON using Google Gemini, and generates actionable recommendations and search trends. It outputs CSV reports and sends real-time Webhook notifications. This workflow is ideal for:
- SEO agencies needing automated rank and trend tracking
- Growth marketers seeking daily/weekly search-based insights
- Product teams monitoring brand or competitor visibility
- Market researchers performing search behavior analysis
- No-code builders automating search intelligence workflows

What problem is this workflow solving?
Traditional tracking of search engine rankings and search trends is often fragmented and manual. Analyzing SERP changes and trends requires manual extraction or unstable scrapers, dealing with unstructured or cluttered HTML data, and still leaves you without actionable insights or recommendations. This workflow solves the problem by:
- Automating real-time Google SERP data extraction using Bright Data
- Structuring unstructured search data using the Google Gemini LLM
- Generating actionable recommendations and trends
- Exporting both CSV reports automatically to disk for downstream use
- Notifying external systems via Webhook

What this workflow does
- Accepts a search input, zone name, and webhook notification URL
- Uses Bright Data to extract Google search results
- Uses the Google Gemini LLM to parse the SERP data into structured JSON
- Loops over the structured results to extract recommendations and trends
- Saves both as .csv files, for example:
  Google_SERP_Recommendations_Response_2025-06-10T23-01-50-650Z.csv
  Google_SERP_Trends_Response_2025-06-10T23-01-38-915Z.csv
- Sends a Webhook with the summary or file reference

LLM Usage
Google Gemini handles parsing the Google Search HTML into structured JSON, summarizing the recommendation data, and deriving trends from the extracted SERP metadata.

Setup
- Sign up at Bright Data. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token (see the request sketch below).
- Provide a Google Gemini API key (or access through Vertex AI or a proxy).
- Update the Set input fields with the search criteria, the Bright Data zone name, and the webhook notification URL.

How to customize this workflow to your needs
- Input customization: Set your target keyword/phrase in the search field and add your webhook_notification_url for external triggers or notifications.
- SERP source: You can extend the Bright Data search logic to include other engines such as Bing or DuckDuckGo.
- Output format: Edit the .csv structure in the Convert to File nodes if you want to include or exclude specific columns.
- LLM prompt tuning: The Gemini prompts inside the Recommendation and Trends extractor nodes can be fine-tuned for domain-specific insight (e.g. SEO vs eCommerce focus).
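For orientation, here is a rough sketch of what the underlying Bright Data request looks like when sent directly over HTTP with a Bearer token. The endpoint path, payload fields, and zone name are assumptions for illustration; in the template the call is made by an n8n HTTP Request node with Header Auth credentials, so verify the exact API shape against Bright Data's documentation.

```typescript
// Rough sketch of a Web Unlocker style request to Bright Data (endpoint and fields are assumptions).
// BRIGHT_DATA_TOKEN and the zone name are placeholders.
async function fetchSerp(query: string): Promise<string> {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "my_web_unlocker_zone",                                   // your Web Unlocker zone name
      url: `https://www.google.com/search?q=${encodeURIComponent(query)}`,
      format: "raw",                                                  // raw HTML, parsed by Gemini downstream
    }),
  });
  if (!res.ok) throw new Error(`Bright Data request failed: ${res.status}`);
  return res.text();
}
```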
by Tharwat Mohamed
Document-Aware WhatsApp AI Bot for Customer Support
Google Docs-Powered WhatsApp Support Agent
24/7 WhatsApp AI Assistant with Live Knowledge from Google Docs

📝 Template Description
Smart WhatsApp AI Assistant Using Google Docs
Help customers instantly on WhatsApp using a smart AI assistant that reads your company's internal knowledge from a Google Doc in real time. Built for clubs, restaurants, agencies, or any business where clients ask questions based on a policy, FAQ, or services document.

⚙️ How it works
Users send free-form questions to your WhatsApp Business number (e.g. "What are the gym rules?" or "Are you open today?"). The bot automatically reads your company's internal Google Doc (policy, schedule, etc.), merges the document content with today's date and the user's question to craft a custom AI prompt, and the AI (Gemini or ChatGPT) replies back on WhatsApp in natural, helpful language. All conversations are logged to Google Sheets for reporting or audit.

> 💡 Bonus: The AI even understands dates inside the document and compares them to today's date. For example, if your document says "Closed May 25 for 30 days," it will reply "We're currently closed until June 24." (See the date sketch after this description.)

🧰 Set up steps
- Connect your WhatsApp Cloud API account (Meta)
- Add your Google account and grant access to the Doc containing your company info
- Choose your AI model (ChatGPT/OpenAI or Gemini)
- Paste your document ID into the Google Docs node
- Connect your WhatsApp webhook to Meta (only takes 5 minutes)
- Done: start receiving and answering customer questions!

> 📄 Works best with free-tier OpenAI/Gemini, Google Docs, and Meta's Cloud API (no phone required). Everything is modular, extensible, and low-code.

🔄 Customization Tips
- Change the Google Doc anytime to update answers; no retraining needed
- Add your logo and business name in the AI agent's "System Prompt"
- Add fallback routes like "Escalate to human" if the bot can't help
- Clone for multiple brands by duplicating the workflow and swapping in new docs

🤝 Need Help Setting It Up?
If you'd like help connecting your WhatsApp Business API, setting up Google Docs access, or customizing this AI assistant for your business or clients, I offer setup, branding, and customization services:
- WhatsApp Cloud API setup & verification
- Google OAuth & Doc structure guidance
- AI model configuration (OpenAI / Gemini)
- Branding & prompt tone customization
- Logging, reporting, and escalation logic
Just send a message via:
Email: tharwat.elsayed2000@gmail.com
WhatsApp: +20 106 180 3236
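The date awareness mentioned above boils down to simple date arithmetic once the model (or a small helper) has extracted a start date and a duration. A minimal sketch, assuming the closure has already been parsed out of the document text:

```typescript
// Sketch: compare a closure period from the document ("Closed May 25 for 30 days")
// against today's date. The parsed start date and duration are assumed inputs, not real parsing logic.
function closureStatus(start: Date, durationDays: number, today: Date = new Date()): string {
  const end = new Date(start);
  end.setDate(end.getDate() + durationDays);
  if (today >= start && today < end) {
    return `We're currently closed until ${end.toLocaleDateString("en-US", { month: "long", day: "numeric" })}.`;
  }
  return "We're open as usual.";
}

// Example: closed May 25 for 30 days, checked on June 1 => "We're currently closed until June 24."
// (Months are zero-indexed in the Date constructor: 4 = May, 5 = June.)
console.log(closureStatus(new Date(2025, 4, 25), 30, new Date(2025, 5, 1)));
```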
by Adam Janes
This workflow demonstrates a simple way to run evals on a set of test cases stored in a Google Sheet. The example comes from an info-extraction task dataset, where we tested 6 different LLMs on 18 different test cases. This workflow extends the functionality of my simple eval for benchmarking legal tasks. Rather than running executions sequentially (waiting for each one to respond before making another request), we use parallel processing to fire 2 requests every second. You can see our sample data in this spreadsheet to get started. Once you have this working for our dataset, you can plug in your own test cases matching different LLMs to see how it works with your own data.

How it works
- Pull the test cases from Google Sheets.
- For each case, fire off an HTTP request to a webhook (see the batching sketch below).
- That webhook grabs the relevant source file from Google Drive and converts it to text.
- The text gets sent to an LLM via OpenRouter (so we can easily swap out models).
- Results come back and are logged in Google Sheets.

Set up steps:
- Add your credentials for Google Sheets, Google Drive, and OpenRouter.
- Make a copy of the original data spreadsheet so that you can edit it yourself. You will need to plug your version into the Update Results node to see the spreadsheet update on each run of the loop.
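The parallel pattern is essentially "take the rows two at a time, fire both requests at once, then wait a second". A minimal sketch, assuming a hypothetical webhook URL in place of the template's actual endpoint:

```typescript
// Fire eval requests in batches of 2 per second instead of strictly sequentially.
// WEBHOOK_URL and the TestCase fields are placeholders for the template's webhook and row data.
interface TestCase { id: string; fileId: string; model: string; }

const WEBHOOK_URL = "https://your-n8n-instance/webhook/run-eval";
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function runEvalsInParallel(testCases: TestCase[]): Promise<void> {
  for (let i = 0; i < testCases.length; i += 2) {
    const batch = testCases.slice(i, i + 2);
    // Both requests in the batch run concurrently.
    await Promise.all(
      batch.map((tc) =>
        fetch(WEBHOOK_URL, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(tc),
        }),
      ),
    );
    await sleep(1000); // roughly 2 requests per second overall
  }
}
```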
by n8n Team
This template quickly shows how to use RAG in n8n.

Who is this for?
This template is for everyone who wants to start giving knowledge to their Agents through RAG.

Requirements
Have a PDF with custom knowledge that you want to provide to your agent.

Setup
No setup required. Just hit Execute Workflow, upload your knowledge document and then start chatting.

How to customize this to your needs
- Add custom instructions to your Agent by changing the prompts in it.
- Add a different way to load in knowledge to your vector store, e.g. by looking at some Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own vector store tools ready for production.
- Add a more sophisticated way to rank files found in the vector store.
For more information read our docs on RAG in n8n.
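Under the hood, RAG here means: embed the document chunks once, embed the incoming question, and hand the closest chunks to the agent as context. Below is a toy in-memory sketch of that retrieval step; the embed helper and chunk data are assumptions for illustration, and in the template the vector store nodes handle all of this for you.

```typescript
// Toy retrieval step for RAG: rank stored chunks by cosine similarity to the question.
// embed() is a placeholder for an embeddings-model call; the chunk data is illustrative.
declare function embed(text: string): Promise<number[]>;

interface Chunk { text: string; vector: number[]; }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function retrieveTopK(question: string, chunks: Chunk[], k = 3): Promise<string[]> {
  const q = await embed(question);
  return chunks
    .map((c) => ({ text: c.text, score: cosine(q, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((c) => c.text); // these chunks become the agent's context
}
```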
by andsync
Who is this template for?
This template is for learners, researchers, students and professionals who want to quickly capture the essence of a YouTube video.

Steps in the workflow:
- Gets the transcript of any YouTube video through Supadata.
- Processes the result from Supadata into one text.
- Processes the text with AI (any LLM of your choice).
- Final result: a summary accompanied by the most important lessons and interesting facts mentioned in the video. The workflow automatically creates a new Google Doc with this output, in a folder of your choice on your Google Drive.

(If you want to convert the markdown text to real markup after the Google Doc is created: select all text (Ctrl-A or Cmd-A), cut the text (Ctrl-X or Cmd-X), and then go to Edit > Paste from Markdown.)

Setup
- Edit your Supadata credentials in the second node (you can start for free).
- Choose your favourite LLM for AI processing.
- Edit your Google Drive credentials.

How to adjust it to your needs
- If you want the outcome to be different, edit the prompt in the "Proces transcript to summary template" node.
- The file name is a combination of 'transcript' and the date and time. You can change this to whatever you need in the Google Drive node.
- Supadata offers more details and options (or even translation) when working with transcripts. Check the options here: https://supadata.ai/documentation/youtube/get-transcript
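If you are curious what the Supadata step amounts to outside of n8n, it is roughly a single authenticated GET. The endpoint path, query parameters, and response shape below are assumptions based on the get-transcript documentation linked above, so double-check them there before relying on this sketch.

```typescript
// Rough sketch of fetching a YouTube transcript from Supadata over HTTP (fields are assumptions).
async function getTranscript(videoUrl: string, apiKey: string): Promise<string> {
  const endpoint = `https://api.supadata.ai/v1/youtube/transcript?url=${encodeURIComponent(videoUrl)}&text=true`;
  const res = await fetch(endpoint, { headers: { "x-api-key": apiKey } });
  if (!res.ok) throw new Error(`Supadata request failed: ${res.status}`);
  const data = await res.json();
  // Assumed shape: either a plain "content" string or an array of caption segments.
  return typeof data.content === "string"
    ? data.content
    : (data.content ?? []).map((seg: { text: string }) => seg.text).join(" ");
}
```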
by Stefan
Automate LinkedIn engagement without sounding like a bot. This workflow:
🌍 Detects language & tone (German / English)
👍 Chooses the right reaction (like / celebrate / support …)
🗣 Generates a personalised comment in your voice and mentions the author
📲 Optional Telegram review: approve ✅ or regenerate ❌ before posting
💸 Runs on cost-efficient GPT-4o mini or Claude 3.5 Haiku
☁️ Publishes the comment + reaction via the Unipile API (see the sketch below)

Setup (≈ 15-30 min)
- Unipile: connect LinkedIn, copy account_id and dsn, then create an Access-Token (X-API-KEY).
- Telegram (optional): create a bot and add a credential named YOUR TELEGRAM ACCOUNT.
- OpenAI / Anthropic: add your API key and keep one LLM node (delete the other).
- Open the "Defining guardrails" node and replace the credential placeholders.
- (Optional) Tweak role, comment_length and openers_example_1-3 for your brand voice.

Security: no live keys included; all secrets are placeholders.
Best for: solopreneurs, marketing teams, personal-branding consultants.
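For context, a Unipile call is an HTTPS request against your account's dsn with the Access-Token in the X-API-KEY header. The endpoint path and payload below are placeholders for illustration only; the actual routes for commenting and reacting are defined in Unipile's API documentation and in the workflow's HTTP Request nodes.

```typescript
// Illustrative shape of a Unipile API call (the path and body are placeholders, not documented routes).
// dsn is the host you copied from Unipile; the access token goes in the X-API-KEY header.
async function unipileRequest(dsn: string, apiKey: string, path: string, body: unknown): Promise<unknown> {
  const res = await fetch(`https://${dsn}/api/v1/${path}`, {
    method: "POST",
    headers: { "X-API-KEY": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Unipile request failed: ${res.status}`);
  return res.json();
}

// Hypothetical usage: post the generated comment from the connected LinkedIn account.
// await unipileRequest(dsn, apiKey, "posts/POST_ID/comments", { account_id, text: comment });
```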
by Alex
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How It Works
This template orchestrates a multi-step workflow that constructs a comprehensive four-zone automation matrix (Green, Yellow, Red, and White) grounded in the Human Agency Scale (HAS). When a user sends a job title via Telegram, the workflow routes both text and voice messages appropriately: voice messages are transcribed via OpenAI's Whisper, while text inputs bypass transcription, and both streams merge into a single data flow. The AI Agent node, powered by GPT-4, analyzes the user's profession and core tasks. It also leverages live context by calling the Tavily search tool, ensuring the analysis incorporates up-to-date information. After the evaluation, the workflow formats the completed matrix, with detailed task examples and rationales for each zone, and returns it to the user via Telegram.

Setup Instructions
- Create an OpenAI credential in n8n (model: GPT-4.1 mini).
- Add a Tavily credential with your API key (a free plan is available).
- Configure a Telegram Bot credential with your bot's API token.
- Import this JSON as a new workflow in n8n and map credentials in each node.
- Activate the workflow, test by sending sample job titles, and adjust node timeouts and webhook settings as needed.

Requirements
- n8n v1.0.0 or higher
- Active OpenAI API key (GPT-4.1 mini access)
- Tavily API key for web context search
- Telegram Bot token with a correctly configured webhook
- Stable internet connectivity

Audience & Problem
This template is designed for consultants, HR professionals, and analysts who need a scalable, standardized approach to evaluate which routine tasks in a given profession can be automated, which require human oversight, and which should remain manual to preserve strategic judgment, creativity, and expertise.
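The live-context step is a single search API call that the agent makes as a tool. Here is a minimal sketch of a Tavily-style request; the field names reflect my understanding of Tavily's public API and should be verified against their docs, and in the template the Tavily tool node handles this call for you.

```typescript
// Sketch of the web-context lookup the agent performs via Tavily (field names assumed; verify in docs).
interface TavilyResult { title: string; url: string; content: string; }

async function searchWebContext(query: string, apiKey: string): Promise<TavilyResult[]> {
  const res = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: apiKey, query, max_results: 5 }),
  });
  if (!res.ok) throw new Error(`Tavily search failed: ${res.status}`);
  const data = await res.json();
  return data.results ?? []; // snippets the agent folds into the HAS matrix analysis
}
```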
by Yaron Been
Automated monitoring system that tracks startup activities, funding events, and company updates in real time, providing valuable market intelligence.

🚀 What It Does
- Real-time monitoring of startup activities
- Funding alerts and updates
- Competitor tracking
- Industry trend analysis
- Customizable watchlists

🎯 Perfect For
- Venture capitalists
- Startup founders
- Business development teams
- Market researchers
- Investment analysts

⚙️ Key Benefits
✅ Stay ahead of market movements
✅ Never miss important funding rounds
✅ Track competitor activities
✅ Identify emerging trends
✅ Save hours of manual research

🔧 What You Need
- Crunchbase API access
- n8n instance
- Notification preferences (email/Slack/Teams)

📊 Data Points Tracked
- New funding rounds
- Company updates
- Leadership changes
- Product launches
- Market expansions

🛠️ Setup & Support
Quick Setup: Deploy in 20 minutes with our step-by-step configuration guide.
📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Stay informed about the startup ecosystem with automated monitoring and alerts. Make data-driven decisions with timely, relevant information.