Evaluation Metric: Summarization
This n8n template demonstrates how to calculate the evaluation metric "Summarization", which in this scenario measures the LLM's accuracy and faithfulness in producing summaries of an incoming YouTube transcript.
The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_summarization_quality
How it works
This evaluation works best for AI summarization workflows. For scoring, we simply compare the generated response to the original transcript. A key factor is to look out for information in the response which is not mentioned in the source documents. A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination.
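To make the comparison step concrete, here is a minimal sketch of an LLM-as-judge scoring helper, loosely following the Vertex AI pointwise summarization-quality template linked above. The prompt wording, the 1–5 scale, and the JSON response format are assumptions for illustration, not the template's exact implementation; in the workflow itself, the model call would be handled by your chat-model node.

```typescript
// Sketch of a pointwise summarization-quality judge (assumed 1-5 scale).
// Build a judging prompt from the transcript and the generated summary,
// then parse the judge model's JSON response into a typed result.

interface SummarizationEval {
  score: number;      // 1 (poor) .. 5 (excellent) -- assumed scale
  explanation: string;
}

function buildJudgePrompt(transcript: string, summary: string): string {
  return [
    "You are evaluating the quality of a summary of a YouTube transcript.",
    "Criteria: the summary must be grounded in the transcript, be concise,",
    "and must not introduce information absent from the transcript.",
    "Rate the summary from 1 (poor) to 5 (excellent) and explain briefly.",
    'Respond as JSON: {"score": <1-5>, "explanation": "<reason>"}',
    "",
    `TRANSCRIPT:\n${transcript}`,
    "",
    `SUMMARY:\n${summary}`,
  ].join("\n");
}

function parseJudgeResponse(raw: string): SummarizationEval {
  const parsed = JSON.parse(raw) as SummarizationEval;
  if (parsed.score < 1 || parsed.score > 5) {
    throw new Error(`Score out of range: ${parsed.score}`);
  }
  return parsed;
}
```

In an evaluation run, the resulting score would typically be written back alongside each test case (for example, to the sample Google Sheet below) so that different prompts or models can be compared.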
Requirements
n8n version 1.94+
Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing