Auto Evaluator



Overview

The AutoEvaluator class in the RAGchain framework evaluates a pipeline without ground-truth answers: you only pass your pipeline and a list of questions. It reports context precision, which measures retrieval performance, along with answer relevancy and faithfulness, which measure the answer-generation performance of your pipeline.
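As a rough intuition for what these metrics capture, the sketch below computes simplified ratio versions of context precision and faithfulness. This is an illustration only, not the actual RAGchain or ragas implementation: in AutoEvaluator the relevance and support judgments are produced by an LLM at evaluation time, not supplied as labels the way they are here.

```python
# Illustrative sketch only -- NOT the real implementation.
# Both metrics reduce to "how many items pass a judgment, out of all items";
# AutoEvaluator obtains those judgments from an LLM instead of fixed labels.

def context_precision(retrieved: list[str], judged_relevant: set[str]) -> float:
    """Fraction of retrieved passages judged relevant to the question."""
    if not retrieved:
        return 0.0
    return sum(passage in judged_relevant for passage in retrieved) / len(retrieved)

def faithfulness(answer_claims: list[str], judged_supported: set[str]) -> float:
    """Fraction of claims in the answer judged supported by the retrieved context."""
    if not answer_claims:
        return 0.0
    return sum(claim in judged_supported for claim in answer_claims) / len(answer_claims)

retrieved = ["doc1", "doc2", "doc3", "doc4"]
print(context_precision(retrieved, {"doc1", "doc3"}))  # 0.5 (2 of 4 relevant)

claims = ["claim A", "claim B"]
print(faithfulness(claims, {"claim A"}))  # 0.5 (1 of 2 supported)
```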

Example Use

from RAGchain.benchmark import AutoEvaluator

pipeline = <your pipeline>
questions: list[str] = <your list of questions>

evaluator = AutoEvaluator(pipeline, questions)
result = evaluator.evaluate()

# print result summary (mean values)
print(result.results)

# print result DataFrame
print(result.each_results)
These metrics are computed using the ragas library.