RerankRunPipeline


Last updated 1 year ago

Overview

The RerankRunPipeline class makes it easy to use a reranker: you can use it without any extra work. For further information about rerankers, please check out the Reranker documentation.

In this pipeline, the retrieval first retrieves passages, then the reranker reranks the retrieved passages, and finally the reranked passages are passed to the LLM.
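The retrieve → rerank → generate flow can be sketched in plain Python. This is an illustrative stand-in, not the RAGchain implementation: the function names and signatures here are hypothetical.

```python
from typing import Callable, List

def rerank_run(
    question: str,
    retrieve: Callable[[str, int], List[str]],   # retrieval step
    score: Callable[[str, str], float],          # reranker's relevance score
    generate: Callable[[str, List[str]], str],   # LLM answer generation
    top_k: int = 5,
) -> str:
    # 1. Retrieval fetches candidate passages for the question.
    passages = retrieve(question, top_k)
    # 2. The reranker re-orders candidates by relevance to the question.
    reranked = sorted(passages, key=lambda p: score(question, p), reverse=True)
    # 3. The reranked passages are handed to the LLM to produce an answer.
    return generate(question, reranked)
```

RerankRunPipeline wires these three stages together for you, so you only supply the retrieval, reranker, and LLM components.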

Usage

Initialize

To create an instance of RerankRunPipeline, you need to provide an instance of a Retrieval class and a Reranker class, along with the LLM you want to use.

from langchain.llms import OpenAI

from RAGchain.pipeline import RerankRunPipeline
from RAGchain.retrieval import BM25Retrieval
from RAGchain.reranker import MonoT5Reranker

retrieval = BM25Retrieval(save_path="path/to/your/bm25/save_path")
reranker = MonoT5Reranker()
pipeline = RerankRunPipeline(retrieval=retrieval, reranker=reranker, llm=OpenAI())

Run Pipeline

The run attribute is a LangChain LCEL Runnable, so you can use all Runnable methods for running the pipeline, such as invoke, stream, and batch.

answer = pipeline.run.invoke({"question": "Where is the capital of Korea?"})

If you also want the retrieved passages or their relevance scores, you can use the get_passages_and_run method.

answers, passages, scores = pipeline.get_passages_and_run(["Where is the capital of Korea?"])
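The method returns three parallel lists, one entry per question. As a hypothetical illustration (the answer, passage strings, and scores below are made up, and the actual RAGchain return types may differ), you can zip them together to pair each passage with its score:

```python
questions = ["Where is the capital of Korea?"]

# Suppose get_passages_and_run returned parallel lists like these:
answers = ["Seoul"]
passages = [["passage A", "passage B"]]
scores = [[0.92, 0.31]]

results = []
for question, answer, ps, ss in zip(questions, answers, passages, scores):
    # Pair each passage with its relevance score, highest-scoring first.
    ranked = sorted(zip(ps, ss), key=lambda pair: pair[1], reverse=True)
    results.append((question, answer, ranked))
```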