
ViscondeRunPipeline



Overview

The ViscondeRunPipeline class is an implementation of the Visconde pipeline. For details, see the Visconde paper.

The Visconde pipeline performs three tasks: decompose, retrieve, and aggregate. It is designed for answering multi-hop questions, so it is effective for real-world questions that require checking multiple passages.
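The decompose-retrieve-aggregate flow can be illustrated with a minimal, self-contained sketch. Everything below (the toy corpus, `decompose`, `retrieve`, `aggregate`) is a hypothetical stand-in for illustration only, not the RAGchain API; in the real pipeline, decomposition and aggregation are done by an LLM and retrieval by a module such as BM25Retrieval.

```python
# A minimal sketch of the decompose -> retrieve -> aggregate flow.
# All names here are hypothetical stand-ins, not the RAGchain API.

TOY_CORPUS = {
    "alan turing": "Alan Turing was born in 1912.",
    "world war ii": "World War II ended in 1945.",
}

def decompose(question: str) -> list:
    # Real Visconde asks an LLM to split a multi-hop question into sub-questions.
    return ["When was Alan Turing born?", "When did World War II end?"]

def retrieve(sub_question: str) -> str:
    # Real Visconde queries a retrieval module (e.g. BM25) per sub-question.
    for key, passage in TOY_CORPUS.items():
        if key.split()[0] in sub_question.lower():
            return passage
    return ""

def aggregate(question: str, passages: list) -> str:
    # Real Visconde feeds all retrieved passages to an LLM with a few-shot
    # prompt; here we just concatenate them.
    return " ".join(passages)

question = "Was Alan Turing born before World War II ended?"
sub_questions = decompose(question)
passages = [retrieve(sq) for sq in sub_questions]
print(aggregate(question, passages))
```

Because each sub-question is retrieved separately, evidence for every hop of the question ends up in the final aggregation step, which is what makes the approach effective on multi-hop questions.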

Usage

Initialize

To create an instance of ViscondeRunPipeline, you need to provide a Retrieval instance and an LLM module to generate the answer. Optionally, you can specify an instance of the query decomposition module, a custom prompt, other retrieval options, and the number of passages to use for generation. Note that you can't use a chat model with this pipeline. It comes with a default prompt for StrategyQA-style multi-hop questions; if you want to use other few-shot prompts, change the prompt using PromptTemplate.

from RAGchain.pipeline import ViscondeRunPipeline
from RAGchain.retrieval import BM25Retrieval
from langchain.llms.openai import OpenAI

retrieval = BM25Retrieval(save_path="path/to/your/bm25/save_path")
pipeline = ViscondeRunPipeline(retrieval, OpenAI(model_name="babbage-002"))

Ask

You can ask a question to the LLM and get an answer using the invoke method. You can also use other LCEL methods such as stream or batch.

question = "Do a reranker and a retriever have the same role?"
answer = pipeline.run.invoke({"question": question})
print(answer)

If you want to get the used passages or the relevance scores of the retrieved passages, you can use the get_passages_and_run method.

answers, passages, scores = pipeline.get_passages_and_run([question])