# ViscondeRunPipeline

## Overview

The `ViscondeRunPipeline` class is an implementation of the Visconde pipeline, described in [this paper](https://arxiv.org/abs/2212.09656).

The Visconde pipeline performs three tasks: decompose, retrieve, and aggregate. It uses [Query Decomposition](https://nomadamas.gitbook.io/ragchain-docs/utils/query-decomposition) to answer multi-hop questions, which makes it effective for real-world questions that require evidence from multiple passages.
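To see how the three stages fit together, here is an illustrative plain-Python sketch of the decompose → retrieve → aggregate flow. This is a toy, not the RAGchain implementation: the stub functions, the hard-coded sub-questions, and the tiny corpus are all made up for illustration.

```python
# Illustrative sketch of the Visconde flow: decompose -> retrieve -> aggregate.
# Every function and data structure here is a toy stand-in, not a RAGchain API.

def decompose(question: str) -> list[str]:
    # A real pipeline asks an LLM to split a multi-hop question into
    # sub-questions; here the sub-questions are hard-coded for illustration.
    return [
        "What does a retriever do?",
        "What does a reranker do?",
    ]

def retrieve(sub_question: str, corpus: dict[str, str], top_k: int = 1) -> list[str]:
    # Toy retrieval: rank passages by word overlap with the sub-question.
    words = set(sub_question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [passage for _, passage in scored[:top_k]]

def aggregate(question: str, passages: list[str]) -> str:
    # A real pipeline prompts an LLM with the question plus the retrieved
    # evidence; here we simply join the evidence into one string.
    return f"Q: {question}\nEvidence: " + " | ".join(passages)

corpus = {
    "p1": "A retriever finds candidate passages for a query",
    "p2": "A reranker reorders retrieved passages by relevance",
}
question = "Do retrievers and rerankers have the same role?"
evidence = [p for sq in decompose(question) for p in retrieve(sq, corpus)]
answer = aggregate(question, evidence)
```

Each sub-question retrieves its own evidence, and the aggregation step sees all of it at once; that is what lets the pipeline answer questions no single passage covers.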

## Usage

#### Initialize

To create an instance of `ViscondeRunPipeline`, you need to provide an instance of a [`Retrieval`](https://github.com/NomaDamas/RAGchain-docs/blob/main/retrieval/README.md) class and an LLM module to generate answers. Optionally, you can specify an instance of the [query decomposition module](https://nomadamas.gitbook.io/ragchain-docs/utils/query-decomposition), a custom prompt, other retrieval options, and the number of passages used for generation. Note that you cannot use a chat model with this pipeline. The default prompt targets StrategyQA-style multi-hop questions; if you want to use other few-shot prompts, supply your own `PromptTemplate`.

```python
from RAGchain.pipeline import ViscondeRunPipeline
from RAGchain.retrieval import BM25Retrieval
from langchain.llms.openai import OpenAI

retrieval = BM25Retrieval(save_path="path/to/your/bm25/save_path")
pipeline = ViscondeRunPipeline(retrieval, OpenAI(model_name="babbage-002"))
```

#### Ask

You can ask the LLM a question and get an answer using the `invoke` method. You can also use other LCEL methods, such as `stream` or `batch`.

```python
question = "Do rerankers and retrievers have the same role?"
answer = pipeline.run.invoke({"question": question})
print(answer)
```

If you want to get the retrieved passages or their relevance scores, use the `get_passages_and_run` method.

```python
answers, passages, scores = pipeline.get_passages_and_run([question])
```
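The passages and scores come back as parallel sequences, so a common follow-up step is to pair them and sort by relevance. A minimal plain-Python sketch of that post-processing (the passage strings and score values below are made-up stand-ins, not real pipeline output):

```python
# Illustrative post-processing: pair parallel passage/score lists and
# sort by relevance score, descending. The data here is made up.
passages = ["passage about rerankers", "passage about retrievers"]
scores = [0.42, 0.87]

ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
# ranked[0] is the most relevant passage together with its score
```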
