# Strategy-QA

## Overview

StrategyQA is an open-domain question answering dataset built on Wikipedia articles. Every question is multi-hop, meaning that multiple paragraphs must be retrieved to answer it, which makes the dataset well suited for evaluating your pipeline's reasoning ability. We do not currently support answer evaluation for this dataset because every answer is True/False. You can access the answer to each question through the `EvaluateResult.each_results` property.

## Example Use

```python
from RAGchain.benchmark.dataset import StrategyQAEvaluator

pipeline = <your pipeline>
retrievals = [<your retrieval>]
db = <your db>

evaluator = StrategyQAEvaluator(pipeline, evaluate_size=20)
evaluator.ingest(retrievals, db) # ingest dataset to db and retrievals
result = evaluator.evaluate()

# print result summary (mean values)
print(result.results)
# print result DataFrame
print(result.each_results)
```
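Because automatic answer evaluation is not supported for this dataset's True/False answers, you can score predictions yourself once you have extracted the gold and predicted booleans from `result.each_results`. The snippet below is a minimal, self-contained sketch of that manual scoring step; the example data is illustrative, and the actual column names in `each_results` should be checked against the DataFrame itself.

```python
def boolean_accuracy(gold, predicted):
    """Fraction of predictions matching the gold True/False answers."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must be the same length")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)


# Illustrative values only -- in practice, pull these out of
# result.each_results after running the evaluator.
gold = [True, False, True, True]
predicted = [True, False, False, True]
print(boolean_accuracy(gold, predicted))  # 0.75
```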
