
Commit eb7555c

Add Docusaurus version v0.3.0

- Created version snapshot in versioned_docs/version-v0.3.0/
- Updated versions.json with new version
- Built and deployed multi-version site

🤖 Generated by Docusaurus versioning workflow

1 parent e6cd6aa commit eb7555c

File tree

379 files changed (+31847 additions, 0 deletions)

Lines changed: 163 additions & 0 deletions

# Evaluation

## Evaluation Concepts

The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.

We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications:
- `/datasetio` + `/datasets` API
- `/scoring` + `/scoring_functions` API
- `/eval` + `/benchmarks` API

This guide covers these APIs and the developer workflow for using Llama Stack to run evaluations for different use cases. Check out our Colab notebook with working examples of evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).

The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our [Core Concepts](../concepts/index.mdx) guide for a high-level overview.

- **DatasetIO**: defines the interface for datasets and data loaders.
  - Associated with the `Dataset` resource.
- **Scoring**: evaluates the outputs of the system.
  - Associated with the `ScoringFunction` resource. We provide a suite of out-of-the-box scoring functions, as well as the ability to add custom evaluators. These scoring functions are the core of defining an evaluation task that outputs evaluation metrics.
- **Eval**: generates outputs (via Inference or Agents) and performs scoring.
  - Associated with the `Benchmark` resource (see the registration sketch after this list).
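
For illustration, here is a minimal sketch of how these resources fit together: it registers a benchmark that ties an already-registered dataset to a scoring function, assuming the `client.benchmarks.register` API exposed by `llama-stack-client` (verify against the [Evaluation Reference](../references/evals_reference/index.mdx) for your version). The identifiers below are placeholders, not values shipped with Llama Stack.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Tie an already-registered dataset and a scoring function together as a
# Benchmark resource that the /eval API can run. IDs below are illustrative.
client.benchmarks.register(
    benchmark_id="my_benchmark",            # hypothetical benchmark id
    dataset_id="my_eval_dataset",           # dataset registered via /datasets
    scoring_functions=["basic::equality"],  # a built-in scoring function
)
```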

## Evaluation Providers

Llama Stack provides multiple evaluation providers:

- **Meta Reference** (`inline::meta-reference`) - Meta's reference implementation with multi-language support
- **NVIDIA** (`remote::nvidia`) - NVIDIA's evaluation platform integration

### Meta Reference

Meta's reference implementation of evaluation tasks with support for multiple languages and evaluation metrics.

#### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `RedisKVStoreConfig \| SqliteKVStoreConfig \| PostgresKVStoreConfig \| MongoDBKVStoreConfig` | No | sqlite | Key-value store configuration |

#### Sample Configuration

```yaml
kvstore:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db
```

#### Features

- Multi-language evaluation support
- Comprehensive evaluation metrics
- Integration with various key-value stores (SQLite, Redis, PostgreSQL, MongoDB); see the configuration sketch after this list
- Built-in support for popular benchmarks
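
As an example of using a non-SQLite store, a PostgreSQL-backed `kvstore` might look like the sketch below. This is a hedged illustration: the field names and environment variables are assumptions based on common PostgreSQL settings in Llama Stack configurations, so verify them against the `PostgresKVStoreConfig` schema for your version before use.

```yaml
# Assumed field names; check PostgresKVStoreConfig for your version
kvstore:
  type: postgres
  host: ${env.POSTGRES_HOST:=localhost}
  port: ${env.POSTGRES_PORT:=5432}
  db: ${env.POSTGRES_DB:=llamastack}
  user: ${env.POSTGRES_USER:=llamastack}
  password: ${env.POSTGRES_PASSWORD:=llamastack}
```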

### NVIDIA

NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform.

#### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `evaluator_url` | `str` | No | http://0.0.0.0:7331 | The URL for accessing the evaluator service |

#### Sample Configuration

```yaml
evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331}
```
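
In a distribution's `run.yaml`, the provider entry wrapping this config might look like the sketch below. It follows the general `providers` layout used in Llama Stack run files and is illustrative only; check your distribution's `run.yaml` for the exact entry.

```yaml
# Illustrative provider entry; your distribution's run.yaml may differ
providers:
  eval:
    - provider_id: nvidia
      provider_type: remote::nvidia
      config:
        evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331}
```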

#### Features

- Integration with NVIDIA's evaluation platform
- Remote evaluation capabilities
- Scalable evaluation processing

## Open-benchmark Eval

### List of open-benchmarks Llama Stack supports

Llama Stack pre-registers several popular open-benchmarks so you can easily evaluate model performance via the CLI.

The list of open-benchmarks we currently support:
- [MMLU-COT](https://arxiv.org/abs/2009.03300) (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model's academic and professional understanding.
- [GPQA-COT](https://arxiv.org/abs/2311.12022) (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
- [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to assess a model's ability to answer short, fact-seeking questions.
- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.

You can follow this [contributing guide](../references/evals_reference/index.mdx#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack.

### Run evaluation on open-benchmarks via CLI

We have built-in functionality to run the supported open-benchmarks using the `llama-stack-client` CLI.

#### Spin up Llama Stack server

Spin up the Llama Stack server with the 'open-benchmark' template:
```
llama stack run llama_stack/distributions/open-benchmark/run.yaml
```

#### Run eval CLI
There are three required inputs to run a benchmark eval:
- `list of benchmark_ids`: The list of benchmark IDs to run evaluation on
- `model_id`: The model ID to evaluate on
- `output_dir`: Path to store the evaluation results
```
llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
  --model_id <model id to evaluate on> \
  --output_dir <directory to store the evaluation results>
```
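
Before running, you can check which benchmark and model IDs are registered on your server. The commands below are an assumption about the `llama-stack-client` CLI's list subcommands; if your client version names them differently, consult the CLI help output for the equivalents.

```
# Assumed list subcommands for discovering registered resource IDs
llama-stack-client benchmarks list
llama-stack-client models list
```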

You can run
```
llama-stack-client eval run-benchmark help
```
to see descriptions of all the flags that `eval run-benchmark` supports.

In the output log, you can find the path of the file that holds your evaluation results. Open that file to see your aggregate evaluation results.

## Usage Example

Here's a basic example of using the evaluation API:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register a dataset for evaluation
client.datasets.register(
    purpose="evaluation",
    source={
        "type": "uri",
        "uri": "huggingface://datasets/llamastack/evaluation_dataset",
    },
    dataset_id="my_eval_dataset",
)

# Run evaluation
eval_result = client.eval.run_evaluation(
    dataset_id="my_eval_dataset",
    scoring_functions=["accuracy", "bleu"],
    model_id="my_model",
)

print(f"Evaluation completed: {eval_result}")
```
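
If you already have generated outputs and only want to score them, the `/scoring` API can be called directly. The sketch below is a hedged illustration: the exact shape of the `scoring_functions` parameters and the expected row columns may vary between versions, so treat the keys and values as assumptions to verify against the Scoring documentation.

```python
# Reusing the `client` from the example above.
# Column names (input_query, generated_answer, expected_answer) and the
# None params value are assumptions; adjust to your scoring function.
response = client.scoring.score(
    input_rows=[
        {
            "input_query": "What is the capital of France?",
            "generated_answer": "Paris",
            "expected_answer": "Paris",
        }
    ],
    scoring_functions={"basic::equality": None},
)
print(response.results)
```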

## Best Practices

- **Choose appropriate providers**: Use Meta Reference for comprehensive evaluation and NVIDIA for platform-specific needs
- **Configure storage properly**: Ensure your key-value store configuration matches your performance requirements
- **Monitor evaluation progress**: Large evaluations can take time, so implement proper monitoring
- **Use appropriate scoring functions**: Select scoring metrics that align with your evaluation goals

## What's Next?

- Check out our Colab notebook with working examples of running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP).
- Check out our [Building Applications - Evaluation](../building_applications/evals.mdx) guide for more details on how to use the Evaluation APIs to evaluate your applications.
- Check out our [Evaluation Reference](../references/evals_reference/index.mdx) for more details on the APIs.
- Explore the [Scoring](./scoring.mdx) documentation for available scoring functions.
