Core Concepts
Teams
A Team is a required top-level organizational and authorization construct.
Folders
A Folder is an optional organizational structure to hold Tests.
Schema
A Schema is required by Horreum to define the metadata associated with a Run. It allows Horreum to process the JSON content to provide validation, charting, and change detection.
A Schema defines the following:
- An optional expected structure of a Dataset, via JSON validation schemas
- Required Labels that define how to use the data in the JSON document
- Optional Transformers, to transform uploaded JSON documents into one or more Datasets
A Schema can apply to an entire Run JSON document, or to parts of a Run JSON document.
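As a sketch of how a Run document might declare its Schema: an uploaded JSON document typically carries a top-level `$schema` key holding the Schema's URI, which Horreum uses to match the document to a registered Schema. The URI and all other field names below are made-up examples, not Horreum defaults.

```javascript
// Hypothetical uploaded Run JSON document. The "$schema" key names the
// Schema (by URI) that Horreum should use to validate and process it;
// everything else is illustrative benchmark output.
const runDocument = {
  "$schema": "urn:my-benchmark:1.0",   // assumed Schema URI
  "config": {
    "clusterNodeCount": 3,
    "productVersion": "2.7.1"
  },
  "results": {
    "maxThroughput": 15234.7,          // e.g. requests per second
    "startupTimeMs": 812
  }
};

console.log(runDocument["$schema"]);   // -> urn:my-benchmark:1.0
```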
Label
A Label is required to define how metrics are extracted from the JSON document and processed by Horreum. Labels are defined by one or more required Extractors and an optional Combination Function.
There are two types of Labels:
- Metrics Label: describes a metric to be used for analysis, e.g. “Max Throughput”, “process RSS”, “startup time”, etc.
- Filtering Label: describes a value that can be used for filtering Datasets and ensuring that datasets are comparable, e.g. “Cluster Node Count”, “Product version”, etc.
A Label can be defined as either a Metrics label, a Filtering label, or both. Filtering Labels are combined into Fingerprints that uniquely identify comparable Datasets within uploaded Runs.
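As an illustrative sketch (the label names and document fields are assumptions for this example, not part of Horreum's API), a Metrics label and a Filtering label might both read from the same Run document:

```javascript
// Hypothetical Run JSON content.
const doc = {
  config: { clusterNodeCount: 3 },
  results: { maxThroughput: 15234.7 }
};

// Metrics label "Max Throughput": its value is used for analysis and
// change detection. Its Extractor conceptually evaluates the JSONPath
// $.results.maxThroughput, mimicked here with a property access.
const maxThroughput = doc.results.maxThroughput;

// Filtering label "Cluster Node Count": its value decides which
// Datasets are comparable. Extractor: $.config.clusterNodeCount.
const clusterNodeCount = doc.config.clusterNodeCount;

console.log({ maxThroughput, clusterNodeCount });
```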
Extractor
An Extractor is a required JSONPath expression that refers to a section of an uploaded JSON document. An Extractor can return one of:
- A scalar value
- An array of values
- A subsection of the uploaded JSON document
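To illustrate the three kinds of result (the document and the JSONPath expressions are made-up examples; Horreum evaluates the JSONPath itself, so the property accesses below only mimic what each expression would select):

```javascript
// Hypothetical uploaded JSON document.
const doc = {
  build: { version: "2.7.1" },
  samples: [12.1, 11.8, 12.4],
  env: { os: "linux", arch: "x86_64" }
};

// Extractor "$.build.version" -> a scalar value
const scalar = doc.build.version;

// Extractor "$.samples" -> an array of values
const array = doc.samples;

// Extractor "$.env" -> a subsection of the uploaded JSON document
const subsection = doc.env;

console.log(scalar, array, subsection);
```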
Note
In the majority of cases, an Extractor will simply point to a single, scalar value.
Combination Function
A Combination Function is an optional JavaScript function that takes all Extractor values as input and produces a Label value.
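A hedged sketch of such a function, assuming it receives an object keyed by Extractor name (the extractor name `samples` and the chosen statistic are illustrative assumptions):

```javascript
// Combination Function for a hypothetical "Max Throughput" label.
// A single Extractor named "samples" is assumed to return an array of
// throughput measurements; the function reduces them to one Label value.
function combine({ samples }) {
  return Math.max(...samples);
}

console.log(combine({ samples: [12.1, 11.8, 12.4] })); // -> 12.4
```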
Note
In the majority of cases, the Combination Function is simply an Identity function with a single input and does not need to be defined.
Test
A Test is a required repository for particular similar Runs and Datasets. You can think of a `test` as a repo for the results of a particular benchmark, i.e. a benchmark performs a certain set of actions against a system under test.
Test Runs can have different configurations, making them not always directly comparable, but the Run results stored under one Test can be filtered by their Fingerprint to ensure that all Datasets used for analysis are comparable.
Run
A Run is a particular single upload instance of a Test. A Run is associated with one or more Schemas in order to define what data to expect, and how to process the JSON document.
Transformers
A Transformer is optionally defined on a Schema and applies required Extractors and a required Combination Function to transform a Run into one or more Datasets. Transformers are typically used to:
- Restructure the JSON document. This is useful where users are processing JSON documents whose structure they do not control and which is not well defined
- Split a Run JSON output into multiple, non-comparable Datasets. This is useful where a benchmark iterates over a configuration and produces a JSON output that contains multiple results for different configurations
A Schema can have 0, 1, or multiple Transformers defined.
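As an illustrative sketch of the second use above (the field names and the exact function shape are assumptions, not Horreum's precise Transformer contract), a Transformer's Combination Function might split one Run into one Dataset per configuration:

```javascript
// Hypothetical Run output: one benchmark run iterating over thread counts.
const run = {
  iterations: [
    { threads: 1, throughput: 4100.0 },
    { threads: 8, throughput: 15234.7 }
  ]
};

// Transformer function: emit one Dataset per iteration, so that results
// for different configurations do not end up in a single Dataset.
function transform({ iterations }) {
  return iterations.map(it => ({
    threads: it.threads,
    throughput: it.throughput
  }));
}

const datasets = transform(run);
console.log(datasets.length); // -> 2
```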
Note
In the majority of cases, the Run data does not need to be transformed and there is a one-to-one direct mapping between Run and Dataset. In this instance, an Identity Transformer is used and does not need to be defined by the user.
Dataset
A Dataset is either the entire Run JSON document, or a subset that has been produced by an optional Transformer. It is possible for a Run to include multiple Datasets, and the Transformer(s) defined on a Schema associated with the Run have the job of parsing out the multiple Datasets.
Note
In most cases, there is a 1:1 relationship between a Run and a Dataset, when the Dataset is expected to have one unified set of results to be analyzed together.
Fingerprint
A Fingerprint is a combination of Filtering labels that uniquely identifies comparable Datasets within a Test.
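Conceptually (a sketch with made-up label names; Horreum computes fingerprints internally), a Fingerprint can be thought of as a key built from Filtering label values, so that only Datasets sharing the same key are compared against each other:

```javascript
// Filtering label values for three hypothetical Datasets.
const datasets = [
  { labels: { clusterNodeCount: 3, productVersion: "2.7.1" }, maxThroughput: 15234.7 },
  { labels: { clusterNodeCount: 3, productVersion: "2.7.1" }, maxThroughput: 15180.2 },
  { labels: { clusterNodeCount: 5, productVersion: "2.7.1" }, maxThroughput: 24017.9 }
];

// Build a fingerprint key from the filtering labels; Datasets with the
// same key are comparable, others are kept apart.
const fingerprint = labels => JSON.stringify(labels);

const groups = new Map();
for (const ds of datasets) {
  const key = fingerprint(ds.labels);
  if (!groups.has(key)) groups.set(key, []);
  groups.get(key).push(ds);
}

console.log(groups.size); // -> 2 comparable groups
```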