Core Tasks
1 - API Keys
Horreum users authenticate using the Open ID Connect (OIDC) authentication protocol. However, in some circumstances it may not be possible to use OIDC. For those cases there is an alternative authentication mechanism in the form of API Keys.
API keys are tokens that grant access to certain parts of the Horreum API. In general, API Keys are associated with a user.
API Key creation
A user can manage their API Keys on the API Keys tab of their user profile (click on the name in the upper right corner). The New API Key button brings up the Create new API Key dialog.
The key name is optional, but it’s recommended that every key has a name that reflects the reason it was created and/or where it’s used. At the moment the API keys give the same level of access the user has, but in the future there may be different types of keys that the user has to choose from. After the request is submitted the new key is revealed.
The user should copy the API Key, and is responsible for keeping it secret. Horreum API Keys are non-retrievable, which means they are not stored anywhere on the system. Once this dialog is closed it’s not possible to display the API key again.
API Key usage
Horreum clients allow an API Key to be specified in their configuration. Other clients need to add the X-Horreum-API-Key HTTP header with the key value. For example, the curl command to use would be:
$ curl -H 'X-Horreum-API-Key: HUSR_00000000_0000_0000_0000_000000000000' [...]
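The same header works from any HTTP client. As a minimal sketch in JavaScript (the key below is the documentation placeholder, not a real credential, and the endpoint path is illustrative):

```javascript
// Sketch: attaching a Horreum API key to a request from JavaScript.
// The key is the documentation placeholder; substitute your own.
const apiKey = "HUSR_00000000_0000_0000_0000_000000000000";
const headers = { "X-Horreum-API-Key": apiKey };

// e.g. fetch("http://localhost:8080/api/...", { headers })
console.log(headers["X-Horreum-API-Key"]);
```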
API Key management
The user can manage their own API Keys in the API Keys tab on their user profile. The fields displayed for each key are:
- Key name: The name given to the key
- Key type: The level of access the key provides
- Creation date: The date the key was created
- Last usage: The date the key was last used
- Status: Shows if the key is valid or has expired / has been revoked. It also shows if the key is about to expire.
For each key there are two possible actions:
- Rename: Changes the key name
- Revoke: Remove the access rights for the key. This action can’t be undone. The user should revoke a compromised key immediately.
API Key expiration
Keys that have not been used recently (in the last 30 days) are automatically revoked. Users are notified in advance using the methods configured for Personal notifications.
2 - Add Users and Set Roles
Horreum is a multi-tenanted system, hosting data for multiple teams: for a detailed discussion please see User management and security.
Users who have a -manager role (e.g. engineers-manager) can create new users, add/remove existing users to the team and manage permissions. To do so, visit your profile settings by clicking on your name in the upper right corner, and switch to the Managed Teams tab.
From the Select team drop down, select one of the teams you manage. Search for existing users in the Find User... search box and use the arrows in the middle to add or remove members of the team. Checkboxes allow you to add/remove roles for this team. When you're finished with the changes, press the Save button at the bottom.
You can also use this screen to create new users to become members of your team.
3 - Create new test
After starting Horreum and logging in you’ll find a blank table with New Test button above:
All you need to fill in here is the test name. Test names must be unique within Horreum.
When you press the Save button on the bottom, several other tabs appear on the top; you can go to Access. The test was created with Private access rights; if you want anonymous users to see your tests, you can set it to Public and save the test.
When you’re done you can navigate to Tests in the bar on the top of the screen and see that the test was really created:
The test is there but in the Run Count column you don’t see any runs. Now you can continue with uploading data into Horreum.
4 - Manage Reports
Background
Creating a Report Configuration in Horreum is straightforward, but deleting one is less obvious. A Report Configuration can be updated or saved as an individual Report, which is a useful procedure when modifying an existing Report that already functions correctly.
Report Deletion
To delete an existing Report, select the folder icon in the Total reports column on the Report Configurations list view. Each instance of a Report has a red coloured button named Delete.
The same task can be performed using the web API. Copy and paste the following into your shell, modifying the REPORT_ID parameter. A successful deletion returns a 204 HTTP response code.
REPORT_ID=<your_report_id_here>
curl 'http://localhost:8080/api/report/table/'$REPORT_ID -H 'content-type: application/json' -H 'X-Horreum-API-Key: '$API_KEY --request DELETE -v
5 - Import and Export Tests and Schemas
Prerequisites:
- Horreum is running
- To export, you have previously defined a Schema for the JSON data you wish to analyze; please see Define a Schema
- To export, you have previously defined a Test; please see Create new Test
Background
To simplify copying Tests and Schemas between Horreum instances, Horreum provides an API to export and import Tests and Schemas. Horreum also supports updating existing Schemas and Tests by importing Tests or Schemas with existing IDs. For security reasons you need to be part of the team or an admin to be able to import/export Tests/Schemas.
TestExport
The export object for Tests is called TestExport and contains a number of fields in addition to what's defined in Test: variables, experiments, actions, subscriptions, datastore and missingDataRules. This is to simplify the import/export experience and make sure that all the data related to a Test has a single entrypoint with regards to import and export. Note that secrets defined on Action are not portable between Horreum instances and there might be security concerns, so they are omitted. The apiKey and password attributes defined on the config field in Datastore are also omitted and will have to be manually added in a separate step.
SchemaExport
The export object for Schemas is called SchemaExport and contains other fields in addition to what's defined in Schema: labels, extractors and transformers. This is to simplify the import/export experience and make sure that all the data related to a Schema has a single entrypoint with regards to import and export.
Import/Export using the UI
Export or Import as an update to an existing Test/Schema
Select the Test/Schema, select the Export link and you will be given the option to import/export as seen here:
Import a new Test/Schema
Click on Schema/Test and there is a button where you can select either Import Schema or Import Test. Select and upload file.
Import Schemas
curl 'http://localhost:8080/api/schema/import/' \
-s -X POST -H 'content-type: application/json' \
-H 'X-Horreum-API-Key: '$API_KEY \
-d @/path/to/schema.json
If you are unfamiliar with generating an API Key, please see Upload Run.
Import Tests
curl 'http://localhost:8080/api/test/import/' \
-s -X POST -H 'content-type: application/json' \
-H 'X-Horreum-API-Key: '$API_KEY \
-d @/path/to/test.json
Export Schemas
SCHEMAID='123'
curl 'http://localhost:8080/api/schema/export/?id='$SCHEMAID \
-H 'X-Horreum-API-Key: '$API_KEY \
-O --output-dir /path/to/folder
Export Tests
TESTID='123'
curl 'http://localhost:8080/api/test/export/?id='$TESTID \
 -s -H 'X-Horreum-API-Key: '$API_KEY \
 -O --output-dir /path/to/folder
6 - Upload Run
Horreum accepts any valid JSON as input. To get the most out of Horreum, though, it is recommended to categorize the input using a JSON schema.
Operations are authorized via API Keys, for details on how to generate an API Key, please refer to API keys.
If you’re running your tests in Jenkins you can skip a lot of the complexity below using Horreum Plugin. This plugin supports both Jenkins Pipeline and Freeform jobs.
Uploading Data
There are several mandatory parameters for the upload:
- JSON data itself
- test: Name or numeric ID of an existing test in Horreum. You can also use a JSON Path to fetch the test name from the data, e.g. $.info.benchmark.
- start, stop: Timestamps when the run commenced and terminated. This should be epoch time in milliseconds, an ISO-8601-formatted string in UTC (e.g. 2020-05-01T10:15:30.00Z) or a JSON Path to any of the above.
- owner: Name of the owning role with the -team suffix, e.g. engineers-team.
- access: one of PUBLIC, PROTECTED or PRIVATE. See more in data access.
Optionally you can also set schema with the URI of the JSON schema, overriding (or providing) the $schema key in the data. You don't need to define the schema in Horreum ahead of time; validation is triggered automatically whenever you add a Run or update the schema, and you'll see the result icon in the Runs/Datasets listing for the given test.
The upload itself can look like:
API_KEY='HUSR_00000000_0000_0000_0000_000000000000'
TEST='$.info.benchmark'
START='2021-08-01T10:35:22.00Z'
STOP='2021-08-01T10:40:28.00Z'
OWNER='dev-team'
ACCESS='PUBLIC'
curl 'http://localhost:8080/api/run/data?test='$TEST'&start='$START'&stop='$STOP'&owner='$OWNER'&access='$ACCESS \
-s -X POST -H 'content-type: application/json' \
-H 'X-Horreum-API-Key: '$API_KEY \
-d @/path/to/data.json
Assuming that you’ve created the test let’s try to upload this JSON document:
{
"$schema": "urn:my-schema:1.0",
"info": {
"benchmark": "FooBarTest",
"ci-url": "https://example.com/build/123"
},
"results": {
"requests": 12345678,
"duration": 300 // the test took 300 seconds
}
}
When you open Horreum you will see that your test contains a single run in the 'Run Count' column.
Click on the run count number with the open-folder icon to see the listing of all runs for the given test:
Even though the uploaded JSON has the $schema key, the Schema column in the table above is empty; Horreum does not know that URI yet and can't do anything with it. You can hit the run ID with the arrow icon in one of the first columns and see the contents of the run you just created:
This page shows the Original Run and an empty Dataset #1. The Dataset content is empty because without the Schema it cannot be used in any meaningful way - let's create the schema and add some labels.
7 - Define Functions
Prerequisites: You have already
- uploaded some data
Functions in Horreum provide a great deal of bespoke functionality that is under the control of a user: wherever a Function is accepted, you can supply custom logic written in JavaScript.
These Functions can be categorized as:
- Selector (filter) - used for applying conditions on input data to return output data
- Transformation - used for changing the data model
- Combination - used for computing a scalar value
- Rendering - reformatting the presentation of the data
When using Horreum you will find Functions used in these Horreum objects:
| Function Type | Horreum Object | Use |
|---|---|---|
| Selector | Test | Experiment, Timeline, Fingerprint, Variable |
| | Report | Filtering, Category, Series, Scale Label |
| Transformation | Schema | Transformer |
| | Report | Components |
| Combination | Schema | Label, Transformer |
| Rendering | Test | View |
| | Report | Category, Series, Scale |
Making use of Horreum Functions
The JavaScript ECMAScript 2023 specification is available throughout Horreum Functions.
Example Filtering Function
These Functions rely on a condition evaluation to return a boolean value.
The following will filter based on the individual Label Extractor only having the value 72.
value => value === 72
Example Transformation Functions
Transformation Functions rely on a returned value that is an Object, Array or scalar value.
This Transformation Function relies on 12 Extractors set up on the Schema Label. Each Extractor is configured to obtain an array of data items (except buildId and buildUrl).
Input JSON
{
"runtimeName": "spring-native",
"rssStartup": 55,
"maxRss": 15,
"avBuildTime": 1234,
"avTimeToFirstRequest": 5,
"avThroughput": 25,
"rssFirstRequest": 5000,
"maxThroughputDensity": 15,
"buildId": "x512",
"buildUrl": "http://acme.com",
"quarkusVersion": "0.1",
"springVersion": "3.0"
}
This Transformation Function uses the map JavaScript function to modify property names, the number of JSON properties, and values. In the transformation, runtime and buildType are created by splitting the runtimeName property. The version property is conditionally derived from runtimeName depending on the presence of the text spring.
({runtimeName, rssStartup, maxRss, avBuildTime, avTimeToFirstRequest, avThroughput, rssFirstRequest, maxThroughputDensity, buildId, buildUrl, quarkusVersion, springVersion}) => {
var map = runtimeName.map((name, i) => ({
runtime: name.split('-')[0],
buildType: name.split('-')[1],
rssStartup: rssStartup[i],
maxRss: maxRss[i],
avBuildTime: avBuildTime[i],
avTimeToFirstRequest: avTimeToFirstRequest[i],
avThroughput: avThroughput[i],
rssFirstRequest: rssFirstRequest[i],
maxThroughputDensity: maxThroughputDensity[i],
buildId: buildId,
buildUrl: buildUrl,
version: ((name.split('-')[0].substring(0, 6) == 'spring' ) ? springVersion: quarkusVersion )
}))
return map;
}
Output JSON
{
"runtime": "spring",
"buildType": "native",
"rssStartup": 55,
"maxRss": 15,
"avBuildTime": 1234,
"avTimeToFirstRequest": 5,
"avThroughput": 25,
"rssFirstRequest": 5000,
"maxThroughputDensity": 15,
"buildId": "x512",
"buildUrl": "http://acme.com",
"version": "3.0"
}
Example Combination Functions
Combination Functions rely on a returned value that is an Object, Array or scalar value.
Input JSON
[5,10,15,20,10]
This Function will conditionally reduce an array of values unless there is only a single value of type number.
value => typeof value === "number" ? value : value.reduce((a, b) => Math.max(a, b))
Output JSON
20
The following example returns a scalar value.
Input JSON
{
"duration": "62.5",
"requests": "50"
}
This Function will compute the amount of time per request, rounded to 2 decimal places (note that toFixed returns a string).
value => (value.duration / value.requests).toFixed(2)
Output JSON
1.25
Example Rendering Functions
A Rendering Function
will change the presentation or add metadata for rendering in the UI.
Input JSON
Hello World
This Rendering Function adds HTML markup and sets the color of the span text.
value => '<span style="color: Tomato;">' + value + '</span>'
Output text
<span style="color: Tomato;">Hello World</span>
Troubleshooting Functions
See the section dedicated to Troubleshooting Functions.
8 - Dataset Experiment Evaluation
Using the Dataset Experiment Evaluation View
Using the Experiment evaluation window you can quickly see the values and the relative difference of a Run.
Start by initially loading the Test, then click the Dataset list button.
Then select the individual Test Run you want to compare with its Baseline.
By navigating to the uploaded run Dataset view page you will see a button "Evaluate experiment". Clicking this button opens a window revealing the comparison result for the current Run and the Baseline Run.
Results show the values followed by the percentage difference.
Datasets Comparison View
Horreum provides multiple Run comparisons in the Dataset Comparison View. We can filter based on the Filter labels defined in the Schema.
Start by initially loading the Dataset list. Then click the button “Select for comparison”.
Next the Comparison view is displayed. This is where filters are set.
Select a number of Runs to be compared using the "Add to comparison" button, then click "Compare labels". Displayed next is the Labels Comparison view.
Displayed here are multiple Datasets. Schema Labels can be expanded to display a bar graph representing each Run.
In this guide two things were shown: how to compare an individual Dataset, followed by comparing multiple Datasets.
9 - Transform Runs to Datasets
Horreum stores data in the JSON format. The raw document uploaded to the repository turns into a Run, however most operations and visualizations work on Datasets. By default there's a 1-on-1 relation between Runs and Datasets; the default transformation extracts objects annotated with a JSON schema (the $schema property) and puts them into an array - it's easier to process Datasets internally after that. It is possible to customize this transformation, though, and most importantly - you can create multiple Datasets out of a single Run. This is useful e.g. when your tooling produces a single document that contains results for multiple tests, or with different configurations. With the Run split into more Datasets it's much easier to display and analyze these results individually.
We assume that you’ve already created a test, uploaded some data and defined the Schema.
In this example we use the test acme-regression with the basic schema urn:acme-schema:1.0 and uploaded JSON:
{
"$schema": "urn:acme-schema:1.0",
"testName": ["Test CPU1", "Test CPU2", "Test CPU3"],
"throughput": [0.1, 0.2, 0.3]
}
Defining a Transformer
Here we will show how to define the transformation from the raw input into individual Datasets so that each testName and throughput pair goes into a separate Dataset.
As the structure of the documents for individual tests (stored in Datasets) differs from the input document structure (urn:acme-schema:1.0) we will define a second Schema - let's use the URI urn:acme-sub-schema:1.0.
Back in the acme-schema we switch to the Transformers tab and add a new CpuDatasetTransformer. In this Transformer we select the acme-sub-schema as Target schema URI: the $schema property will be added to each object produced by this Transformer. An alternative would be setting the target schema manually in the Combination function below. Note that it is vital that each Transformer annotates its output with some schema - without that, Horreum could not determine the structure of the data and process it further.
We add two extractors, testName and throughput, that will get the values from the raw JSON object. These values are further processed in the Combination function. If the function is not defined, the result will be an object with properties matching the extractor names - the same object as the input of this function.
As a result, the transformer will return an array of objects where each element contributes to a different DataSet.
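A sketch of such a combination function for this example (in the Transformer editor you would enter just the arrow function; the output property names test and throughput are illustrative, chosen to match whatever the sub-schema's labels expect):

```javascript
// Sketch: combination function for the CpuDatasetTransformer.
// Input: one object with a property per extractor; each property holds an array.
// Output: an array -- Horreum creates a separate Dataset per element.
const transform = ({ testName, throughput }) =>
  testName.map((name, i) => ({
    test: name,               // illustrative property names
    throughput: throughput[i]
  }));

console.log(transform({
  testName: ["Test CPU1", "Test CPU2", "Test CPU3"],
  throughput: [0.1, 0.2, 0.3]
}));
```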
Use transformers in the test
Each schema can define multiple Transformers, therefore we have to assign our new transformer to the acme-regression test.
Tests > Transformers
After saving the test, press Recalculate datasets and then go to the Dataset list to see the results. You will find 3 Datasets, each holding a separate test result.
Use labels for the fingerprint
When you’ve split the Run into individual Datasets it’s likely that for purposes of Change Detection you want to track values from each test individually. Horreum can identify such independent series through a Fingerprint: set of labels with an unique combination of values.
Go to the acme-sub-schema and define the labels testname and throughput: the former will be used for the Fingerprint, the latter will be consumed in a Change Detection Variable.
Then switch to Test > Change detection and use those labels. The Fingerprint filter is not necessary here (it would let you exclude some Datasets from Change detection analysis).
After saving and recalculation you will see the new data-points in Changes
In this guide we transformed the Run from the batch results arrays into individual Datasets. Then we extracted data using Labels and used them for Change detection.
10 - Configure Change Detection
Prerequisites: You have already
- uploaded some data
- defined the Schema with some labels.
One of the most important features of Horreum is Change Detection - checking if the new results significantly differ from previous data.
Horreum uses Change Detection Variables to extract and calculate Datapoints - for each dataset and each variable it creates one datapoint.
Horreum compares recent datapoint(s) to older ones and if it spots a significant difference it emits a Change, and sends a notification to subscribed users or teams.
The user can later confirm that there was a change (restarting the history from this point for the purpose of change detection) or dismiss it.
Let’s go to the test and switch to the ‘Change Detection’ tab:
We have created one variable Throughput using the label throughput. You could use multiple labels and combine them using a JavaScript function, similar to the way you combined JSON Path results when defining the label. But in this case further calculation is not necessary.
One chart can display series of datapoints for multiple variables: you can use Groups to plot the variables together. The dashboard will contain one chart for each group and for each variable without a group set.
If you scroll down to the 'Conditions' section you'll find the criteria used to detect changes. By default there are two conditions: the first condition effectively compares the last datapoint against the mean of datapoints since the last change; the second condition compares the mean of a short sliding window to the mean of preceding datapoints, with stricter thresholds. You can change or remove these, or add other criteria such as checking values against fixed thresholds.
The default conditions do not distinguish between changes causing an increase or a decrease in a given metric. Even if the effect is positive (e.g. a decrease in latency or an increase in throughput) users should know about it - the improvement could be triggered by a functional bug as well.
In the upper part of this screen you see a selection of Fingerprint labels and a filter. In some cases you might use the same Test for runs with different configurations, have multiple configurations or test-cases within one run (and produce multiple Datasets out of one Run), or otherwise cause the Datasets to not be directly comparable, forming several independent series of datasets. To identify those series you should define one or more labels, e.g. a label arch being either x86 or arm and a label cpus with the number of CPUs used in the test. Each combination of values will form a unique 'fingerprint' of the Dataset; had we used 3 values for cpus there would be 6 possible combinations of those labels. When a new Dataset is added, only those older Datasets with the same fingerprint will be used to check against. The filter can be used to limit which datasets are suitable for Change Detection at all - you could also add a label for debug vs. production builds and ignore debug builds.
When we hit the save button Horreum asks if it should recalculate datapoints from all existing runs as we have changed the regression variables definition. Let’s check the debug logs option and proceed with the recalculation.
When the recalculation finishes we can click on the ‘Show log’ button in upper right corner to see what was actually executed - this is convenient when you’re debugging a more complicated calculation function. If there’s something wrong with the labels you can click on the Dataset (1/1 in this case) and display label values and calculation log using the buttons above the JSON document.
If everything works as planned you can close the log and click the 'Go to series' button, or use the 'Changes' link in the navigation bar on the top and select the test in the dropdown. If the uploaded data has a timestamp older than 1 month you won't see it by default; press the Find last datapoints button to scroll back. You will be presented with a chart with a single dot with result data (we have created only one Run/Dataset so far, so there's only one datapoint).
In order to receive notifications, users need to subscribe to a given test. The test owner can manage all subscriptions in the 'Subscriptions' tab in the test. Look up the current user using the search bar, select them in the left pane and use the arrow to move them to the right pane.
You can do the same with teams using the lists in the bottom. All available teams are listed in the left pane (no need for search). When certain users are members of a subscribed team but don’t want to receive notifications they can opt out in the middle pair of lists.
When you save the test and navigate to ‘Tests’, you can notice a bold eye icon on the very left of the row. This signifies that you are subscribed. You can click it and manage your own subscriptions here as well.
Despite being subscribed to some tests, Horreum does not yet know how it should notify you. Click on your name with the user icon next to the 'Logout' button in the top right corner and add a personal notification. Currently only email notification is implemented; use your email address as the data field. You can also switch to the 'Team notifications' tab and set a notification for an entire team. After saving, Horreum is ready to let you know when a Change is detected.
Developers often run performance tests nightly or weekly on a CI server. After setting up the workflow and verifying that it works you might be content that you have no notifications in your mailbox, but in fact the test might get broken and the data is not uploaded at all. That’s why Horreum implements watchdogs for periodically uploaded test runs.
Go to the 'Missing data notifications' tab in the test and click the 'Add new rule…' button. If you expect that the test will run nightly (or daily), set the 'Max staleness' to a value somewhat greater than 24 hours, e.g. 25 h. We don't need to filter runs using the Labels and Condition, so you can keep them empty - these are useful e.g. if you have datasets with different fingerprints. The rules are periodically checked, and if there is no dataset matching the filter with a start timestamp in the last 25 hours, the subscribed users and teams will receive a notification.
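As a sketch of such a filter (assuming the Condition is a JavaScript function that receives the selected label values and returns a boolean deciding whether the dataset counts toward the rule; the buildType label here is made up for illustration):

```javascript
// Hypothetical missing-data rule condition: only production datasets count,
// so a missing debug run does not trigger a notification.
const condition = value => value.buildType !== "debug";

console.log(condition({ buildType: "production" })); // → true
```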
In the ‘General’ tab it is possible to switch off all notifications from this test without unsubscribing or changing rules. This is useful e.g. when you bulk upload historical results and don’t want to spam everyone’s mailbox.
You can continue exploring Horreum in the Actions guide.
11 - Configure Actions
In the change detection guide you've seen how you can inform users about changes in your project's performance. You can use another mechanism, webhooks, to notify other services about noteworthy events, e.g. bots commenting on a version control system, updating a status page or triggering another job in CI. Event Actions are produced in the following situations:
- New Run event
- New Change Detection event
- Experiment Result event
Additionally, Global Actions support one more event type:
- New Test event
This allows remote systems or users to rely on automation that can reduce the necessity of manual tasks. Since calling arbitrary services in the intranet is security-sensitive, Horreum administrators have to set up an Action Allow list of URL prefixes (e.g. domains). There are a variety of Webhook implementations provided by Horreum:
- Generic HTTP POST method request
- Github Issue Comment
- Create a new Github Issue
- Post a message to a Slack channel
As a user with the admin role you can see the 'Administration' link in the navigation bar on the top; go there and in the Global Actions tab hit the 'Add prefix' button:
When you save the modal you will see the prefix appear in the table. Then in the lower half of the screen you can add global actions: whenever a new Test is created, a Run is uploaded or a Change is emitted, Horreum can trigger an action.
Test owners can set Actions for individual tests, too. Go to the Test configuration, 'Actions' tab and press the 'New Action' button. This works identically to the global actions.
Even though non-admins (in the case of global hooks) and non-owners of a given test cannot see the URL, it is advisable not to use any security-sensitive tokens in it.
HTTP Web Hook Action
Horreum can issue an HTTP POST request to a registered URL prefix, using the new JSON-encoded entity as a request body. You can also use a JSON path (see the footnote below) wrapped in ${...}, e.g. ${$.data.foo}, in the URL to refer to the object sent.
GitHub Issue Create Action
Horreum can create a GitHub issue against a named user (or organization) repo on a “change/new” event type. Horreum creates a GitHub formatted markdown representing the event.
You supply the owner, repository, issue title, and a GitHub token for authentication.
GitHub Issue Comment Action
Horreum can add a comment to an existing GitHub issue on an “experiment_result/new” event, identifying the issue either by owner, repo, and issue number, or by a complete GitHub URI, and a GitHub token for authentication.
Slack Channel Message Action
Horreum can post a comment to a Slack channel on a "test/new", "run/new", "change/new", or "experiment_result/new" event. Horreum creates a formatted markdown representing the event.
You supply the Slack channel ID, and a Slack app OAuth token.
Footnote: In this case the JSON path is evaluated in the application, not in PostgreSQL, therefore you need to use the Jayway JSON Path syntax - this is a port of the original JavaScript JSON Path implementation.
12 - Re-transform a Run
Re-transforming the Dataset(s) in a Run is useful after any of the following tasks are completed in Horreum:
- Changing the selection of Transformers in a Test
- Changing a Transformer's definition
- Changing a Schema
While re-transforming isn't necessary for existing Dataset(s) to continue operating normally, an update with the latest derived values is useful to resolve any issues with incomplete derived values. These are the steps for a Run:
- Navigate to the Test Run List
- Select the individual Run
- Click the blue coloured button with the text “Re-transform datasets”
Alternatively, on the Test edit page
- Select the Transformers tab
- Click the Re-transform datasets button
- Accept the prompt to re-transform datasets by clicking the button
13 - Define a Schema
Prerequisites: You have already
- uploaded some data
In order to extract data from the Run JSON document we need to annotate it with $schema and tell Horreum how to query it.
Make sure that you're logged in, go to Schemas on the upper bar and click the 'New Schema' button:
Fill in the name and URI (urn:my-schema:1.0 if you're using the uploaded example) and hit the Save button on the bottom.
Switch to the 'Labels' tab and add two labels: let's call the first one ci-url, where you'd extract just a single item from the JSON document using the PostgreSQL JSON Path $.info."ci-url". In that case you can ignore the combination function, as you don't need to combine/calculate anything. You can uncheck the 'filtering' checkbox as you likely won't search your datasets based on URL.
Combination Functions
In situations where a derived metric is necessary, the Combination Function is used to calculate the value.
Here we create a separate Label called throughput and we extract two elements from the JSON document: requests using $.results.requests and duration using $.results.duration. In this case the input to the calculation function will be an object with two fields: requests and duration. You can get the throughput by adding a combination function (in JavaScript) in the lambda syntax: value => value.requests / value.duration. Had we used only one JSON Path, the only argument to the function would be directly the result of the JSON Path query.
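Applied to the example document from the Upload Run guide (requests: 12345678, duration: 300), this combination function can be checked locally:

```javascript
// The throughput combination function from above. Horreum passes one object
// with a property per extractor; the return value becomes the label value.
const throughput = value => value.requests / value.duration;

console.log(throughput({ requests: 12345678, duration: 300 })); // → 41152.26
```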
Note that this function is calculated server-side and it is cached in the database. It is automatically recalculated when the label definition changes; the datasets are processed asynchronously, though, so there might be some lag.
Finally hit the Save button, go to Tests and in the table click on the 1 with the folder icon in the Datasets column. (Note that by default a Run produces a single dataset - more about this in Transform Runs into Datasets.) This will bring you to the listing of datasets in this test, and this time you can see in the Schema tab that the schema was recognized.
However, you still don't see the labels - for that, click on the 'Edit test' button and switch to the 'Views' tab. The 'Default' View is already created but it's empty; hit the 'Add component' button on the right side twice and fill in the columns, using the labels we've created before. We can further customize the 'Throughput' by adding the "reqs/s" suffix. Note that this function can return any HTML; it will be included in the page as-is. The rendering happens client-side and you have the dataset entity (not just the JSON document) as the second argument to the render function. The third argument would be your personal OAuth token.
It’s not necessary to turn URLs into hyperlinks, though; Horreum will do that for you.
To see the result click on the Save button and then on ‘Dataset list’ in the upper part. Now you’ll see the CI link and throughput columns in place. If the labels are not calculated correctly you can enter the Dataset by clicking on its ID in the Run column and explore Label values through the button above the JSON document. If there was e.g. a syntax error in the Javascript function you could find an error in the ‘Labels log’ there, too.
You might be wondering why you can't set the JSON path directly in the view; Concepts explains why this separation is useful when the format of your data evolves. Also, a label defined once can be used in multiple places, e.g. for Change Detection.