1 - Add Users and Set Roles

Add new Users and Roles to Horreum to give users permissions to manage data in Horreum

Horreum is a multi-tenanted system, hosting data for multiple teams: for a detailed discussion please see User management and security.

Users who have a -manager role (e.g. engineers-manager) can create new users, add or remove existing users on the team, and manage permissions. To do so, visit your profile settings by clicking on your name in the upper right corner and switch to the Managed Teams tab.

Manage Teams

From the Select team drop-down, select one of the teams you manage. Search for existing users in the Find User… search box and use the arrows in the middle to add or remove members of the team. Checkboxes allow you to add or remove roles for this team. When you’re finished with the changes, press the Save button at the bottom.

You can also use this screen to create new users to become members of your team.

2 - Create new test

Create a new Test to store benchmark Run data

After starting Horreum and logging in you’ll find a blank table with New Test button above:

Manage Teams

All you need to fill in here is the test name. Test names must be unique within Horreum.

Manage Teams

When you press the Save button at the bottom, several other tabs appear at the top; go to Access. The test was created with Private access rights; if you want anonymous users to see your test, set it to Public and save the test.

Manage Teams

When you’re done you can navigate to Tests in the bar on the top of the screen and see that the test was really created:

Manage Teams

The test is there but in the Run Count column you don’t see any runs. Now you can continue with uploading data into Horreum.

3 - Manage Reports

How to manage Reports and Report configurations in Horreum

Background

Creating a Report Configuration in Horreum is straightforward, but deleting one is less obvious. A Report Configuration can either be updated in place or saved as an individual Report, which is a useful way to preserve a working Report while modifying its configuration.

Report Deletion

Select Report configurations

To delete an existing Report, select the folder icon in the Total reports column on the Report Configurations list view. Each instance of a Report has a red button named Delete.

Available Report configurations

The same task can be performed using the web API. Copy and paste the following into your shell, modifying the REPORT_ID parameter first. A successful deletion returns a 204 HTTP response code.

TOKEN=$(curl -s http://localhost:8180/realms/horreum/protocol/openid-connect/token \
    -d 'username=user' -d 'password=secret' \
    -d 'grant_type=password' -d 'client_id=horreum-ui' \
    | jq -r .access_token)
REPORT_ID=<your_report_id_here>
curl   'http://localhost:8080/api/report/table/'$REPORT_ID   -H 'content-type: application/json' -H 'Authorization: Bearer '$TOKEN --request DELETE -v

4 - Import and Export Tests and Schemas

How to import and export Tests and Schemas in Horreum

Prerequisites:

  1. Horreum is running
  2. To export, you must have previously defined a Schema for the JSON data you wish to analyze; please see Define a Schema
  3. To export, you must have previously defined a Test; please see Create new Test

Background

To simplify copying Tests and Schemas between Horreum instances, Horreum provides a simple API to export and import Tests and Schemas. Horreum also supports updating existing Schemas and Tests by importing Tests or Schemas with existing IDs. For security reasons you need to be part of the owning team or an admin to be able to import or export Tests and Schemas.

TestExport

The export object for Tests is called TestExport and contains a number of fields in addition to what’s defined in Test. This includes variables, experiments, actions, subscriptions, datastore and missingDataRules. This simplifies the import/export experience and ensures that all the data related to a Test has a single entry point for import and export. Note that secrets defined on an Action are not portable between Horreum instances and could pose a security concern, so they are omitted. The apiKey and password attributes defined on the config field of a Datastore are also omitted and have to be added manually in a separate step.

SchemaExport

The export object for Schemas is called SchemaExport and contains other fields in addition to what’s defined in Schema. This includes labels, extractors and transformers. This simplifies the import/export experience and ensures that all the data related to a Schema has a single entry point for import and export.

Import/Export using the UI

Export or Import as an update to an existing Test/Schema

Select the Test/Schema, select the Export link and you will be given the option to import/export as seen here:

import-export

Import a new Test/Schema

Click on Schema/Test and use the button to select either Import Schema or Import Test, then select and upload the file.

Import Schemas

curl 'http://localhost:8080/api/schema/import/' \
    -s -X POST -H 'content-type: application/json' \
    -H 'Authorization: Bearer '$TOKEN \
    -d @/path/to/schema.json

If you are unfamiliar with creating the auth token please see Upload Run.

Import Tests

curl 'http://localhost:8080/api/test/import/' \
    -s -X POST -H 'content-type: application/json' \
    -H 'Authorization: Bearer '$TOKEN \
    -d @/path/to/test.json

Export Schemas

SCHEMAID='123'
curl 'http://localhost:8080/api/schema/export/?id='$SCHEMAID \
    -H 'Authorization: Bearer '$TOKEN \
    -O --output-dir /path/to/folder

Export Tests

TESTID='123'
curl 'http://localhost:8080/api/test/export/?id='$TESTID \
    -s \
    -H 'Authorization: Bearer '$TOKEN \
    -O --output-dir /path/to/folder

5 - Upload Run

Horreum accepts any valid JSON as input. To get the maximum out of Horreum, though, it is recommended to categorize the input using a JSON schema.

There are two principal ways to authorize operations:

  • Authentication against OIDC provider (Keycloak): This is the standard way that you use when accessing Horreum UI - you use your credentials to get a JSON Web Token (JWT) and this is stored in the browser session. When accessing Horreum over the REST API you need to use this for Bearer Authentication. The authorization is based on the teams and roles within those teams that you have.
  • Horreum Tokens: In order to provide access to non-authenticated users via a link, or to let automated scripts perform tasks, Horreum can generate a random token consisting of 80 hexadecimal digits. This token cannot be used in the Authorization header; operations that support tokens usually accept a token parameter.
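To make the distinction concrete, a minimal sketch of passing a Horreum token as a query parameter rather than in the Authorization header; the helper function and the endpoint path below are illustrative, not part of the Horreum API:

```javascript
// Sketch: a Horreum token is appended to the request URL as a query
// parameter. The helper and the endpoint path are illustrative only.
function withToken(url, token) {
  const separator = url.includes('?') ? '&' : '?';
  return url + separator + 'token=' + encodeURIComponent(token);
}

withToken('http://localhost:8080/api/run/42/data', 'abc123');
// → 'http://localhost:8080/api/run/42/data?token=abc123'
```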

If you’re running your tests in Jenkins you can skip a lot of the complexity below using Horreum Plugin. This plugin supports both Jenkins Pipeline and Freeform jobs.

Getting JWT token

New data can be uploaded into Horreum only by authorized users. We recommend setting up a separate user account for the load-driver (e.g. Hyperfoil) or CI toolchain that will upload the data as part of your benchmark pipeline. This user must have the permission to upload for the given team; e.g. if you’ll use dev-team as the owner, this role is called dev-uploader and it is a composition of the team role (dev-team) and the uploader role. You can read more about user management here.

TOKEN=$(curl -s http://localhost:8180/realms/horreum/protocol/openid-connect/token \
    -d 'username=user' -d 'password=secret' \
    -d 'grant_type=password' -d 'client_id=horreum-ui' \
    | jq -r .access_token)

A note on JWT token issuer: OIDC-enabled applications usually validate the URL that issued the token against the URL of the authentication server the application is configured to use - if those don’t match, you receive a 403 Forbidden response without further information. Had you used http://127.0.0.1:8180 as KEYCLOAK_URL in the example above, you would get rejected in developer mode with the default infrastructure, even though localhost resolves to 127.0.0.1 - the URL has to match what you have in horreum-backend/.env as QUARKUS_OIDC_AUTH_SERVER_URL. You can disable this check with -Dquarkus.oidc.token.issuer=any, but that is definitely not recommended in production.

Using offline JWT token

An access token has a very limited lifespan; when you want to perform the upload from a CI script and don’t want to store the password in it, you can keep an offline token. This token cannot be used directly as an access token; instead, you store it and use it to obtain a regular short-lived access token:

OFFLINE_TOKEN=$(curl -s http://localhost:8180/realms/horreum/protocol/openid-connect/token \
    -d 'username=user' -d 'password=secret' \
    -d 'grant_type=password' -d 'client_id=horreum-ui' -d 'scope=offline_access' \
    | jq -r .refresh_token)
TOKEN=$(curl -s http://localhost:8180/realms/horreum/protocol/openid-connect/token \
    -d 'refresh_token='$OFFLINE_TOKEN \
    -d 'grant_type=refresh_token' -d 'client_id=horreum-ui' \
    |  jq -r .access_token)

Note that the offline token also expires eventually, by default after 30 days.

Getting Horreum token

In order to retrieve an upload token you need to navigate to the particular Test configuration page, switch to the ‘Access’ tab and press the ‘Add new token’ button, checking permissions for ‘Read’ and ‘Upload’. The token string will be displayed only once; if you lose it, revoke the token and create a new one.

This token should not be used for Bearer Authentication (do not use it in the Authorization HTTP header) as in the examples below; instead you need to append &token=<horreum-token> to the query.

Uploading the data

There are several mandatory parameters for the upload:

  • JSON data itself
  • test: Name or numeric ID of an existing test in Horreum. You can also use JSON Path to fetch the test name from the data, e.g. $.info.benchmark.
  • start, stop: Timestamps when the run commenced and terminated. This should be epoch time in milliseconds, an ISO-8601-formatted string in UTC (e.g. 2020-05-01T10:15:30.00Z) or a JSON Path to any of the above.
  • owner: Name of the owning role with -team suffix, e.g. engineers-team.
  • access: one of PUBLIC, PROTECTED or PRIVATE. See more in data access.

Optionally you can also set schema with the URI of the JSON schema, overriding (or providing) the $schema key in the data. You don’t need to define the schema in Horreum ahead of time, though; the validation is triggered automatically whenever you add a Run or update the schema, and you’ll see the result icon in the Runs/Datasets listing for the given test.
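The mandatory parameters above end up in the query string of the upload request. A sketch of assembling that query string; the helper function is ours, only the parameter names and values come from the documentation:

```javascript
// Sketch: assemble the /api/run/data query string from the mandatory
// upload parameters. The helper itself is illustrative.
function buildUploadUrl(base, {test, start, stop, owner, access}) {
  const params = new URLSearchParams({test, start, stop, owner, access});
  return `${base}/api/run/data?${params}`;
}

const url = buildUploadUrl('http://localhost:8080', {
  test: '$.info.benchmark',
  start: '2021-08-01T10:35:22.00Z',
  stop: '2021-08-01T10:40:28.00Z',
  owner: 'dev-team',
  access: 'PUBLIC'
});
// Note: URLSearchParams also percent-encodes characters such as '$' and ':'
```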

The upload itself can look like:

TEST='$.info.benchmark'
START='2021-08-01T10:35:22.00Z'
STOP='2021-08-01T10:40:28.00Z'
OWNER='dev-team'
ACCESS='PUBLIC'
curl 'http://localhost:8080/api/run/data?test='$TEST'&start='$START'&stop='$STOP'&owner='$OWNER'&access='$ACCESS \
    -s -X POST -H 'content-type: application/json' \
    -H 'Authorization: Bearer '$TOKEN \
    -d @/path/to/data.json

Assuming that you’ve created the test, let’s try to upload this JSON document (the duration is in seconds):

{
  "$schema": "urn:my-schema:1.0",
  "info": {
    "benchmark": "FooBarTest",
    "ci-url": "https://example.com/build/123"
  },
  "results": {
    "requests": 12345678,
    "duration": 300
  }
}

When you open Horreum you will see that your test contains a single run in the ‘Run Count’ column.

Tests List

Click on the run count number with the open-folder icon to see the listing of all runs for the given test:

Runs List

Even though the uploaded JSON has the $schema key, the Schema column in the table above is empty; Horreum does not know that URI yet and can’t do anything with it. You can click the run ID with the arrow icon in one of the first columns and see the contents of the run you just created:

Run Details

This page shows the Original Run and an empty Dataset #1. The Dataset content is empty because without the Schema it cannot be used in any meaningful way - let’s create the schema and add some labels.

6 - Define Functions

Defining a Function in Horreum is commonly used to modify structure and generate new values

Prerequisites: You have already

  1. created a Test
  2. uploaded some data

Functions in Horreum provide a great deal of bespoke functionality under the control of the user: wherever a Function is accepted, you can supply custom logic written in JavaScript.

These Functions can be categorized as:

  • Selector (filter) - used for applying conditions on input data to return output data
  • Transformation - used for changing the data model
  • Combination - used for computing a scalar value
  • Rendering - reformatting the presentation of the data

When using Horreum you will find Functions used in these Horreum objects:

Function Type    Horreum Object    Use
Selector         Test              Experiment, Timeline, Fingerprint, Variable
                 Report            Filtering, Category, Series, Scale Label
Transformation   Schema            Transformer
                 Report            Components
Combination      Schema            Label, Transformer
Rendering        Test              View
                 Report            Category, Series, Scale

Making use of Horreum Functions

The JavaScript ECMAScript 2023 specification is available throughout Horreum Functions.

Example Filtering Function

These Functions rely on a condition evaluation to return a boolean value. The following will filter for Label Extractor values equal to 72.

value => value === 72
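Applying the predicate to a few sample values shows the strict comparison at work (the sample array below is ours, for illustration):

```javascript
// The filtering Function above is a predicate; note that strict
// equality rejects the string '72'.
const isSeventyTwo = value => value === 72;

[72, 71, '72'].map(isSeventyTwo);  // → [true, false, false]
```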

Example Transformation Functions

Transformation Functions return an Object, an Array or a scalar value. The following Transformation Function relies on 12 Extractors set up on the Schema Label, each configured to obtain an array of data items (except buildId and buildUrl).

Input JSON

{
  "runtimeName": "spring-native",
  "rssStartup": 55,
  "maxRss": 15,
  "avBuildTime": 1234,
  "avTimeToFirstRequest": 5,
  "avThroughput": 25,
  "rssFirstRequest": 5000,
  "maxThroughputDensity": 15,
  "buildId": "x512",
  "buildUrl": "http://acme.com",
  "quarkusVersion": "0.1",
  "springVersion": "3.0"
}

This Transformation Function uses the map JavaScript function to modify property names, the number of JSON properties and the values. In the transformation, runtime and buildType are derived by splitting the runtimeName property. The version property is conditionally derived from runtimeName depending on the presence of the text spring.

({runtimeName, rssStartup, maxRss, avBuildTime, avTimeToFirstRequest, avThroughput, rssFirstRequest, maxThroughputDensity, buildId, buildUrl, quarkusVersion, springVersion}) => {
    var map = runtimeName.map((name, i) => ({
        runtime: name.split('-')[0],
        buildType: name.split('-')[1],
        rssStartup: rssStartup[i],
        maxRss: maxRss[i],
        avBuildTime: avBuildTime[i],
        avTimeToFirstRequest: avTimeToFirstRequest[i],
        avThroughput: avThroughput[i],
        rssFirstRequest: rssFirstRequest[i],
        maxThroughputDensity: maxThroughputDensity[i],
        buildId: buildId,
        buildUrl: buildUrl,
        version: ((name.split('-')[0].substring(0, 6) == 'spring' ) ? springVersion: quarkusVersion )
    }))
    return map;
}

Output JSON

{
  "runtime": "spring",
  "buildType": "native",
  "rssStartup": 55,
  "maxRss": 15,
  "avBuildTime": 1234,
  "avTimeToFirstRequest": 5,
  "avThroughput": 25,
  "rssFirstRequest": 5000,
  "maxThroughputDensity": 15,
  "buildId": "x512",
  "buildUrl": "http://acme.com",
  "version": "3.0"
}
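To see the per-index mapping at work, here is a reduced sketch of the same Function with only four of the twelve properties; the input arrays below are made up for illustration:

```javascript
// Reduced sketch of the Transformation Function above (four properties
// instead of twelve). The input arrays are illustrative.
const transform = ({runtimeName, rssStartup, quarkusVersion, springVersion}) =>
  runtimeName.map((name, i) => ({
    runtime: name.split('-')[0],
    buildType: name.split('-')[1],
    rssStartup: rssStartup[i],
    version: name.split('-')[0].substring(0, 6) === 'spring' ? springVersion : quarkusVersion
  }));

const rows = transform({
  runtimeName: ['spring-native', 'quarkus-jvm'],
  rssStartup: [55, 80],
  quarkusVersion: '0.1',
  springVersion: '3.0'
});
// rows[0] → { runtime: 'spring', buildType: 'native', rssStartup: 55, version: '3.0' }
// rows[1] → { runtime: 'quarkus', buildType: 'jvm', rssStartup: 80, version: '0.1' }
```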

Example Combination Functions

Combination Functions return an Object, an Array or a scalar value.

Input JSON

[5,10,15,20,10]

This Function reduces an array of values to its maximum, unless the value is already a single number.

value => typeof value === "number" ? value : value.reduce((a, b) => Math.max(a, b))

Output JSON

20
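Running the Function above on both shapes of input confirms the behaviour:

```javascript
// The Combination Function above: pass a number through unchanged,
// reduce an array to its maximum otherwise.
const combine = value =>
  typeof value === 'number' ? value : value.reduce((a, b) => Math.max(a, b));

combine([5, 10, 15, 20, 10]);  // → 20
combine(42);                   // → 42
```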

The following example returns a scalar value.

Input JSON

{
  "duration": "62.5",
  "requests": "50"
}

This Function computes the amount of time per request, rounded to 2 decimal places (note that toFixed returns the result as a string).

value => (value.duration / value.requests).toFixed(2)

Output JSON

1.25

Example Rendering Functions

A Rendering Function will change the presentation or add metadata for rendering in the UI.

Input JSON

Hello World

This Rendering Function adds HTML markup and sets the color of the span text.

value => '<span style="color: Tomato;">' + value + '</span>'

Output text

<span style="color: Tomato;">Hello World</span>

Troubleshooting Functions

See the section dedicated to Troubleshooting Functions.

7 - Dataset Experiment Evaluation

A document explaining the use of the Experiment Evaluation function in the Horreum UI

Using the Dataset Experiment Evaluation View

Using the Experiment Evaluation window you can quickly see the values and the relative difference for a Run. Start by loading the Test, then click the Dataset list button.

Dataset List

Then select the individual Test Run you want to compare with its Baseline.

Navigating to the uploaded run’s Dataset view page you will see an ‘Evaluate experiment’ button. Clicking this button opens a window revealing the comparison result for the current Run and the Baseline Run.

Individual Evaluation

The results show the values followed by the percentage difference.

Datasets Comparison View

Horreum supports comparing multiple Runs in the Dataset Comparison View. You can filter based on the Filter labels defined in the Schema.

Start by loading the Dataset list, then click the ‘Select for comparison’ button.

Select for comparison

Next the Comparison view is displayed. This is where filters are set.

Dataset selection

Select a number of Runs to be compared using the ‘Add to comparison’ button, then click ‘Compare labels’. The Labels Comparison view is displayed next.

Multiple Dataset comparison

Displayed here are multiple Datasets. Schema Labels can be expanded to display a bar graph representing each Run.

This guide showed two things: how to compare an individual Dataset, and how to compare multiple Datasets.

8 - Transform Runs to Datasets

Horreum stores data in the JSON format. The raw document uploaded to the repository turns into a Run; however, most operations and visualizations work on Datasets. By default there’s a 1-on-1 relation between Runs and Datasets: the default transformation extracts objects annotated with a JSON schema (the $schema property) and puts them into an array - it’s easier to process Datasets internally after that. It is possible to customize this transformation, though, and most importantly - you can create multiple Datasets out of a single Run. This is useful e.g. when your tooling produces a single document that contains results for multiple tests, or for different configurations. With the Run split into more Datasets it’s much easier to display and analyze these results individually.
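The idea behind the default transformation can be sketched as follows: collect every object annotated with a $schema property into a single array. This is an illustration of the concept only, not Horreum’s actual implementation:

```javascript
// Sketch of the default Run-to-Dataset transformation: gather all
// $schema-annotated objects from the document into one array.
function defaultTransform(run) {
  const found = [];
  const walk = node => {
    if (Array.isArray(node)) {
      node.forEach(walk);
    } else if (node && typeof node === 'object') {
      if (node.$schema) found.push(node);
      Object.values(node).forEach(walk);
    }
  };
  walk(run);
  return found;
}

const collected = defaultTransform({
  "$schema": "urn:my-schema:1.0",
  results: { "$schema": "urn:other-schema:1.0", requests: 12345678 }
});
// collected.length → 2 (the top-level object and the nested one)
```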

We assume that you’ve already created a test, uploaded some data and defined the Schema. In this example we use test acme-regression with the basic schema urn:acme-schema:1.0 and uploaded JSON:

{
  "$schema": "urn:acme-schema:1.0",
  "testName": ["Test CPU1", "Test CPU2", "Test CPU3"],
  "throughput": [0.1, 0.2, 0.3]
}

Defining a Transformer

Here we will show how to define the transformation from the raw input into individual Datasets so that each testName and throughput goes to a separate set.

As the structure of the documents for individual tests (stored in Dataset) differs from the input document structure (urn:acme-schema:1.0) we will define a second Schema - let’s use the URI urn:acme-sub-schema:1.0.

Back in the acme-schema we switch to Transformers tab and add a new CpuDatasetTransformer. In this Transformer we select the acme-sub-schema as Target schema URI: the $schema property will be added to each object produced by this Transformer. An alternative would be setting the target schema manually in the Combination function below. Note that it is vital that each Transformer annotates its output with some schema - without that Horreum could not determine the structure of data and process it further.

We add two extractors, testName and throughput, that will get the values from the raw JSON object. These values are further processed in the Combination function. If the function is not defined, the result will be an object with properties matching the extractor names - the same object as the input of this function.

As a result, the transformer will return an array of objects where each element contributes to a different DataSet.
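A sketch of what the Combination function of such a Transformer might look like, setting the target schema manually (the alternative mentioned above); the exact function body is illustrative:

```javascript
// Illustrative Combination function for CpuDatasetTransformer: split the
// parallel arrays into one object per test, each annotated with the
// target schema URI.
const split = ({testName, throughput}) =>
  testName.map((name, i) => ({
    "$schema": "urn:acme-sub-schema:1.0",
    testName: name,
    throughput: throughput[i]
  }));

const datasets = split({
  testName: ["Test CPU1", "Test CPU2", "Test CPU3"],
  throughput: [0.1, 0.2, 0.3]
});
// datasets.length → 3; each element contributes to a separate Dataset
```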

Transformer Setup

Use transformers in the test

Each schema can define multiple Transformers, therefore we have to assign our new transformer to the acme-regression test.

Tests > Transformers

Test Transformers

After saving the test, press Recalculate datasets and then go to the Dataset list to see the results. You will find 3 Datasets, each holding a separate test result.

Datasets

Use labels for the fingerprint

When you’ve split the Run into individual Datasets it’s likely that for the purposes of Change Detection you want to track values from each test individually. Horreum can identify such independent series through a Fingerprint: a set of labels with a unique combination of values.

Go to the acme-sub-schema and define the labels testname and throughput: the former will be used for the Fingerprint, the latter will be consumed in a Change Detection Variable.

Labels

Then switch to Test > Change detection and use those labels. The Fingerprint filter is not necessary here (it would let you exclude some Datasets from Change detection analysis).

Variables

After saving and recalculation you will see the new datapoints in Changes:

Change

In this guide we transformed the Run from batched result arrays into individual Datasets, then extracted data using Labels and used them for Change detection.

9 - Configure Change Detection

Prerequisites: You have already

  1. created a Test
  2. uploaded some data
  3. defined the Schema with some labels.

One of the most important features of Horreum is Change Detection - checking if the new results significantly differ from previous data.

Horreum uses Change Detection Variables to extract and calculate Datapoints - for each dataset and each variable it creates one datapoint.

Horreum compares recent datapoint(s) to older ones and if it spots a significant difference it emits a Change, and sends a notification to subscribed users or teams.

Users can later confirm that there was a change (restarting the history from this point for the purpose of change detection) or dismiss it.

Let’s go to the test and switch to the ‘Change Detection’ tab:

User logged in

We have created one variable, Throughput, using the label throughput. You could use multiple labels and combine them using a JavaScript function, similar to the way you combined JSON Path results when defining the label, but in this case further calculation is not necessary.
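Had a calculation been needed, a combination function over multiple labels could look like this (the label names errors and requests are hypothetical, chosen only to illustrate the shape of such a function):

```javascript
// Hypothetical combination of two labels into a single variable value:
// an error rate in percent. Label names are illustrative only.
const errorRate = value => value.errors / value.requests * 100;

errorRate({ errors: 5, requests: 200 });  // → 2.5
```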

One chart can display series of datapoints for multiple variables: you can use Groups to plot the variables together. The dashboard will contain one chart for each group and for each variable without a group set.

If you scroll down to the ‘Conditions’ section you’ll find the criteria used to detect changes. By default there are two conditions: the first effectively compares the last datapoint vs. the mean of datapoints since the last change; the second compares the mean of a short sliding window to the mean of preceding datapoints, with stricter thresholds. You can change or remove these, or add other criteria such as checking values against fixed thresholds.

The default conditions do not distinguish changes causing increase or decrease in given metric. Even if the effect is positive (e.g. decrease in latency or increase in throughput) users should know about this - the improvement could be triggered by a functional bug as well.

User logged in

In the upper part of this screen you see a selection of Fingerprint labels and a filter. In some cases you might use the same Test for runs with different configurations, have multiple configurations or test-cases within one run (producing multiple Datasets out of one Run), or otherwise cause the Datasets to be not directly comparable, forming several independent series of datasets. To identify those series you should define one or more labels, e.g. a label arch being either x86 or arm and a label cpus with the number of CPUs used in the test. Each combination of values forms a unique ‘fingerprint’ of the Dataset; with 2 values for arch and 3 values for cpus there would be 6 possible combinations. When a new Dataset is added, only the older Datasets with the same fingerprint are used to check against. The filter can be used to limit which datasets are suitable for Change Detection at all - you could for example add a label distinguishing debug vs. production builds and ignore the debug builds.

When we hit the save button Horreum asks if it should recalculate datapoints from all existing runs as we have changed the regression variables definition. Let’s check the debug logs option and proceed with the recalculation.

User logged in

When the recalculation finishes we can click on the ‘Show log’ button in upper right corner to see what was actually executed - this is convenient when you’re debugging a more complicated calculation function. If there’s something wrong with the labels you can click on the Dataset (1/1 in this case) and display label values and calculation log using the buttons above the JSON document.

User logged in

If everything works as planned you can close the log and click the ‘Go to series’ button, or use the ‘Changes’ link in the navigation bar on the top and select the test in the dropdown. If the uploaded data has a timestamp older than 1 month you won’t see it by default; press the Find last datapoints button to scroll back. You will be presented with a chart with a single dot of result data (we have created only one Run/Dataset so far, so there’s only one datapoint).

User logged in

In order to receive notifications, users need to subscribe to the given test. The test owner can manage all subscriptions in the ‘Subscriptions’ tab of the test. Look up the current user using the search bar, select them in the left pane and use the arrow to move them to the right pane.

Subscriptions

You can do the same with teams using the lists in the bottom. All available teams are listed in the left pane (no need for search). When certain users are members of a subscribed team but don’t want to receive notifications they can opt out in the middle pair of lists.

When you save the test and navigate to ‘Tests’, you will notice a bold eye icon on the very left of the row. This signifies that you are subscribed. You can click it and manage your own subscriptions there as well.

Watching

Despite being subscribed to some tests, Horreum does not yet know how it should notify you. Click on your name with the user icon next to the ‘Logout’ button in the top right corner and add a personal notification. Currently only email notification is implemented; use your email address as the data field. You can also switch to the ‘Team notifications’ tab and set a notification for an entire team. After saving, Horreum is ready to let you know when a Change is detected.

Notifications

Developers often run performance tests nightly or weekly on a CI server. After setting up the workflow and verifying that it works you might be content that there are no notifications in your mailbox, while in fact the test got broken and the data is not being uploaded at all. That’s why Horreum implements watchdogs for periodically uploaded test runs.

Go to the ‘Missing data notifications’ tab in the test and click the ‘Add new rule…’ button. If you expect that the test will run nightly (or daily) set the ‘Max staleness’ to a value somewhat greater than 24 hours, e.g. 25 h. We don’t need to filter runs using the Labels and Condition so you might keep them empty - this will be useful e.g. if you have datasets with different fingerprints. The rules are periodically checked and if there is no dataset matching the filter with start timestamp in last 25 hours the subscribed users and teams will receive a notification.

Missing Data

In the ‘General’ tab it is possible to switch off all notifications from this test without unsubscribing or changing rules. This is useful e.g. when you bulk upload historical results and don’t want to spam everyone’s mailbox.

You can continue exploring Horreum in the Actions guide.

10 - Configure Actions

In the Change Detection guide you’ve seen how you can inform users about changes in your project’s performance. Another mechanism lets you notify other services about noteworthy events, e.g. bots commenting on a version control system, updating a status page or triggering another job in the CI: webhooks. Event Actions are produced in the following situations:

  • New Run event
  • New Change Detection event
  • Experiment Result event

Global Actions support one additional event type:

  • New Test event

This allows remote systems or users to rely on automation that can reduce the necessity of manual tasks. Since calling arbitrary services in the intranet is security-sensitive, Horreum administrators have to set up an Action Allow list of URL prefixes (e.g. domains). There are a variety of Webhook implementations provided by Horreum:

  • Generic HTTP POST method request
  • Github Issue Comment
  • Create a new Github Issue
  • Post a message to a Slack channel

As a user with the admin role you can see the ‘Administration’ link in the navigation bar on the top; go there and in the Global Actions tab hit the ‘Add prefix’ button:

Define action prefix

When you save the modal you will see the prefix appearing in the table. Then in the lower half of the screen you can add global actions: whenever a new Test is created, a Run is uploaded or Change is emitted Horreum can trigger an action.

Test owners can set Actions for individual tests, too. Go to the Test configuration, ‘Actions’ tab and press the ‘New Action’ button. This works identically to the global actions.

Define test webhook

Even though non-admins (in the case of global hooks) and non-owners of a given test cannot see the URL, it is advisable not to include any security-sensitive tokens in it.

HTTP Web Hook Action

Horreum can issue an HTTP POST request to a registered URL prefix, using the new JSON-encoded entity as the request body. You can also use a JSON path1 wrapped in ${...}, e.g. ${$.data.foo}, in the URL to refer to the object sent.
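To illustrate the interpolation idea (Horreum evaluates the expression with the full Jayway JSON Path engine; this toy version only handles simple dotted paths like ${$.data.foo}):

```javascript
// Toy sketch of the ${...} URL interpolation. Real Horreum uses Jayway
// JSON Path; this only resolves simple dotted paths.
function interpolate(url, body) {
  return url.replace(/\$\{\$\.([\w.]+)\}/g, (_, path) =>
    path.split('.').reduce((node, key) => node[key], body));
}

interpolate('https://ci.example.com/build/${$.data.foo}', { data: { foo: 123 } });
// → 'https://ci.example.com/build/123'
```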

Define action prefix

GitHub Issue Create Action

Horreum can create a GitHub issue against a named user (or organization) repo on a “change/new” event type. Horreum creates a GitHub formatted markdown representing the event.

You supply the owner, repository, issue title, and a GitHub token for authentication.

Create Issue Dialog

GitHub Issue Comment Action

Horreum can add a comment to an existing GitHub issue on an “experiment_result/new” event, identifying the issue either by owner, repo, and issue number, or by a complete GitHub URI, and authenticating with a GitHub token.

GitHub Issue Comment Dialog

Slack Channel Message Action

Horreum can post a message to a Slack channel on a “test/new”, “run/new”, “change/new”, or “experiment_result/new” event. Horreum creates formatted markdown representing the event.

You supply the Slack channel ID, and a Slack app OAuth token.

Slack Message Dialog


  1. In this case the JSON path is evaluated in the application, not in PostgreSQL; therefore you need to use the Jayway JSON Path syntax - a port of the original JavaScript JSON Path implementation. ↩︎

11 - Re-transform a Run

Re-transforming the Dataset(s) in a Run is useful after any of the following tasks have been completed in Horreum:

  • Changed the selection of Transformers in a Test
  • Changed a Transformer’s definition
  • Changed Schema

While re-transforming isn’t necessary for existing Dataset(s) to continue operating normally, updating them with the latest derived values resolves any issues caused by incomplete derived values. These are the steps for a single Run:

  1. Navigate to the Test Run List
  2. Select the individual Run
  3. Click the blue coloured button with the text “Re-transform datasets”

Runs List

Alternatively, on the Test edit page

  1. Select the Transformers tab
  2. Click the Re-transform datasets button
  3. Accept the prompt to re-transform datasets by clicking the button

Runs List

Re-transform confirmation prompt

12 - Define a Schema

Define a Horreum schema to provide the meta-data to allow Horreum to process Run data

Prerequisites: You have already

  1. created a Test
  2. uploaded some data

In order to extract data from the Run JSON document we need to annotate it with $schema and tell Horreum how to query it.

Make sure that you’re logged in, go to Schemas in the upper bar, and click the ‘New Schema’ button:

User logged in

Fill in the name and URI (urn:my-schema:1.0 if you’re using the uploaded example) and hit the Save button on the bottom.

User logged in

Switch to the ‘Labels’ tab and add two labels: let’s call the first one ci-url, where you extract just a single item from the JSON document using the PostgreSQL JSON Path $.info."ci-url". In that case you can ignore the combination function, as you don’t need to combine or calculate anything. You can also uncheck the ‘filtering’ checkbox, as you likely won’t search your datasets based on the URL.

User logged in

Combination Functions

When a derived metric is necessary, the Combination Function is used to calculate the value. Here we create a separate Label called throughput and extract two elements from the JSON document: requests using $.results.requests and duration using $.results.duration. In this case the input to the calculation function will be an object with two fields: requests and duration. You can compute throughput by adding a combination function (in JavaScript) in lambda syntax: value => value.requests / value.duration. Had we used only one JSON Path, the only argument to the function would be the result of that JSON Path query directly.
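The combination function above is plain JavaScript; a quick way to see its behaviour is to call it with the kind of object Horreum assembles from the two extractors (the numbers here are made up):

```javascript
// The throughput label's combination function in lambda syntax. Horreum
// passes an object with one field per extractor: requests and duration.
const throughput = value => value.requests / value.duration;

// Hypothetical extracted values: 3000 requests over a 60-second run
console.log(throughput({ requests: 3000, duration: 60 })); // → 50
```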

Note that this function is calculated server-side and it is cached in the database. It is automatically recalculated when the label definition changes; the datasets are processed asynchronously, though, so there might be some lag.

User logged in

Finally hit the Save button, go to Tests, and in the table click on the 1 with the folder icon in the Datasets column. (Note that by default a Run produces a single Dataset - more about this in Transform Runs into Datasets.) This will bring you to the listing of Datasets in this test, and this time you can see in the Schema tab that the schema was recognized.

User logged in

However, you still don’t see the labels - for that, click on the ‘Edit test’ button and switch to the ‘Views’ tab. The ‘Default’ View is already created but it’s empty; hit the ‘Add component’ button on the right side twice and fill in the columns, using the labels we’ve created before. We can further customize ‘Throughput’ by adding a “reqs/s” suffix through a render function. Note that this function can return any HTML; it will be included in the page as-is. The rendering happens client-side, and you have the dataset entity (not just the JSON document) as the second argument to the render function. The third argument would be your personal OAuth token.

It’s not necessary to turn URLs into hyperlinks, though; Horreum will do that for you.
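A render function for the ‘Throughput’ column might look like the following sketch (the label value comes first, the dataset entity second, and your OAuth token third; only the value is used here):

```javascript
// Sketch of a View column render function for the Throughput column:
// the returned HTML string is inserted into the page as-is.
const renderThroughput = (value, dataset, token) => `<b>${value}</b> reqs/s`;

console.log(renderThroughput(50)); // → <b>50</b> reqs/s
```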

User logged in

To see the result, click the Save button and then ‘Dataset list’ in the upper part. Now you’ll see the CI link and throughput columns in place. If the labels are not calculated correctly, you can enter the Dataset by clicking on its ID in the Run column and explore the Label values through the button above the JSON document. If there was, e.g., a syntax error in the JavaScript function, you would find an error in the ‘Labels log’ there, too.

User logged in

You might be wondering why you can’t set the JSON path directly in the view; Concepts explains why this separation is useful when the format of your data evolves. Also, a label defined once can be used in multiple places, e.g. for Change Detection.