Importing Data

Importing Data in a Text File

The lnt importreport command imports data from a simple text file format. The command takes a space-separated key/value file and creates an LNT report file, which can then be submitted to an LNT server. Example input file:

foo.execution_time 123
bar.size 456
foo/bar/baz.size 789

Each line has the form test-name.metric value, so in the example above execution_time and size must be valid metrics for the test suite you are submitting to.

Example:

printf "foo.execution_time 25\nbar.score 24.2\nbar/baz.size 110.0\n" > results.txt
lnt importreport --machine=my-machine-name --order=1234 --testsuite=nts results.txt report.json
lnt submit http://mylnt.com/db_default/submitRun report.json

LNT Report File Format

The lnt importreport tool is an easy way to import data into LNT's test format. You can also create LNT report data directly for additional flexibility.

First, make sure you’ve understood the underlying Concepts used by LNT.

{
    "format_version": "2",
    "machine": {
        "name": _String_      // machine name, mandatory
        (_String_: _String_)* // optional extra info
    },
    "run": {
        ("start_time": "%Y-%m-%dT%H:%M:%S",)? // optional, ISO8061 timestamp
        ("end_time": "%Y-%m-%dT%H:%M:%S",)?   // optional, ISO8061 timestamp, can equal start_time if not known.
        (_String_: _String_,)* // optional extra info about the run.
        // At least one of the extra fields is used as ordering and is
        // mandatory. For the 'nts' and 'Compile' schemas this is the
        // 'llvm_project_revision' field.
    },
    "tests": [
        {
            "name": _String_,   // test name mandatory
            (_String_: _Data_)* // List of metrics, _Data_ allows:
                                // number, string or list of numbers
        }+
    ]
}

Any optional fields provided in the run section will be associated with that run and visible in the UI when looking at that run. This allows annotating runs with useful additional information, such as the commit information related to the run. This data will also be visible on charts when viewing historical results. Arbitrary data can be provided, but including too much run-related information can significantly increase the amount of data that must be transferred to display graphs.

A small, concrete example:

{
    "format_version": "2",
    "machine": {
       "name": "LNT-AArch64-A53-O3__clang_DEV__aarch64",
       "hardware": "HAL 9000"
    },
    "run": {
       "end_time": "2017-07-18T11:28:23.991076",
       "start_time": "2017-07-18T11:28:33.00000",
       "llvm_project_revision": "265649",
       "compiler_version": "clang 4.0"
    },
    "tests": [
       {
           "name": "benchmark1",
           "execution_time": [ 0.1056, 0.1055 ],
           "hash": "49333a87d501b0aea2191830b66b5eec"
       },
       {
           "name": "benchmark2",
           "compile_time": 13.12,
           "execution_time": 0.2135,
           "hash": "c321727e7e0dfef279548efdb8ab2ea6"
       }
    ]
}

Given how simple it is to produce your own results and send them to LNT, it is common not to use the LNT client application at all, and instead have a custom script run your tests and submit the data to the LNT server in JSON format.
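
As a minimal sketch of such a driver in Python: the machine name, revision, and timing numbers below are placeholders for whatever your own harness produces; the script simply builds a format_version 2 report as described above and hands it to lnt submit, exactly as in the earlier example.

#!/usr/bin/env python3
# Sketch of a custom submission script: run benchmarks (not shown), build a
# format_version 2 report, and submit it with the LNT client.
import datetime
import json
import subprocess

def make_report(results):
    # results maps test names to execution times in seconds.
    now = datetime.datetime.utcnow().isoformat()
    return {
        "format_version": "2",
        "machine": {"name": "my-machine-name"},  # placeholder machine name
        "run": {
            "start_time": now,
            "end_time": now,
            "llvm_project_revision": "265649",   # placeholder order value
        },
        "tests": [
            {"name": name, "execution_time": seconds}
            for name, seconds in results.items()
        ],
    }

if __name__ == "__main__":
    report = make_report({"benchmark1": 0.1056, "benchmark2": 0.2135})
    with open("report.json", "w") as f:
        json.dump(report, f, indent=4)
    # Same submission step as in the earlier example.
    subprocess.check_call(
        ["lnt", "submit", "http://mylnt.com/db_default/submitRun", "report.json"])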

Default Test Suite (NTS)

The default test-suite schema is called NTS. It was originally designed for nightly test runs of the llvm test-suite; however, it should fit many other benchmark suites as well. The following metrics are supported for a test:

  • execution_time: Execution time in seconds; lower is better.

  • score: Benchmarking score; higher is better.

  • compile_time: Compiling time in seconds; lower is better.

  • hash: A string with the executable hash (usually the md5sum of the stripped binary).

  • mem_bytes: Memory usage in bytes during execution; lower is better.

  • code_size: Code size (usually the size of the text segment) in bytes; lower is better.

  • execution_status: A non-zero value represents an execution failure.

  • compile_status: A non-zero value represents a compilation failure.

  • hash_status: A non-zero value represents a failure computing the executable hash.

The run information is expected to contain the following:

  • llvm_project_revision: The revision or version of the compiler used for the tests. Used to sort runs.
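
As an illustration, the tests portion of an NTS report could combine these metrics as in the sketch below; the test names and values are purely illustrative, with benchmark3 reporting a compile failure via a non-zero status.

"tests": [
    {
        "name": "benchmark3",
        "compile_status": 1,
        "execution_status": 1
    },
    {
        "name": "benchmark4",
        "compile_time": 3.2,
        "execution_time": 0.98,
        "score": 15.4
    }
]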

Custom Test Suites

LNT test-suite schemas define which metrics can be tracked for a test and what extra information is known about runs and machines. You can define your own test-suite schema in a YAML file. The LNT administrator has to place (or symlink) this YAML file into the server's schema directory.

Example:

format_version: '2'
name: my_suite
metrics:
- name: text_size
  bigger_is_better: false
  type: Real
- name: data_size
  bigger_is_better: false
  type: Real
- name: score
  bigger_is_better: true
  type: Real
- name: hash
  type: Hash
run_fields:
- name: llvm_project_revision
  order: true
machine_fields:
- name: hardware
- name: os

  • LNT currently supports the following metric types:

    • Real: 8-byte IEEE floating point values.

    • Hash: String values, limited to 256 characters (SQLite does not enforce the limit).

    • Status: StatusKind enum values (limited to ‘PASS’, ‘FAIL’, ‘XFAIL’ right now).

  • You need to mark at least one of the run fields as order: true so LNT knows how to sort runs.

  • Note that runs are not limited to the fields defined in the schema for the run and machine information. The fields in the schema merely declare which keys get their own column in the database and preferred treatment in the UI. See the example report after this list.
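
To illustrate, a report submitted against the my_suite schema above could look like the following sketch; the machine name, revision, and metric values are purely illustrative.

{
    "format_version": "2",
    "machine": {
        "name": "my-machine-name",
        "hardware": "HAL 9000",
        "os": "Linux"
    },
    "run": {
        "llvm_project_revision": "265649"
    },
    "tests": [
        {
            "name": "benchmark1",
            "text_size": 4096.0,
            "data_size": 512.0,
            "score": 14.5,
            "hash": "49333a87d501b0aea2191830b66b5eec"
        }
    ]
}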