# EDK2 SCT Results Parser

This is an external parser script for [UEFI SCT] (work in progress).

It is designed to read a `.ekl` results log from a [UEFI SCT] run and a
`.seq` sequence file generated by the [UEFI SCT] configurator.

It generates a Markdown report listing the number of failures and passes, the
tests from the sequence file that were silently dropped, and a list of all
failures and warnings.
[UEFI SCT]: https://uefi.org/testtools
## Dependencies

You need to install the [PyYAML] module for the configuration file to be loaded
correctly. Depending on your Linux distribution, this might be available as the
`python3-yaml` package.
It is also recommended to install the [packaging] library for smooth version
detection. Depending on your Linux distribution, this might be available as the
`python3-packaging` package.
See [Configuration file].

If you want to generate the PDF version of this documentation or to convert
Markdown results to HTML, you need to install [pandoc]. See [Usage] and
[Documentation].

[PyYAML]: https://github.com/yaml/pyyaml
[packaging]: https://github.com/pypa/packaging
[pandoc]: https://pandoc.org

## Quick Start

If you're using this tool to analyze EBBR test results, use the following
command. The parsed report can be found in `result.md`.

``` {.sh}
$ ./parser.py \
      </path/to/sct_results/Overall/Summary.ekl> \
      contrib/v21.07_0.9_BETA/EBBR.seq
INFO apply_rules: Updated 200 tests out of 12206 after applying 124 rules
INFO main: 0 dropped, 1 failure, 93 ignored, 106 known u-boot limitations, 12006 pass, 0 warning
```

(The `EBBR.yaml` configuration file is used to process results by default.)

## Usage

Basic usage to generate a `result.md` is:

``` {.sh}
$ python3 parser.py <log_file.ekl> <seq_file.seq>
```

The output filename can be specified with the `--md` option:

``` {.sh}
$ ./parser.py --md out.md ...
```
Online help is available with the `-h` option.
The generated `result.md` can easily be converted to HTML using [pandoc] with:

``` {.sh}
$ pandoc -oresult.html result.md
```

See [Dependencies].

### Custom search
For a custom key:value search, the two extra arguments *must be provided
together*: `python3 parser.py <file.ekl> <file.seq> <search key> <search value>`.
The program searches for tests matching that constraint, skipping the sequence
file crosscheck, and prints their names, GUIDs, and key:value pairs to the
command line.
You can use the `test_dict` structure below to see the available keys.
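
As an illustration of what such a search does, here is a minimal Python sketch
(the helper name and the sample entries are hypothetical, and the real tool may
match keys and values differently):

``` {.python}
# Hypothetical sketch of a key:value search over parsed test dicts.
def search_tests(tests, key, value):
    """Return the tests whose `key` field contains `value`."""
    return [t for t in tests if value in str(t.get(key, ''))]

# Illustrative entries only; field names follow the `test_dict` structure
# described in the "db structure" section.
tests = [
    {'name': 'some test', 'result': 'PASS', 'guid': 'XXXXXX'},
    {'name': 'another test', 'result': 'FAILURE', 'guid': 'YYYYYY'},
]

for t in search_tests(tests, 'result', 'FAILURE'):
    print(t['name'], t['guid'])
```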
### Sorting data
It is possible to sort the tests data before output using
the `--sort <key1,key2,...>` option.
Sorting the test data helps when comparing results with `diff`.
Example command:
``` {.sh}
$ ./parser.py --sort \
      'group,descr,set guid,test set,sub set,guid,name,log' ...
```
### Filtering data

The `--filter` option lets you specify a python3 expression, which is used as a
filter. The expression is evaluated for each test; if it evaluates to True the
test is kept, otherwise it is omitted. The expression has access to the test
as the dict "x".

Example command, which keeps only the failed tests:

``` {.sh}
$ ./parser.py --filter "x['result'] == 'FAILURE'" ...
```

Filtering takes place after the configuration rules, which are described below.

This filtering mechanism can also be (ab)used to transform tests results.

Example command, which adds a "comment" field (and keeps all the tests):

``` {.sh}
$ ./parser.py \
      --filter "x.update({'comment': 'Transformed'}) or True" ...
```

Example command, which removes filename prefixes in the "log" field (and keeps
all the tests):

``` {.sh}
$ ./parser.py \
      --filter "x.update({'log': re.sub(r'/.*/', '', x['log'])}) \
                or True" ...
```

### Keeping only certain fields

Except for the markdown and the config template formats, it is possible to
specify which test data fields to actually write using the `--fields` option.

Example command, suitable for triaging:

``` {.sh}
$ ./parser.py --fields 'result,sub set,descr,name,log' ...
```

The CSV format and printing to stdout can retain the fields order.

### Collapsing duplicates

It is possible to "collapse" duplicate tests data into a single entry using the
`--uniq` option, similar in principle to the UNIX `uniq -c` command.

This step happens after tests and fields filtering, and it adds a "count" field.

Example command, suitable for triaging:

``` {.sh}
$ ./parser.py \
      --filter "x.update({'log': re.sub(r'/.*/', '', x['log'])}) \
                or x['result'] != 'PASS'" \
      --fields 'count,result,sub set,descr,name,log' --uniq ...
```
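
Conceptually, the collapsing step behaves like `uniq -c`. Here is a minimal
Python sketch (not the parser's actual implementation; the sample entries are
illustrative):

``` {.python}
from collections import Counter

# Illustrative, already-filtered test entries.
tests = [
    {'result': 'FAILURE', 'name': 'some test'},
    {'result': 'FAILURE', 'name': 'some test'},
    {'result': 'WARNING', 'name': 'another test'},
]

# Count identical entries, then re-emit one entry per group with an added
# "count" field.
counts = Counter(tuple(sorted(t.items())) for t in tests)
collapsed = [{'count': n, **dict(items)} for items, n in counts.items()]
```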

### Printing a summary

It is possible to print results to stdout using the `--print` option. This is
most useful when only a few fields are printed.

Example printing command:

``` {.sh}
$ ./parser.py --fields 'result,name' --print ...
```

More condensed summaries can be obtained with further filtering.

Example summary command:

``` {.sh}
$ ./parser.py \
      --filter "x.update({'log': re.sub(r'/.*/', '', x['log'])}) \
                or x['result'] in ['FAILURE', 'WARNING']" \
      --fields 'count,result,name' --uniq --print ...
```

### Re-reading markdown results

It is possible to re-read a previously generated markdown results file with the
`--input-md` option. This can be useful to perform further processing on the
tests.

Example command to read a previously generated markdown:

``` {.sh}
$ ./parser.py --input-md 'result.md' ...
```

* By default an output markdown is still generated, except in the case where the
  input and output markdown have the same filename.
* The generated markdown results do not contain the "passed" tests. They can
  therefore not be re-read.

## Configuration file

By default, the `EBBR.yaml` configuration file is used to process results. It
is intended to help triage failures when testing specifically for [EBBR]
compliance. It describes operations to perform on the tests results, such as
marking tests as false positives or waiving failures.
It is possible to specify another configuration file with the command line
option `--config <filename>`.

You need to install the [PyYAML] module for the configuration file to be loaded
correctly, and installing the [packaging] library is recommended. See
[Dependencies].

[EBBR]: https://github.com/ARM-software/ebbr

### Configuration file format

The configuration file is in [YAML] format. It contains a list of rules:

``` {.yaml}
- rule: name/description (optional)
  criteria:
    key1: value1
    key2: value2
    ...
  update:
    key3: value3
    key4: value4
    ...
- rule...
```
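
As an illustration, a hypothetical rule waiving a false-positive failure could
look like this (all field values are made up for the example):

``` {.yaml}
- rule: waive a known false positive (illustrative example)
  criteria:
    result: FAILURE
    name: some test
  update:
    result: IGNORED
    comments: Not mandated by EBBR.
```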

[YAML]: https://yaml.org

### Rule processing

The rules will be applied to each test one by one in the following manner:

* An attempt is made at matching all the keys/values of the rule's 'criteria'
  dict to the keys/values of the test dict. Matching test and criteria is done
  with a "relaxed" comparison (more below).
  - If there is no match, processing moves on to the next rule.
  - If there is a match:
    1. The test dict is updated with the 'update' dict of the rule.
    2. An 'Updated by' key is set in the test dict to the rule name.
    3. Finally, no more rule is applied to that test.

A test value and a criteria value match if the criteria value string is present
anywhere in the test value string.
For example, the test value "abcde" matches the criteria value "cd".

You can use `--debug` to see more details about which rules are applied to the
tests.
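
The processing described above can be sketched in a few lines of Python (a
simplified illustration, not the parser's actual code):

``` {.python}
def matches(test, criteria):
    # "Relaxed" comparison: each criteria value must appear as a substring
    # of the corresponding test value.
    return all(str(v) in str(test.get(k, '')) for k, v in criteria.items())

def apply_rules(test, rules):
    for rule in rules:
        if matches(test, rule['criteria']):
            test.update(rule['update'])        # 1. update the test dict
            test['Updated by'] = rule['rule']  # 2. record the rule name
            break                              # 3. no more rules for this test
    return test

# Illustrative rule and test entry: "abcde" matches the criteria value "cd".
rules = [{'rule': 'example rule',
          'criteria': {'name': 'cd'},
          'update': {'result': 'IGNORED'}}]
test = {'name': 'abcde', 'result': 'FAILURE'}
apply_rules(test, rules)
```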

### Sample

In the folder `sample`, a `sample.yaml` configuration file is provided as an
example, to use with the `sample.ekl` and `sample.seq` files.

Try it with:

``` {.sh}
$ ./parser.py --config sample/sample.yaml sample/sample.ekl sample/sample.seq
```

### Generating a configuration template

To ease the writing of YAML configurations, there is a `--template` option to
generate a configuration "template" from the results of a run:

``` {.sh}
$ ./parser.py --template template.yaml ...
```

This generated configuration can then be further edited manually.

* Tests with result "PASS" are omitted.
* The following test fields are omitted from the generated rule "criteria":
  "iteration", "start date" and "start time".
* The generated rule "criteria" "log" field is filtered to remove the leading
  path before the C filename.

### EBBR configuration

The `EBBR.yaml` file is the configuration file used by default. It is meant for
[EBBR] testing and can override the result of some tests with the following
ones:

-------------------------------------------------------------------------------
                   Result  Description
-------------------------  ----------------------------------------------------
                `IGNORED`  False-positive test failure, not mandated by [EBBR]
                           and too fine-grained to be removed from the
                           `EBBR.seq` sequence file.

`KNOWN U-BOOT LIMITATION`  Genuine bugs, which must ultimately be fixed.
                           We know about them; they are due to U-Boot FAT
                           filesystem implementation limitations and they do
                           not prevent an OS from booting.
   `KNOWN ACS LIMITATION`  Genuine bugs, which are fixed in a more recent
                           version of the ACS or which must ultimately be fixed
                           and which we know about.

   `KNOWN SCT LIMITATION`  Genuine bugs, which are fixed in a more recent
                           version of the SCT or which must ultimately be fixed
                           and which we know about.
-------------------------------------------------------------------------------

Some of the rules just add a `comments` field with some help text.

Example command to see those comments:

``` {.sh}
$ ./parser.py \
      --filter "x['result'] == 'FAILURE'" \
      --fields 'count,result,name,comments' --uniq --print ...
```

### Database of sequence files

The `seq.db` file contains a list of known sequence files, which makes it
possible to identify the input sequence file.

This database file contains lines describing each known sequence file in turn,
in the following format:

```
sha256 description
```

Everything appearing after a '#' sign is treated as a comment and ignored.

The database filename can be specified with the `--seq-db` option.
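
Parsing such database lines is straightforward; here is a minimal Python
sketch (the sample entry is hypothetical):

``` {.python}
def parse_seq_db(lines):
    """Parse seq.db lines into a {sha256: description} dict."""
    db = {}
    for line in lines:
        line = line.split('#', 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        sha256, description = line.split(maxsplit=1)
        db[sha256] = description
    return db

# Hypothetical database contents.
sample = [
    '# known sequence files',
    'deadbeef' * 8 + ' EBBR.seq (hypothetical entry)',
]
db = parse_seq_db(sample)
```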

## Notes
### Known Issues
* "comment" is currently not implemented, as formatting is not yet consistent;
  it should reflect the comments from the test,
* some [UEFI SCT] tests have shared GUIDs,
* some lines in the `.ekl` file follow different naming conventions,
* some tests in the sequence file are not strongly associated with the test
  spec.

### Documentation

It is possible to convert this `README.md` into `README.pdf` with [pandoc] using
`make doc`. See `make help` and [Dependencies].

### Sanity checks

To perform sanity checks, run `make check`. It runs a number of checkers and
reports errors:

-------------------------------
      Checker  Target
-------------  ----------------
     `flake8`  Python scripts.
   `yamllint`  [YAML] files.
 `shellcheck`  Shell scripts.
-------------------------------

See `make help`.
### db structure
``` {.python}
tests = [
    test_dict,
    test_dict2,
    ...
]
test_dict = {
   "name": "some test",
   "result": "pass/fail",
   "group": "some group",
   "test set": "some set",
   "sub set": "some subset",
   "set guid": "XXXXXX",
   "guid": "XXXXXX",
   "comment": "some comment",
   "log": "full log output"
}


seqs = {
    <guid> : seq_dict
    <guid2> : seq_dict2...
}

seq_dict = {
   "name": "set name",
   "guid": "set guid",
   "Iteration": "some hex/num of how many times to run",
   "rev": "some hex/num",
   "Order": "some hex/num"
}
```

#### Spurious tests

Spurious tests are tests which were run according to the log file but were not
meant to be run according to the sequence file.

We force the "result" fields of those tests to "SPURIOUS".

#### Dropped tests sets

Dropped tests sets are the tests sets which were meant to be run according to
the sequence file but for which no test has been run according to the log
file.

We create artificial tests entries for those dropped tests sets, with the
"result" fields set to "DROPPED". We convert some fields coming from the
sequence file, and auto-generate others:

``` {.python}
dropped_test_dict = {
   "name": "",
   "result": "DROPPED",
   "group": "Unknown",
   "test set": "",
   "sub set": <name from sequence file>,
   "set guid": <guid from sequence file>,
   "revision": <rev from sequence file>,
   "guid": "",
   "log": ""
}
```

#### Skipped tests sets

Skipped tests sets are the tests sets which were considered but had none of
their tests run according to the log file.

We create artificial tests entries for those skipped tests sets, with the
"result" fields set to "SKIPPED".

## Contributed files

A few contributed files are stored in sub-folders under `contrib` for
convenience:

-------------------------------------------------------------------------------
           Sub-folder  Contents
---------------------  --------------------------------------------------------
 `v21.05_0.8_BETA-0/`  EBBR sequence file from [ACS-IR v21.05_0.8_BETA-0].

   `v21.07_0.9_BETA/`  EBBR sequence files from [ACS-IR v21.07_0.9_BETA].

         `v21.09_1.0`  EBBR sequence files from [ACS-IR v21.09_1.0].
-------------------------------------------------------------------------------

[ACS-IR v21.05_0.8_BETA-0]: https://github.com/ARM-software/arm-systemready/tree/main/IR/prebuilt_images/v21.05_0.8_BETA-0
[ACS-IR v21.07_0.9_BETA]: https://github.com/ARM-software/arm-systemready/tree/main/IR/prebuilt_images/v21.07_0.9_BETA
[ACS-IR v21.09_1.0]: https://github.com/ARM-software/arm-systemready/tree/main/IR/prebuilt_images/v21.09_1.0