# SCT_Parser

This is an external parser script for [UEFI SCT]. (WIP)

It's designed to read a `.ekl` results log from an [UEFI SCT] run, and a
generated `.seq` from the [UEFI SCT] configurator.

It then generates a Markdown file listing the number of failures and passes, each test from the sequence file that was silently dropped, and a list of all failures and warnings.

[UEFI SCT]: https://uefi.org/testtools

## Usage
To generate a results Markdown file, run: `python3 parser.py <log_file.ekl> <seq_file.seq>`
If you do not provide any command line arguments, it will use `sample.ekl` and `sample.seq`.
The output filename can be specified with `--md <filename>`.
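
For example, using the provided sample files (the output filename is just an
illustration):

``` {.sh}
$ python3 parser.py sample.ekl sample.seq --md report.md
```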

Online help is available with the `-h` option.

### Custom search
For a custom key:value search, the next two arguments *must be included together*. The program will search for tests that meet that constraint, without the crosscheck, and print their names, GUIDs, and key:value pairs to the command line: `python3 parser.py <file.ekl> <file.seq> <search key> <search value>`

You can use the `test_dict` structure described below to see the available keys.
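
For example, a hypothetical search for all tests whose "result" field contains
"FAILURE" could look like this:

``` {.sh}
$ python3 parser.py sample.ekl sample.seq result FAILURE
```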

### Sorting data

It is possible to sort the test data before output using
the `--sort <key1,key2,...>` option.
Sorting the test data helps when comparing results with `diff`.

Example command:

``` {.sh}
$ ./parser.py --sort \
      'group,descr,set guid,test set,sub set,guid,name,log' ...
```

### Filtering data

The `--filter` option lets you specify a python3 expression, which is used as a
filter. The expression is evaluated for each test; if it evaluates to True the
test is kept, otherwise it is omitted. The expression has access to the test
as the dict `x`.

Example command, which keeps only the failed tests:

``` {.sh}
$ ./parser.py --filter "x['result'] == 'FAILURE'" ...
```

Filtering takes place after the configuration rules, which are described below.

This filtering mechanism can also be (ab)used to transform test results.

Example command, which adds a "comment" field (and keeps all the tests):

``` {.sh}
$ ./parser.py \
      --filter "x.update({'comment': 'Transformed'}) or True" ...
```

Example command, which removes filename prefixes in the "log" field (and keeps
all the tests):

``` {.sh}
$ ./parser.py \
      --filter "x.update({'log': re.sub(r'/.*/', '', x['log'])}) \
                or True" ...
```

### Keeping only certain fields

Except for the markdown and the config template formats, it is possible to
specify which test data fields to actually write using the `--fields` option.

Example command, suitable for triaging:

``` {.sh}
$ ./parser.py --fields 'result,sub set,descr,name,log' ...
```

The CSV format and printing to stdout can retain the field order.

### Collapsing duplicates

It is possible to "collapse" duplicate test data entries into a single entry using
the `--uniq` option, similar in principle to the UNIX `uniq -c` command.

This step happens after test and field filtering, and it adds a "count" field.

Example command, suitable for triaging:

``` {.sh}
$ ./parser.py \
      --filter "x.update({'log': re.sub(r'/.*/', '', x['log'])}) \
                or x['result'] != 'PASS'" \
      --fields 'count,result,sub set,descr,name,log' --uniq ...
```

### Printing a summary

It is possible to print results to stdout using the `--print` option. This is
most useful when only a few fields are printed.

Example printing command:

``` {.sh}
$ ./parser.py --fields 'result,name' --print ...
```

More condensed summaries can be obtained with further filtering.

Example summary command:

``` {.sh}
$ ./parser.py \
      --filter "x.update({'log': re.sub(r'/.*/', '', x['log'])}) \
                or x['result'] in ['FAILURE', 'WARNING']" \
      --fields 'count,result,name' --uniq --print ...
```

## Configuration file

It is possible to use a configuration file with the command line option `--config
<filename>`.
This configuration file describes operations to perform on the test results,
such as marking tests as false positives or waiving failures.

Example command for [EBBR]:

``` {.sh}
$ ./parser.py --config EBBR.yaml /path/to/Summary.ekl EBBR.seq ...
```

You need to install the [PyYAML] module for this to work.
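
For example, one way to install it is with pip:

``` {.sh}
$ python3 -m pip install pyyaml
```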

[EBBR]: https://github.com/ARM-software/ebbr
[PyYAML]: https://github.com/yaml/pyyaml

### Configuration file format

The configuration file is in [YAML] format. It contains a list of rules:

``` {.yaml}
- rule: name/description (optional)
  criteria:
    key1: value1
    key2: value2
    ...
  update:
    key3: value3
    key4: value4
    ...
- rule...
```

[YAML]: https://yaml.org

### Rule processing

The rules will be applied to each test one by one in the following manner:

* An attempt is made at matching all the keys/values of the rule's 'criteria'
  dict to the keys/values of the test dict. Matching test and criteria is done
  with a "relaxed" comparison (more below).
  - If there is no match, processing moves on to the next rule.
  - If there is a match:
    1. The test dict is updated with the 'update' dict of the rule.
    2. An 'Updated by' key is set in the test dict to the rule name.
    3. Finally, no further rules are applied to that test.

A test value and a criteria value match if the criteria value string is present
anywhere in the test value string.
For example, the test value "abcde" matches the criteria value "cd".

You can use `--debug` to see more details about which rules are applied to the
tests.
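
For instance, combined with the sample configuration described below:

``` {.sh}
$ ./parser.py --config sample.yaml --debug ...
```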

### Sample

A `sample.yaml` configuration file is provided as example, to use with the
`sample.ekl` and `sample.seq` files.

Try it with:

``` {.sh}
$ ./parser.py --config sample.yaml ...
```

### Generating a configuration template

To ease the writing of YAML configurations, there is a `--template` option to
generate a configuration "template" from the results of a run:

``` {.sh}
$ ./parser.py --template template.yaml ...
```

This generated configuration can then be further edited manually.

* Tests with result "PASS" are omitted.
* The following test fields are omitted from the generated rule "criteria":
  "iteration", "start date" and "start time".
* The generated rule "criteria" "log" field is filtered to remove the leading
  path before the C filename.

## Notes
### Known Issues:
* "comment" is currently not implemented, as formatting is not currently consistent; it should reflect the comments from the test.
* some [UEFI SCT] tests have shared GUIDs.
* some lines in the `.ekl` file follow different naming conventions.
* some tests in the sequence file are not strongly associated with the test spec.

### Documentation

It is possible to convert this `README.md` into `README.pdf` with pandoc using
`make doc`. See `make help`.

### Sanity checks

To perform sanity checks, run `make check`. For the moment this runs `yamllint`,
which will inspect all [YAML] files and report errors. See `make help`.

### db structure:

``` {.python}
tests = [
    test_dict,
    test_dict2...
]

test_dict = {
   "name": "some test",
   "result": "pass/fail",
   "group": "some group",
   "test set": "some set",
   "sub set": "some subset",
   "set guid": "XXXXXX",
   "guid": "XXXXXX",
   "comment": "some comment",
   "log": "full log output"
}


seqs = {
    <guid> : seq_dict
    <guid2> : seq_dict2...
}

seq_dict = {
                "name": "set name",
                "guid": "set guid",
                "Iteration": "some hex/num of how many times to run",
                "rev": "some hex/numb",
                "Order": "some hex/num"
}
```

#### Spurious tests

Spurious tests are tests which were run according to the log file but were not
meant to be run according to the sequence file.

We force the "result" fields of those tests to "SPURIOUS".

#### Dropped test sets

Dropped test sets are the test sets which were meant to be run according to the
sequence file but for which no test has been run according to the log file.

We create artificial test entries for those dropped test sets, with the
"result" fields set to "DROPPED". We convert some fields coming from the
sequence file, and auto-generate others:

``` {.python}
dropped_test_dict = {
   "name": "",
   "result": "DROPPED",
   "group": "Unknown",
   "test set": "",
   "sub set": <name from sequence file>,
   "set guid": <guid from sequence file>,
   "revision": <rev from sequence file>,
   "guid": "",
   "log": ""
}
```

#### Skipped test sets

Skipped test sets are the test sets which were considered but had none of their
tests run according to the log file.

We create artificial test entries for those skipped test sets, with the
"result" fields set to "SKIPPED".