Built-in targets¶
SPEC¶
class infra.targets.SPEC2006(source_type, source, patches=[], toolsets=[], nothp=True, force_cpu=0, default_benchmarks=['all_c', 'all_cpp'])¶

The SPEC-CPU2006 benchmarking suite.
Since SPEC may not be redistributed, you need to provide your own copy in source. We support the following types for source_type:

- isofile: ISO file to mount (requires fuseiso to be installed)
- mounted: mounted/extracted ISO directory
- installed: pre-installed SPEC directory in another project
- tarfile: compressed tarfile with ISO contents
- git: git repo containing extracted ISO
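As a minimal sketch (assuming the usual infra.Setup entry point; the tarball path is a placeholder for your own copy), a setup script could register this target as follows:

    import infra

    setup = infra.Setup(__file__)

    # Register SPEC2006 from a local tarball; the path is a placeholder.
    setup.add_target(infra.targets.SPEC2006(
        source_type='tarfile',
        source='/path/to/cpu2006.tar.gz',
    ))

    # With this script saved as e.g. setup.py, the build/run/report commands
    # discussed below are invoked as: python3 setup.py <command> spec2006 ...
    if __name__ == '__main__':
        setup.main()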
The --spec2006-benchmarks command-line argument is added for the build and run commands. It supports full individual benchmark names such as '400.perlbench', and the following benchmark sets defined by SPEC:

- all_c: C benchmarks
- all_cpp: C++ benchmarks
- all_fortran: Fortran benchmarks
- all_mixed: C/Fortran benchmarks
- int: integer benchmarks
- fp: floating-point benchmarks
Multiple sets and individual benchmarks can be specified; duplicates are removed and the list is sorted automatically. When unspecified, the benchmarks default to all_c all_cpp.

The following options are added only for the run command:

- --benchmarks: alias for --spec2006-benchmarks
- --test: run the test workload
- --measuremem: use an alternative runscript that bypasses runspec to measure memory usage
- --runspec-args: passed directly to runspec
Parallel builds and runs using the --parallel option are supported. Command output will end up in the results/ directory in that case. Note that even though the parallel job may finish successfully, you still need to check the output for errors manually using the report command.

The --iterations option of the run command is translated into the number of nodes per job when --parallel is specified, and to --runspec-args -n <iterations> otherwise.

The report command analyzes logs in the results directory and reports the aggregated data in a table. It receives a list of run directories (results/run.X) as positional arguments to traverse for log files. By default, the columns list runtimes, memory usages, overheads, standard deviations and iterations. The computed values are appended to each log file with the prefix [setup-report], and are read from there by subsequent report commands if available (see also RusageCounters). This makes log files portable to different machines without copying over the entire SPEC directory. The script depends on a couple of Python libraries for its output: pip3 install [--user] terminaltables termcolor
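As a hedged sketch of inspecting these cached values, the snippet below simply prints any line carrying the [setup-report] prefix from a given log file (their exact key/value layout is not specified here, so they are shown verbatim):

    import sys

    # Print the '[setup-report]' lines that the report command appends to a
    # log file. Assumes the prefix appears at the start of a line.
    with open(sys.argv[1]) as f:
        for line in f:
            if line.startswith('[setup-report]'):
                print(line.rstrip())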
Some useful command-line options change what is displayed by report:

TODO: move some of these from below to general report command docs

- --fields changes which data fields are printed. A column is added for each instance for each field. The options are autocompleted and default to status, overheads, runtime, memory usage, stddevs and iterations. Custom counter fields from runtime libraries can also be specified (but are not autocompleted).
- --baseline changes the baseline for overhead computation. By default, the script looks for baseline, clang-lto or clang.
- --csv/--tsv change the output from human-readable to comma/tab-separated for script processing. E.g., use in conjunction with cut to obtain a column of values.
- --nodes adds a (possibly very large) table of runtimes of individual nodes. This is useful for identifying bad nodes on the DAS-5 when some standard deviations are high while using --parallel prun.
- --ascii disables UTF-8 output so that output can be saved to a log file or piped to less.
Finally, you may specify a list of patches to apply before building (see the sketch after this list). These may be paths to .patch files that will be applied with patch -p1, or choices from the following built-in patches:

- dealII-stddef fixes an error in dealII compilation on recent compilers when ptrdiff_t is used without including stddef.h (you basically always want this).
- asan applies the AddressSanitizer patch, needed to make -fsanitize=address work on LLVM.
- gcc-init-ptr zero-initializes a pointer on the stack so that type analysis at LTO time does not get confused.
- omnetpp-invalid-ptrcheck fixes a code copy-paste bug in an edge case of a switch statement, where a pointer from a union is used while it is initialized as an int.
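For illustration, a hedged sketch that combines built-in patches with a local patch file (the mount point and .patch path are placeholders):

    import infra

    # 'dealII-stddef' and 'asan' are built-in patches listed above; the .patch
    # path is a hypothetical local file that would be applied with patch -p1.
    spec2006 = infra.targets.SPEC2006(
        source_type='mounted',
        source='/mnt/spec2006',
        patches=['dealII-stddef', 'asan', '/path/to/my-local-fix.patch'],
    )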
Name: spec2006

Parameters:
- source_type (str) – see above
- source (str) – where to install SPEC from
- patches (typing.List[str]) – patches to apply after installing
- toolsets (typing.List[str]) – approved toolsets to add additionally
- nothp (bool) – run without transparent huge pages (they tend to introduce noise in performance measurements), implies Nothp dependency if True
- force_cpu (int) – bind runspec to this CPU core (-1 to disable)
- default_benchmarks (typing.List[str]) – specify benchmarks run by default
custom_allocs_flags: list = ['-allocs-custom-funcs=Perl_safesysmalloc:malloc:0.Perl_safesyscalloc:calloc:1:0.Perl_safesysrealloc:realloc:1.Perl_safesysfree:free:-1.ggc_alloc:malloc:0.alloc_anon:malloc:1.xmalloc:malloc:0.xcalloc:calloc:1:0.xrealloc:realloc:1']¶

Command-line arguments for the built-in -allocs pass; registers custom allocation function wrappers in SPEC benchmarks.
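As a loosely hedged sketch (the entry format name:kind:argument-indices, joined by dots, is only inferred from the default value above), one could append a wrapper for a hypothetical allocator my_pool_alloc whose size is argument 0:

    import infra

    # Hypothetical extension of the -allocs custom-function list; verify the
    # expected entry format against the pass documentation before relying on it.
    infra.targets.SPEC2006.custom_allocs_flags[0] += '.my_pool_alloc:malloc:0'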
class infra.targets.SPEC2017(source_type, source, patches=[], nothp=True, force_cpu=0, default_benchmarks=['intspeed_pure_c', 'intspeed_pure_cpp', 'fpspeed_pure_c'])¶

The SPEC-CPU2017 benchmarking suite.
Since SPEC may not be redistributed, you need to provide your own copy in source. We support the following types for source_type:

- isofile: ISO file to mount (requires fuseiso to be installed)
- mounted: mounted/extracted ISO directory
- installed: pre-installed SPEC directory in another project
- tarfile: compressed tarfile with ISO contents
- git: git repo containing extracted ISO
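A minimal sketch of constructing this target from an ISO file (the path is a placeholder, and the benchmark sets shown are a subset of the defaults in the signature above):

    import infra

    # Use an ISO file as the source (requires fuseiso) and restrict the default
    # benchmark selection to the pure-C and pure-C++ intspeed sets.
    spec2017 = infra.targets.SPEC2017(
        source_type='isofile',
        source='/path/to/cpu2017.iso',
        default_benchmarks=['intspeed_pure_c', 'intspeed_pure_cpp'],
    )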
The following options are added only for the run command:

- --benchmarks: alias for --spec2017-benchmarks
- --test: run the test workload
- --measuremem: use an alternative runscript that bypasses runspec to measure memory usage
- --runspec-args: passed directly to runspec
Parallel builds and runs using the --parallel option are supported. Command output will end up in the results/ directory in that case. Note that even though the parallel job may finish successfully, you still need to check the output for errors manually using the report command.

The --iterations option of the run command is translated into the number of nodes per job when --parallel is specified, and to --runspec-args -n <iterations> otherwise.

The report command analyzes logs in the results directory and reports the aggregated data in a table. It receives a list of run directories (results/run.X) as positional arguments to traverse for log files. By default, the columns list runtimes, memory usages, overheads, standard deviations and iterations. The computed values are appended to each log file with the prefix [setup-report], and are read from there by subsequent report commands if available (see also RusageCounters). This makes log files portable to different machines without copying over the entire SPEC directory. The script depends on a couple of Python libraries for its output: pip3 install [--user] terminaltables termcolor
Some useful command-line options change what is displayed by report:

TODO: move some of these from below to general report command docs

- --fields changes which data fields are printed. A column is added for each instance for each field. The options are autocompleted and default to status, overheads, runtime, memory usage, stddevs and iterations. Custom counter fields from runtime libraries can also be specified (but are not autocompleted).
- --baseline changes the baseline for overhead computation. By default, the script looks for baseline, clang-lto or clang.
- --csv/--tsv change the output from human-readable to comma/tab-separated for script processing. E.g., use in conjunction with cut to obtain a column of values.
- --nodes adds a (possibly very large) table of runtimes of individual nodes. This is useful for identifying bad nodes on the DAS-5 when some standard deviations are high while using --parallel prun.
- --ascii disables UTF-8 output so that output can be saved to a log file or piped to less.
Name: spec2017

Parameters:
- source_type (str) – see above
- source (str) – where to install SPEC from
- patches (typing.List[str]) – patches to apply after installing
- nothp (bool) – run without transparent huge pages (they tend to introduce noise in performance measurements), implies Nothp dependency if True
- force_cpu (int) – bind runspec to this CPU core (-1 to disable)
- default_benchmarks (typing.List[str]) – specify benchmarks run by default
Web servers¶
class infra.targets.Nginx(version, build_flags=[])¶

The Nginx web server.

Name: nginx

Parameters:
- version – which (open source) version to download
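A minimal sketch of constructing this target (the version string is just an example of an open-source Nginx release):

    import infra

    # Download and build a specific open-source Nginx release.
    nginx = infra.targets.Nginx(version='1.18.0')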
Juliet¶
class infra.targets.Juliet(mitigation_return_code=None)¶

The Juliet Test Suite for C/C++.
This test suite contains a large number of programs, categorized by vulnerability type (CWE). Most programs include both a “good” and a “bad” version, where the good version should succeed (no bug) whereas the bad version should be detected by the applied mitigation. In other words, the good version tests for false positives, and the bad version for false negatives.
The --cwe command-line argument specifies which CWEs to build and/or run, and can be a CWE-ID (416 or CWE416) or an alias (e.g., uaf). A mix of CWE-IDs and aliases is allowed.

The Juliet suite contains multiple flow variants per test case. These are different control flows in the program that in the end all arrive at the same bug. This is only relevant for static analysis tools; for run-time mitigations they are unsuitable. In particular, some flow variants (e.g., 12) do not (always) trigger or reach the bug at runtime. Therefore, by default only flow variant 01 is used, but others can be specified with the --variants command-line argument.

By default, a good test is counted as successful (true negative) if its return code is 0, and a bad test is counted as successful (true positive) if its return code is non-zero. The latter behavior can be fine-tuned via the mitigation_return_code argument to this class, which can be set to match the return code of the mitigation.

Each test receives a fixed string on stdin. Tests that are based on sockets are currently not supported, as this would require running two tests at the same time (a client and a server).
Tests can be built in parallel (using --parallel=proc), since this process might take a while when multiple CWEs or variants are selected. Running tests in parallel is not supported (yet).

Name: juliet

Parameters:
- mitigation_return_code (int or None) – Return code the mitigation exits with, to distinguish true positives for the bad version of test cases. If None, any non-zero value is considered a success.
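A minimal sketch of constructing this target (the exit code is a hypothetical value; use whatever code your mitigation actually exits with on detection):

    import infra

    # Suppose the mitigation under test aborts with exit code 57 on detection;
    # bad-version tests then only count as true positives when they exit with
    # exactly that code.
    juliet = infra.targets.Juliet(mitigation_return_code=57)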