Draft: benchmarks: Integrate schbench benchmark into Fastpath
Add the files necessary for integrating schbench workloads into the Fastpath benchmarks. The schbench benchmark evaluates scheduler performance by measuring wakeup latencies, request latencies, and throughput under varying thread and runtime configurations.
Update the benchmarks library with a base-scaling.yaml plan that runs basic tests under schbench using minimal schbench configurations. The plan defines the suite, name, type, and container image path of the schbench suite. It supports parameters such as the thread count range (start, end, multiplicative step), runtime, and sleeptime, letting the user set up the schbench basic test cases as required.
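A minimal sketch of what such a plan could look like follows; the field names and schema are assumptions about the Fastpath plan format, not the actual file:

    # base-scaling.yaml -- illustrative sketch; schema is assumed
    suite: schbench
    name: base-scaling
    type: container
    image: <registry>/fastpath/schbench:latest
    params:
      thread_start: 1      # first thread count
      thread_end: 16       # upper bound (defaults to 2 * cpu_count if unset)
      thread_step: 4       # multiplicative step
      runtime: 10          # per-case runtime
      sleeptime: 1000      # sleeptime in usec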
Add a Dockerfile that creates a containerized environment for running the schbench benchmark suites. The Dockerfile installs the build dependencies, clones the schbench repository, and sets up the exec.py script as the container entry point.
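A rough sketch of such a Dockerfile is shown below; the base image, package list, and clone URL (schbench upstream on kernel.org) are assumptions, not the actual file:

    # Illustrative sketch; details are assumptions, not the actual Dockerfile.
    FROM ubuntu:24.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            build-essential git python3 ca-certificates && \
        rm -rf /var/lib/apt/lists/*
    RUN git clone https://git.kernel.org/pub/scm/linux/kernel/git/mason/schbench.git /schbench && \
        make -C /schbench
    COPY exec.py /exec.py
    ENTRYPOINT ["python3", "/exec.py"]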
Implement an exec.py script to handle benchmark execution and result processing. First, generate schbench test cases with a minimal configuration, setting only the thread count, runtime, and sleeptime. For this base-scaling suite, the message and worker thread counts are set to the same value in each test case. By default, test cases are generated for thread counts scaling from 1 up to 2 * cpu_count in multiplicative steps of 4, with runtime set to 10 ms and sleeptime set to 1000 us. The script can also accept these parameters from the plan and generate test cases over the thread scaling range the user provides. Next, parse the key metrics, wakeup latency (P99), request latency (P99), and average requests per second, from the output files. Finally, process and save each test case result by splitting it into the Fastpath result classes corresponding to these key metrics.
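To illustrate the generation and parsing steps, a minimal Python sketch follows. The function names, the schbench flag spellings (-m, -t, -r, -s), and the output lines matched by the regular expressions are assumptions about the schbench version in use, not the actual exec.py:

    import os
    import re
    import subprocess

    def generate_cases(start=1, end=None, step=4, runtime=10, sleeptime=1000):
        # Scale the thread count multiplicatively from `start` to `end`,
        # defaulting to 1 .. 2 * cpu_count in steps of `step`.
        if end is None:
            end = 2 * (os.cpu_count() or 1)
        threads = start
        while threads <= end:
            yield {"threads": threads, "runtime": runtime, "sleeptime": sleeptime}
            threads *= step

    P99_RE = re.compile(r"99\.0th:\s*(\d+)")
    RPS_RE = re.compile(r"average rps:\s*([\d.]+)")

    def parse_metrics(output):
        # Walk the output once, remembering which latency section we are
        # in, and keep the first 99.0th percentile seen in each section.
        metrics = {}
        section = None
        for line in output.splitlines():
            if "Wakeup Latencies" in line:
                section = "wakeup_p99"
            elif "Request Latencies" in line:
                section = "request_p99"
            m = P99_RE.search(line)
            if m and section and section not in metrics:
                metrics[section] = int(m.group(1))
            m = RPS_RE.search(line)
            if m:
                metrics["avg_rps"] = float(m.group(1))
        return metrics

    def run_case(case):
        # Message (-m) and worker (-t) thread counts use the same value,
        # as described above for the base-scaling suite.
        cmd = ["schbench",
               "-m", str(case["threads"]), "-t", str(case["threads"]),
               "-r", str(case["runtime"]), "-s", str(case["sleeptime"])]
        proc = subprocess.run(cmd, capture_output=True, text=True)
        # Capture both streams; which one schbench prints to varies by version.
        return parse_metrics(proc.stdout + proc.stderr)

Each metric returned by parse_metrics would then be wrapped in the matching Fastpath result class before being saved.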
Signed-off-by: Aishwarya Rambhadran <aishwarya.rambhadran@arm.com>