====== Benchmarking ======

===== Task assignment =====

Your task is to measure the time performance of three different implementations of matrix multiplication and compare them:
  - Standard multiplication
  - Multiplication with the second matrix transposed before the multiplication
  - Multiplication of the matrices in a 1D representation

You will probably need to do the following steps:
  - Download the program from the git repository: ''git clone https://gitlab.fel.cvut.cz/cuchymar/benchmarking.git''
  - Open the source code in an IDE
  - Use the Java Microbenchmark Harness (JMH) to rigorously compare the implementations
  - Do the measurements on each of the implementations of the matrix multiplication according to the methodology described in [1] - summary in {{ :courses:b4m36esw:labs:benchmark.pdf |PDF}}
  - Determine the warm-up period for each implementation
    * Visual inspection of a sequence plot is sufficient
  - You do not have to calculate the number of repetitions
    * Use a sufficiently large number (e.g. 40 iterations and 30 executions/forks)
    * If the resulting confidence interval is too wide, add more repetitions.
  - Measure the time performance of each implementation and compute the average performance and the 95% confidence interval of the measurements (Section 9.3)
  - Compute the comparison ratios of the implementations and the 95% confidence intervals of the ratios (Section 10.1)
  - Upload the report (PDF) with the results, together with your implementation of the benchmark (MatrixMultiplicationBenchmark.java) and the JSON file with the measurements generated by JMH.

Note that the confidence intervals you are supposed to calculate are NOT the confidence intervals shown by JMH.

===== Report structure =====

The report should include the following parts:
  * Machine specification - CPU, memory, OS, Java version (''java -version''), etc.
  * Used JVM parameters (if any)
  * Warm-up period:
    * Brief description of how the warm-up period was determined
    * The results (with graphs) for each implementation
  * Time performance:
    * Brief description of the measurement procedure
    * Results, including graphs with a visualization of the confidence intervals (example: https://i.stack.imgur.com/HmBYh.png)
  * Comparison:
    * Brief description of how the comparison was done
    * Resulting ratios with the confidence intervals
  * Conclusions:
    * Summary of the results and a discussion

Report templates: {{ :courses:b4m36esw:labs:benchmark_template_v2.doc |doc}} {{ :courses:b4m36esw:labs:benchmark_template_v3.zip |LaTeX}}

===== JMH =====

JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM: https://openjdk.java.net/projects/code-tools/jmh/

There are plenty of tutorials, e.g.:
  * http://tutorials.jenkov.com/java-performance/jmh.html
  * https://mkyong.com/java/java-jmh-benchmark-tutorial/
  * https://www.baeldung.com/java-microbenchmark-harness

===== Materials =====

  * [1] https://kar.kent.ac.uk/33611/ (mainly chapters 9.3 and 10.1)
  * [2] Kalibera, T. and Jones, R. E. (2012) Quantifying performance changes with effect size confidence intervals. Technical Report 4–12, University of Kent
  * [3] The [[https://gitlab.fel.cvut.cz/B192_B4M36ESW/lectures/raw/master/esw03-benchmarking.pdf|slides]] from the third lecture
  * [4] Pitfalls presented at the seminar: https://www.oracle.com/technical-resources/articles/java/architect-benchmarking.html
  * [5] https://wiki.openjdk.java.net/display/HotSpot/MicroBenchmarks
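For orientation, the three implementations listed in the task assignment can be sketched as follows. This is only an illustrative sketch: the class and method names are hypothetical, and the actual code in the provided repository may be structured differently.

```java
// Illustrative sketch of the three matrix multiplication variants to be
// benchmarked. Names (MatrixMultiplication, multiplyStandard, ...) are
// made up for this example and need not match the repository code.
public class MatrixMultiplication {

    // 1) Standard triple-loop multiplication: C = A * B.
    public static double[][] multiplyStandard(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double sum = 0;
                for (int l = 0; l < k; l++)
                    sum += a[i][l] * b[l][j];
                c[i][j] = sum;
            }
        return c;
    }

    // 2) Transpose B first, so that the inner loop scans both operands
    //    row-wise (better cache locality than column-wise access of B).
    public static double[][] multiplyTransposed(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] bt = new double[m][k];
        for (int i = 0; i < k; i++)
            for (int j = 0; j < m; j++)
                bt[j][i] = b[i][j];
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double sum = 0;
                for (int l = 0; l < k; l++)
                    sum += a[i][l] * bt[j][l];
                c[i][j] = sum;
            }
        return c;
    }

    // 3) Square matrices stored as flat 1D arrays in row-major order,
    //    avoiding the per-row indirection of double[][].
    public static double[] multiplyFlat(double[] a, double[] b, int n) {
        double[] c = new double[n * n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int l = 0; l < n; l++)
                    sum += a[i * n + l] * b[l * n + j];
                c[i * n + j] = sum;
            }
        return c;
    }
}
```

In the JMH benchmark, each variant would typically be wrapped in its own ''@Benchmark'' method that returns the result matrix (or passes it to a ''Blackhole''), so that the JIT compiler cannot eliminate the computation as dead code.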
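The statistics steps of the assignment (average, 95% confidence interval of the mean, and a confidence interval of a ratio of means) can be sketched as below. This is a simplified illustration, not the formulas you are graded on: it uses the normal quantile 1.96 as an approximation of the 95% two-sided critical value (reasonable for a large number of measurements; for few samples use the Student's t quantile instead), and the ratio interval uses simple error propagation (the delta method) rather than the exact procedure of [1], Sections 9.3 and 10.1, which you should follow in your report.

```java
// Simplified sketch of the required statistics. The constant Z95 and the
// delta-method ratio interval are approximations; the report must follow
// the exact procedure described in [1], Sections 9.3 and 10.1.
public class BenchmarkStats {
    static final double Z95 = 1.96; // approx. 97.5% normal quantile

    public static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }

    // Unbiased sample variance (divides by n - 1).
    public static double variance(double[] x) {
        double m = mean(x), s = 0;
        for (double v : x) s += (v - m) * (v - m);
        return s / (x.length - 1);
    }

    // Approximate 95% CI of the mean: mean +/- z * s / sqrt(n).
    public static double[] confidenceInterval(double[] x) {
        double m = mean(x);
        double half = Z95 * Math.sqrt(variance(x) / x.length);
        return new double[]{m - half, m + half};
    }

    // Approximate 95% CI of the ratio mean(x)/mean(y) via error
    // propagation of the relative standard errors of both means.
    public static double[] ratioInterval(double[] x, double[] y) {
        double mx = mean(x), my = mean(y), r = mx / my;
        double rel = Math.sqrt(variance(x) / (x.length * mx * mx)
                             + variance(y) / (y.length * my * my));
        return new double[]{r * (1 - Z95 * rel), r * (1 + Z95 * rel)};
    }
}
```

A ratio interval that excludes 1 indicates a statistically significant performance difference between the two implementations at the chosen confidence level.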