====== Benchmarking ======

===== Task assignment =====

Your task is to measure the time performance of three different implementations of matrix multiplication and compare them:
  - Standard multiplication
  - Multiplication with the second matrix transposed before the multiplication
  - Multiplication of the matrices in 1D representation

You will probably need to take the following steps (sketches of the key pieces are given at the end of this section):
  - Download the program from the git repository: ''git clone https://gitlab.fel.cvut.cz/esw/benchmarking.git''
  - Open the source code in an IDE
  - Use the Java Microbenchmark Harness (JMH) to rigorously compare the implementations
  - Do the measurements on each of the implementations of the matrix multiplication according to the methodology described in [1] - summary in {{ :courses:b4m36esw:labs:benchmark.pdf |PDF}}
  - Determine the warm-up period for each implementation
    * Visual inspection of a sequence plot is sufficient
  - You do not have to calculate the number of repetitions
    * Use sufficiently large numbers (e.g. 40 iterations and 30 executions/forks)
    * If the resulting confidence interval is too wide, add more repetitions
  - Measure the time performance of each implementation and compute the average performance and the 95% confidence interval of the measurements (Section 9.3 of [1]; sketch below)
  - Compute the comparison ratios of the implementations and the 95% confidence intervals of the ratios (Section 10.1 of [1]; sketch below)
  - Upload the report (PDF) together with your benchmark implementation (MatrixMultiplicationBenchmark.java), the JSON file with the measurements generated by JMH (it has to be named ''measurements.json''; do not modify this file), and a file with the results (named ''results.json''). The results file has to follow the format below (the names have to be the ones used in the measurements file, without the packages):

<code json>
{
  "performance": [
    { "impl_name": "measureMultiply",
      "average": 274.45720624999996,
      "cf_lb": 271.6351655689061,
      "cf_ub": 273.27924693109395 },
    { "impl_name": "measureMultiply1D", ... },
    { "impl_name": "measureMultiplyTrans", ... }
  ],
  "comparisons": [
    { "impl_1_name": "measureMultiply",
      "impl_2_name": "measureMultiply1D",
      "ratio": 0.9404346124708851,
      "cf_lb": 0.9374657053490532,
      "cf_ub": 0.943403519592717 },
    { "impl_1_name": "measureMultiply", "impl_2_name": "measureMultiplyTrans", ... },
    { "impl_1_name": "measureMultiply1D", "impl_2_name": "measureMultiply", ... },
    { "impl_1_name": "measureMultiply1D", "impl_2_name": "measureMultiplyTrans", ... },
    { "impl_1_name": "measureMultiplyTrans", "impl_2_name": "measureMultiply", ... },
    { "impl_1_name": "measureMultiplyTrans", "impl_2_name": "measureMultiply1D", ... }
  ]
}
</code>

  - Check the automatically evaluated results in the upload system. The system shows OK if the uploaded results differ by less than 0.05% from the values calculated by the automatic evaluation. The automatic evaluation serves only as a check that the results you calculated are correct; the report will be evaluated manually.

The confidence intervals you are supposed to calculate are **not** the confidence intervals shown by JMH.
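For orientation, below is a minimal sketch of what ''MatrixMultiplicationBenchmark.java'' could look like. The three ''@Benchmark'' method names are the ones required by the results format above; the matrix representation, size, and setup code are assumptions - use the classes provided in the repository. The ''main'' method makes JMH write ''measurements.json'' in the required JSON format.

<code java>
// A minimal JMH sketch; the repository's own matrix classes and sizes may differ.
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.results.format.ResultFormatType;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@Warmup(iterations = 10)        // refine after inspecting the sequence plot
@Measurement(iterations = 40)   // e.g. 40 iterations ...
@Fork(30)                       // ... and 30 forks, as suggested above
public class MatrixMultiplicationBenchmark {

    private static final int N = 512;   // assumed size, adjust to the assignment
    double[][] a, b;

    @Setup
    public void setup() {
        Random rnd = new Random(42);
        a = new double[N][N];
        b = new double[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = rnd.nextDouble();
                b[i][j] = rnd.nextDouble();
            }
    }

    @Benchmark
    public double[][] measureMultiply() {
        double[][] c = new double[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double s = 0;
                for (int k = 0; k < N; k++) s += a[i][k] * b[k][j];
                c[i][j] = s;
            }
        return c;   // returning the result prevents dead-code elimination
    }

    // measureMultiplyTrans and measureMultiply1D are analogous @Benchmark methods.

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(MatrixMultiplicationBenchmark.class.getSimpleName())
                .resultFormat(ResultFormatType.JSON)
                .result("measurements.json")   // required file name
                .build();
        new Runner(opt).run();
    }
}
</code>

Running ''main'' from the IDE is enough; alternatively, build the standard JMH self-contained JAR and pass ''-rf json -rff measurements.json'' on the command line.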
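The per-iteration scores needed for the statistics can be read back from ''measurements.json''. The sketch below assumes Gson on the classpath and the usual JMH JSON layout, in which ''primaryMetric.rawData'' holds one array of iteration scores per fork - verify both assumptions against your actual file. It also strips the package prefix from the benchmark names, as required by ''results.json''.

<code java>
// Sketch: collect all iteration scores per benchmark from measurements.json.
import java.io.FileReader;
import java.util.*;
import com.google.gson.*;

public class LoadMeasurements {
    public static Map<String, double[]> load(String path) throws Exception {
        Map<String, double[]> scores = new HashMap<>();
        JsonArray runs = JsonParser.parseReader(new FileReader(path)).getAsJsonArray();
        for (JsonElement e : runs) {
            JsonObject run = e.getAsJsonObject();
            String fqName = run.get("benchmark").getAsString();
            String name = fqName.substring(fqName.lastIndexOf('.') + 1); // drop packages
            List<Double> all = new ArrayList<>();
            JsonArray rawData = run.getAsJsonObject("primaryMetric").getAsJsonArray("rawData");
            for (JsonElement fork : rawData)                // one inner array per fork
                for (JsonElement score : fork.getAsJsonArray())
                    all.add(score.getAsDouble());
            scores.put(name, all.stream().mapToDouble(Double::doubleValue).toArray());
        }
        return scores;
    }
}
</code>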
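For the average performance and its 95% confidence interval (Section 9.3 of [1]), one simplified approach is to treat the per-fork means as independent samples and build a Student-t interval over them; the full methodology in [1] models the iteration/fork hierarchy explicitly, so check the sketch against the formulas there. Apache Commons Math is assumed for the t quantile.

<code java>
// Mean and confidence interval over a set of (e.g. per-fork mean) samples;
// a simplification of the multi-level scheme in [1], Section 9.3.
import org.apache.commons.math3.distribution.TDistribution;

public class MeanCI {
    /** Returns {average, cf_lb, cf_ub}, e.g. for confidence = 0.95. */
    public static double[] meanCI(double[] samples, double confidence) {
        int n = samples.length;
        double mean = 0;
        for (double s : samples) mean += s;
        mean /= n;
        double var = 0;
        for (double s : samples) var += (s - mean) * (s - mean);
        var /= (n - 1);                              // sample variance
        double t = new TDistribution(n - 1)
                .inverseCumulativeProbability(1 - (1 - confidence) / 2);
        double half = t * Math.sqrt(var / n);        // half-width of the interval
        return new double[]{mean, mean - half, mean + half};
    }
}
</code>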
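For the comparison ratios (Section 10.1 of [1]), an interval for a ratio of two means can be built with a Fieller-style construction. The sketch below is a simplified version for independent samples with ''n + m - 2'' degrees of freedom; it is not claimed to be the exact procedure of [1], so compare it with the formulas there.

<code java>
// Fieller-style confidence interval for mean(x)/mean(y); a sketch, again
// assuming Apache Commons Math for the t quantile.
import org.apache.commons.math3.distribution.TDistribution;

public class RatioCI {
    /** Returns {ratio, cf_lb, cf_ub} for mean(x)/mean(y). */
    public static double[] ratioCI(double[] x, double[] y, double confidence) {
        int n = x.length, m = y.length;
        double mx = mean(x), my = mean(y);
        double vx = var(x, mx), vy = var(y, my);
        double t = new TDistribution(n + m - 2)
                .inverseCumulativeProbability(1 - (1 - confidence) / 2);
        double t2 = t * t;
        double d = my * my - t2 * vy / m;   // must be > 0 for a finite interval
        double disc = Math.sqrt(mx * mx * my * my - d * (mx * mx - t2 * vx / n));
        return new double[]{mx / my, (mx * my - disc) / d, (mx * my + disc) / d};
    }

    private static double mean(double[] a) {
        double s = 0;
        for (double v : a) s += v;
        return s / a.length;
    }

    private static double var(double[] a, double mean) {
        double s = 0;
        for (double v : a) s += (v - mean) * (v - mean);
        return s / (a.length - 1);
    }
}
</code>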
===== Report structure =====

The report should include the following parts:
  * Machine specification - CPU, memory, OS, Java version (''java -version''), etc.
  * Used JVM parameters (if any)
  * Warm-up period:
    * Brief description of how the warm-up period was determined
    * The results (with graphs) for each implementation
  * Time performance:
    * Brief description of the measurement procedure
    * Results, including graphs with a visualization of the confidence intervals (example: https://i.stack.imgur.com/HmBYh.png)
  * Comparison:
    * Brief description of how the comparison was made
    * Resulting ratios with the confidence intervals
  * Conclusions:
    * Summary of the results and a discussion

Report templates: {{ :courses:b4m36esw:labs:benchmark_template_v2.doc |doc}} {{ :courses:b4m36esw:labs:benchmark_template_v3.zip |LaTeX}}

===== JMH =====

JMH is a Java harness for building, running, and analyzing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM: https://github.com/openjdk/jmh

There are plenty of tutorials, for example:
  * http://tutorials.jenkov.com/java-performance/jmh.html
  * https://mkyong.com/java/java-jmh-benchmark-tutorial/
  * https://www.baeldung.com/java-microbenchmark-harness

===== Materials =====

  * [1] https://kar.kent.ac.uk/33611/ (mainly Sections 9.3 and 10.1)
  * [2] Kalibera, T. and Jones, R. E. (2012) Quantifying performance changes with effect size confidence intervals. Technical Report 4-12, University of Kent
  * [3] The [[https://gitlab.fel.cvut.cz/esw/lectures/raw/master/esw03-benchmarking.pdf|slides]] from the second lecture
  * [4] Pitfalls presented at the seminar: https://www.oracle.com/technical-resources/articles/java/architect-benchmarking.html
  * [5] https://wiki.openjdk.java.net/display/HotSpot/MicroBenchmarks