Benchmarking

Task assignment

Your task is to measure the time performance of three different implementations of matrix multiplication and compare them (illustrative sketches of the three variants follow the list):

  1. Standard multiplication
  2. Multiplication with the second matrix transposed before the multiplication
  3. Multiplication of the matrices in 1D representation
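
For illustration, the three variants might look roughly like the following sketch; these are hypothetical implementations (square n-by-n matrices assumed), and the code to benchmark is the one in the cloned repository:

  // 1. Standard multiplication.
  static double[][] multiply(double[][] a, double[][] b) {
      int n = a.length;
      double[][] c = new double[n][n];
      for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++)
              for (int k = 0; k < n; k++)
                  c[i][j] += a[i][k] * b[k][j];
      return c;
  }

  // 2. Transpose b first, so the innermost loop scans both operands
  //    row-wise (sequentially in memory), improving cache locality.
  static double[][] multiplyTransposed(double[][] a, double[][] b) {
      int n = a.length;
      double[][] bt = new double[n][n];
      for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++)
              bt[j][i] = b[i][j];
      double[][] c = new double[n][n];
      for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++) {
              double sum = 0;
              for (int k = 0; k < n; k++)
                  sum += a[i][k] * bt[j][k];
              c[i][j] = sum;
          }
      return c;
  }

  // 3. Matrices stored as flat 1D arrays in row-major order, avoiding
  //    the pointer indirection of Java's nested arrays.
  static double[] multiply1D(double[] a, double[] b, int n) {
      double[] c = new double[n * n];
      for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++) {
              double sum = 0;
              for (int k = 0; k < n; k++)
                  sum += a[i * n + k] * b[k * n + j];
              c[i * n + j] = sum;
          }
      return c;
  }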

You probably need to do the following steps:

  1. Download the program from the git repository: git clone https://gitlab.fel.cvut.cz/cuchymar/benchmarking.git
  2. Open the source code in IDE
  3. Use Java Microbenchmark Harness (JMH) to rigorously compare the implementations
  4. Do the measurements on each of the implementations of the matrix multiplication according to the methodology described in [1] (summary in PDF)
    1. Determine the warm-up period for each implementation
      • Visual inspection of a sequence plot is sufficient
    2. You do not have to calculate the number of repetitions
      • Use a sufficiently large number (e.g. 40 iterations and 30 executions/forks)
      • If the resulting confidence interval is too wide, you need to add more repetitions.
    3. Measure the time performance of each implementation and compute the average performance and the 95% confidence interval of the measurements (Section 9.3)
    4. Compute the comparison ratios of the implementations and the 95% confidence intervals of the ratios (Section 10.1); a computational sketch of both steps follows this list
  5. Upload the report (PDF) together with your implementation of the benchmark (MatrixMultiplicationBenchmark.java), the JSON file with the measurements generated by JMH (it has to be named measurements.json; do not modify this file), and a file with the results (named results.json). The results file has to follow the format below (the names have to be the ones in the measurements file, without the packages):
      {
        "performance": [
          {
            "impl_name": "measureMultiply",
            "average": 274.45720624999996,
            "cf_lb": 271.6351655689061,
            "cf_ub": 273.27924693109395
          },
          {
            "impl_name": "measureMultiply1D",
             ...
          },
          {
            "impl_name": "measureMultiplyTrans",
             ...
          }
        ],
        "comparisons": [
          {
            "impl_1_name": "measureMultiply",
            "impl_2_name": "measureMultiply1D",
            "ratio": 0.9404346124708851,
            "cf_lb": 0.9374657053490532,
            "cf_ub": 0.943403519592717
          },
          {
            "impl_1_name": "measureMultiply",
            "impl_2_name": "measureMultiplyTrans",
             ...
          },
          {
            "impl_1_name": "measureMultiply1D",
            "impl_2_name": "measureMultiply",
             ...
          },
          {
            "impl_1_name": "measureMultiply1D",
            "impl_2_name": "measureMultiplyTrans",
             ...
          },
          {
            "impl_1_name": "measureMultiplyTrans",
            "impl_2_name": "measureMultiply",
             ...
          },
          {
            "impl_1_name": "measureMultiplyTrans",
            "impl_2_name": "measureMultiply1D",
             ...
          }
        ]
      }
  6. Check the automatically evaluated results in the upload system. The system shows OK if the uploaded results differ by less than 0.05% from the values calculated by the automatic evaluation. The automatic evaluation serves only as a check that the results you calculated are correct; the report will be evaluated manually.
The confidence intervals you are supposed to calculate are NOT the confidence intervals shown by JMH.
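
A minimal sketch of the statistics from steps 4.3 and 4.4, assuming each input array holds one averaged score per fork. The class and method names are made up, 1.96 approximates the two-sided 95% Student-t quantile for roughly 30 or more forks, and the ratio interval is a Fieller-type construction; the authoritative formulas are those in Sections 9.3 and 10.1 of [1]:

  import java.util.Arrays;

  // Hypothetical helper; input arrays hold one averaged score per fork.
  public class Stats {
      static final double Z = 1.96; // ~95% two-sided quantile for n >= 30

      static double mean(double[] x) {
          return Arrays.stream(x).average().getAsDouble();
      }

      // Unbiased sample variance.
      static double var(double[] x) {
          double m = mean(x);
          return Arrays.stream(x).map(v -> (v - m) * (v - m)).sum() / (x.length - 1);
      }

      // 95% confidence interval for the mean: mean +/- z * s / sqrt(n).
      static double[] meanCi(double[] x) {
          double m = mean(x);
          double half = Z * Math.sqrt(var(x) / x.length);
          return new double[] { m - half, m + half };
      }

      // Fieller-type 95% confidence interval for the ratio
      // mean(x) / mean(y) of two independent samples; check it against
      // Section 10.1 of [1] before using it for the submission.
      static double[] ratioCi(double[] x, double[] y) {
          double mx = mean(x), my = mean(y);
          double a = my * my - Z * Z * var(y) / y.length; // quadratic coefficient
          double b = mx * my;
          double c = mx * mx - Z * Z * var(x) / x.length;
          double d = Math.sqrt(b * b - a * c);            // discriminant
          return new double[] { (b - d) / a, (b + d) / a };
      }
  }

For a small number of forks, replace the 1.96 constant with the exact Student-t quantile (e.g. from Apache Commons Math's TDistribution).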

Report structure

The report should include the following parts:

  • Machine specification - CPU, memory, OS, Java version (java -version), etc.
  • Used JVM parameters (if applied)
  • Warm-up period:
    • Brief description of how the warm-up period was determined
    • The results (with graphs) for each implementation
  • Time performance:
    • Resulting average performance with the 95% confidence interval for each implementation
  • Comparison:
    • Brief description of how the comparison was done
    • Resulting ratios with the confidence intervals
  • Conclusions:
    • Summary of the results and a discussion

Report templates: doc, LaTeX

JMH

JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.

https://openjdk.java.net/projects/code-tools/jmh/

There are plenty of tutorials available online.
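
As a rough illustration of how the benchmark class might be structured (the matrix size, the random seed, and the inlined multiply placeholder are assumptions; the annotation values mirror the repetition counts suggested above):

  import java.util.Random;
  import java.util.concurrent.TimeUnit;
  import org.openjdk.jmh.annotations.*;

  // Sketch of a benchmark class; only measureMultiply is shown, and the
  // other two implementations would get analogous @Benchmark methods.
  @BenchmarkMode(Mode.AverageTime)
  @OutputTimeUnit(TimeUnit.MILLISECONDS)
  @State(Scope.Benchmark)
  @Warmup(iterations = 10)      // adjust after inspecting the sequence plot
  @Measurement(iterations = 40)
  @Fork(30)
  public class MatrixMultiplicationBenchmark {

      private static final int N = 512; // assumed matrix size

      private double[][] a;
      private double[][] b;

      @Setup
      public void setup() {
          Random rnd = new Random(42);
          a = new double[N][N];
          b = new double[N][N];
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) {
                  a[i][j] = rnd.nextDouble();
                  b[i][j] = rnd.nextDouble();
              }
      }

      @Benchmark
      public double[][] measureMultiply() {
          return multiply(a, b);
      }

      // Placeholder for the standard implementation from the repository.
      private static double[][] multiply(double[][] x, double[][] y) {
          double[][] c = new double[N][N];
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  for (int k = 0; k < N; k++)
                      c[i][j] += x[i][k] * y[k][j];
          return c;
      }
  }

Running the built benchmark JAR with JMH's JSON result format produces the required measurements file:

  java -jar target/benchmarks.jar -rf json -rff measurements.json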

