The task is to implement efficient matrix operations (such as `+`, `-`, `*`) for two special matrix types: the diagonal matrix and the identity matrix. You will do this by defining new methods for these types that take advantage of their special form. Your task is to implement the following types:
- `Diagonal`
- `Identity`
We will only work with 2D square matrices. Therefore, you will implement both new types as subtypes of `AbstractMatrix{T}` (as shown in the template). Note that `AbstractMatrix{T}` is an alias for `AbstractArray{T, 2}`.
```julia
struct Diagonal{T} <: AbstractMatrix{T}
struct Identity{T} <: AbstractMatrix{T}
```
Implement the types so that they can be created as follows:
```julia
d = [1, 2, 3]    # diagonal
D = Diagonal(d)  # diagonal matrix with `d` as the diagonal
n = 3
I = Identity(n)  # identity matrix of size `n`
```
For both types, implement new methods for the following `Base` functions for working with the matrices (as shown in the template):
- `getindex`
- `size`
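One possible implementation of these two methods, assuming `Diagonal` stores its entries in a vector field called `diag` and `Identity` stores only its size `n` (the field names are illustrative, not prescribed by the template):

```julia
struct Diagonal{T} <: AbstractMatrix{T}
    diag::Vector{T}   # the diagonal entries; everything off-diagonal is zero
end

struct Identity{T} <: AbstractMatrix{T}
    n::Int            # matrix size; one(T) on the diagonal, zero(T) elsewhere
end

# Outer constructor so that `Identity(3)` works without an explicit element type;
# defaulting to Bool keeps promotion cheap (Bool promotes with any numeric type).
Identity(n::Integer) = Identity{Bool}(n)

Base.size(D::Diagonal) = (length(D.diag), length(D.diag))
Base.size(I::Identity) = (I.n, I.n)

# O(1) element access: off-diagonal entries are materialized on the fly.
Base.getindex(D::Diagonal{T}, i::Int, j::Int) where {T} = i == j ? D.diag[i] : zero(T)
Base.getindex(I::Identity{T}, i::Int, j::Int) where {T} = i == j ? one(T) : zero(T)
```

With `size` and `getindex` in place, the `AbstractArray` interface is satisfied, so display and the generic fallback operations start working immediately.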
Note that once you implement these two methods correctly for our special matrix types, some basic operations will already work; for example, summing two diagonal matrices. This is thanks to your new types being subtypes of `AbstractArray`, which already provides default methods for these operations. However, these defaults are not efficient, as they do not take advantage of the special matrix structure.
```julia
d = Diagonal(...)
d + d # works
```
Implement methods for these additional `Base` functions as well (as shown in the template):
- `sum`
- `convert`

For example, `convert(Matrix{Float32}, [1 2; 3 4])` returns a `Matrix{Float32}`.
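A sketch of specialized `sum` methods and one possible `convert` method, under the assumed field names `diag` and `n` (the minimal type definitions are repeated so the example is self-contained):

```julia
struct Diagonal{T} <: AbstractMatrix{T}; diag::Vector{T}; end
struct Identity{T} <: AbstractMatrix{T}; n::Int; end
Identity(n::Integer) = Identity{Bool}(n)
Base.size(D::Diagonal) = (length(D.diag), length(D.diag))
Base.size(I::Identity) = (I.n, I.n)
Base.getindex(D::Diagonal{T}, i::Int, j::Int) where {T} = i == j ? D.diag[i] : zero(T)
Base.getindex(I::Identity{T}, i::Int, j::Int) where {T} = i == j ? one(T) : zero(T)

# O(n) sum instead of the O(n^2) fallback that visits every stored zero.
Base.sum(D::Diagonal) = sum(D.diag)
Base.sum(I::Identity{T}) where {T} = I.n * one(T)

# Change only the element type; the specialized structure is preserved.
Base.convert(::Type{Diagonal{T}}, D::Diagonal) where {T} =
    Diagonal(convert(Vector{T}, D.diag))
```

Which `convert` methods you need beyond this one depends on the template; the `Diagonal`-to-`Diagonal` element-type conversion above is just one plausible case.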
Additionally, implement the following matrix operations:
- `diagonal`
- `trace`
Don't forget to implement these for the original `Matrix` type as well.
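A possible sketch of `diagonal` and `trace`, including methods for the plain `Matrix` type; the field names `diag` and `n` are assumptions (minimal type definitions repeated for self-containment):

```julia
struct Diagonal{T} <: AbstractMatrix{T}; diag::Vector{T}; end
struct Identity{T} <: AbstractMatrix{T}; n::Int; end
Identity(n::Integer) = Identity{Bool}(n)

# O(n) for every type: only the diagonal is ever touched.
diagonal(D::Diagonal) = copy(D.diag)
diagonal(I::Identity{T}) where {T} = ones(T, I.n)
diagonal(M::Matrix) = [M[i, i] for i in 1:size(M, 1)]

trace(D::Diagonal) = sum(D.diag)
trace(I::Identity{T}) where {T} = I.n * one(T)
trace(M::Matrix) = sum(diagonal(M))
```

`diagonal` returns a defensive copy for `Diagonal` so callers cannot mutate the matrix through the returned vector.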
Then, you should also define the `Base` functions `+`, `-`, and `*` for both new matrix types, similarly to the existing methods such as `(+)(x::Matrix, y::Matrix)`.
Do not implement broadcasting (element-wise) variations.
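A sketch of the specialized operators, again assuming the `diag`/`n` fields. Note how each method returns the most specialized type it can, and how element types promote through the underlying vector operations (dimension mismatches surface as the usual broadcasting errors):

```julia
struct Diagonal{T} <: AbstractMatrix{T}; diag::Vector{T}; end
struct Identity{T} <: AbstractMatrix{T}; n::Int; end
Identity(n::Integer) = Identity{Bool}(n)
Base.size(D::Diagonal) = (length(D.diag), length(D.diag))
Base.size(I::Identity) = (I.n, I.n)
Base.getindex(D::Diagonal{T}, i::Int, j::Int) where {T} = i == j ? D.diag[i] : zero(T)
Base.getindex(I::Identity{T}, i::Int, j::Int) where {T} = i == j ? one(T) : zero(T)

import Base: +, -, *

# Diagonal with Diagonal: O(n) work and storage; element types promote automatically.
+(A::Diagonal, B::Diagonal) = Diagonal(A.diag .+ B.diag)
-(A::Diagonal, B::Diagonal) = Diagonal(A.diag .- B.diag)
*(A::Diagonal, B::Diagonal) = Diagonal(A.diag .* B.diag)

# Multiplying by the identity only needs an element-type promotion, O(n).
*(A::Diagonal{T}, I::Identity{S}) where {T, S} =
    Diagonal(convert(Vector{promote_type(T, S)}, A.diag))
*(I::Identity, A::Diagonal) = A * I

# Diagonal times dense: scale rows (or columns) in O(n^2) instead of O(n^3).
*(A::Diagonal, M::Matrix) = A.diag .* M
*(M::Matrix, A::Diagonal) = M .* reshape(A.diag, 1, :)
```

This is not the full method table the assignment needs (e.g. `Identity` with `Matrix`, and `+`/`-` for the mixed cases, are omitted here); it only illustrates the pattern.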
Every method should be as computationally efficient as possible. Make sure the return type is correct as well: the sum of two `Diagonal` instances is a `Diagonal`, while the sum of a `Matrix` and a `Diagonal` is of `Matrix` type. Matrices should also be element-type stable and promote with the standard promotion rules (`promote_type(Int64, Float64) == Float64`, etc.). Finally, implement extra constructors to convert between the types, where it makes sense:
- `Diagonal(::AbstractMatrix)`
- `Matrix(::Diagonal)`
- `Matrix(::Identity)`
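One way these extra constructors could look, assuming the `diag`/`n` fields; the validity checks and error messages are an assumption, not given by the assignment:

```julia
struct Diagonal{T} <: AbstractMatrix{T}; diag::Vector{T}; end
struct Identity{T} <: AbstractMatrix{T}; n::Int; end
Identity(n::Integer) = Identity{Bool}(n)

# `Diagonal` from any square matrix whose off-diagonal elements are zero.
function Diagonal(M::AbstractMatrix)
    size(M, 1) == size(M, 2) || throw(ArgumentError("matrix must be square"))
    all(M[i, j] == 0 for i in axes(M, 1), j in axes(M, 2) if i != j) ||
        throw(ArgumentError("matrix has nonzero elements outside the diagonal"))
    return Diagonal([M[i, i] for i in axes(M, 1)])
end

# Materialize the dense representation, keeping the element type.
function Base.Matrix(D::Diagonal{T}) where {T}
    n = length(D.diag)
    M = zeros(T, n, n)
    for i in 1:n
        M[i, i] = D.diag[i]
    end
    return M
end

Base.Matrix(I::Identity{T}) where {T} = Matrix(Diagonal(ones(T, I.n)))
```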
Three categories of tests will be performed.
Remember that you have a limited number of uploads, and there are a lot of things that can go wrong with your implementation. Make sure that you test your types and methods as thoroughly as possible, and think before you upload.
In general, try to return the most specialized output type possible. Imagine the ordering `Identity` < `Diagonal` < `Matrix`. If your function returns a matrix that is diagonal in nature (all elements outside the diagonal are zero), you should return the `Diagonal` type.
After implementing the basic functionality, test which of the methods you think you need already work (and return the correct output), so you don't have to implement them manually and potentially introduce more mistakes. However, think about (or benchmark) whether the default methods are efficient, or whether you need to implement a specialized one.
You can use the BenchmarkTools.jl package to benchmark your methods, for example, against operations on the standard `Matrix`. You can also benchmark against the `LinearAlgebra` standard library, but we do not expect that level of performance from you; just be faster on `Diagonal` operations than on a full matrix with zeros allocated outside the diagonal. The specialized methods should be more efficient in terms of asymptotic complexity, not just a linear speedup. Therefore, if you test with large enough matrices, you should see a significant improvement.
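A possible benchmarking session; the names and sizes are illustrative, and it assumes your types and specialized methods are already defined and that BenchmarkTools.jl is installed (`] add BenchmarkTools`):

```julia
using BenchmarkTools

n = 2_000
D = Diagonal(rand(n))   # specialized storage: n numbers
F = Matrix(D)           # dense storage: n^2 numbers, mostly zeros

@btime sum($D)   # O(n) with the specialized method
@btime sum($F)   # O(n^2): visits every stored zero
@btime $D * $D   # O(n), vs. O(n^3) for the dense product below
@btime $F * $F
```

The `$` interpolation keeps BenchmarkTools from treating the variables as untyped globals, so the timings reflect the methods themselves. Doubling `n` should roughly double the `Diagonal` timings while multiplying the dense ones by a much larger factor, which is the asymptotic gap the assignment asks for.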