====== Homework 3 ======

===== Task =====

The task is to implement efficient matrix operations (such as ''+, -, *'') for two special matrix types: the diagonal matrix and the identity matrix. You will do this by defining new methods for these types, which take advantage of the special form of these matrices.

Your task is to implement the following types:
  * ''Diagonal'',
  * ''Identity''.

We will only work with 2D square matrices. Therefore, you will implement both new types as subtypes of ''AbstractMatrix{T}'' (as shown in the template). Note that the ''AbstractMatrix{T}'' type is an alias for ''AbstractArray{T, 2}''.

<code julia>
struct Diagonal{T} <: AbstractMatrix{T}
struct Identity{T} <: AbstractMatrix{T}
</code>

Implement the types so that they can be created as

<code julia>
d = [1, 2, 3]    # diagonal
D = Diagonal(d)  # diagonal matrix with `d` as the diagonal

n = 3
I = Identity(n)  # identity matrix of size `n`
</code>

For both types, implement new methods for the following ''Base'' functions for working with the matrices (as shown in the template):
  * ''getindex'': for indices ''i, j'', returns the corresponding element,
  * ''size'': returns a tuple of dimension sizes.

Note that when you implement these two methods correctly for our special matrix types, some basic operations will already work; for example, summing two diagonal matrices. This is thanks to your new types being defined as subtypes of ''AbstractArray'', which already has default methods for these operations. However, these defaults are not efficient, as they do not take advantage of our special matrix types.

<code julia>
d = Diagonal(...)
d + d  # works
</code>

Implement methods for two more ''Base'' functions (as shown in the template):
  * ''sum'': returns the sum of the elements,
  * ''convert'': works for ''Diagonal'' and ''Identity'' matrices in the same way as for standard matrices (''convert(Matrix{Float32}, [1 2; 3 4])'' converts the original matrix of type ''Matrix{Int64}'' to ''Matrix{Float32}'').

Additionally, implement the following matrix operations:
  * ''diagonal'': returns the diagonal as a vector,
  * ''trace'': returns the trace of the matrix.

Don't forget to implement these for the original ''Matrix'' type as well.

Then, you should also define the ''Base'' functions ''+, -, *'' for both new matrix types, similarly to the existing methods such as ''(+)(x::Matrix, y::Matrix)''. Implement ''+, -, *''
  * for all combinations of matrix inputs (''+, -, *''),
  * for matrix and vector (only ''*''),
  * for matrix and scalar (only ''*'').

//Do not implement broadcasting (element-wise) variations.//

Every method should be implemented as computationally efficiently as possible. Make sure the return type is correct as well, meaning that the sum of two instances of ''Diagonal'' is a ''Diagonal'', while the sum of a ''Matrix'' and a ''Diagonal'' is of ''Matrix'' type. Matrices should also be element-type stable and promote with the standard promotion rules (''promote_type(Int64, Float64) == Float64'' etc.).

Finally, implement extra constructors to convert between the types, where it makes sense:
  * ''Diagonal(::AbstractMatrix)'' should return a ''Diagonal'' matrix containing only the diagonal of the given ''AbstractMatrix'',
  * ''Matrix(::Diagonal)'' should materialize a standard ''Matrix'' with zeros on the non-diagonal positions,
  * ''Matrix(::Identity)'' should materialize a standard ''Matrix'' with zeros on the non-diagonal positions and ones on the diagonal.

A minimal sketch of how these pieces can fit together is shown below.
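The sketch below is //not// the official template; it only illustrates the basic structure. The field names ''diag'' and ''n'', the default element type of ''Identity'', and the choice of which specialized methods to show are illustrative assumptions.

<code julia>
# Minimal sketch, not the official template. The field names `diag` and `n`
# and the default element type of Identity are illustrative assumptions.

struct Diagonal{T} <: AbstractMatrix{T}
    diag::Vector{T}          # the diagonal elements
end

struct Identity{T} <: AbstractMatrix{T}
    n::Int                   # the matrix is n × n
end
Identity(n::Integer) = Identity{Int}(n)   # assumed default element type

# With `size` and `getindex` in place, printing, indexing and the generic
# AbstractArray fallbacks (e.g. the default `+`) already work.
Base.size(D::Diagonal) = (length(D.diag), length(D.diag))
Base.size(I::Identity) = (I.n, I.n)

Base.getindex(D::Diagonal{T}, i::Int, j::Int) where {T} = i == j ? D.diag[i] : zero(T)
Base.getindex(I::Identity{T}, i::Int, j::Int) where {T} = i == j ? one(T) : zero(T)

# One example of a specialized operation: adding two Diagonal matrices touches
# only the n diagonal elements instead of all n^2 entries, and the element-wise
# addition of the diagonals takes care of type promotion.
function Base.:+(A::Diagonal, B::Diagonal)
    size(A) == size(B) || throw(DimensionMismatch("matrix sizes do not match"))
    return Diagonal(A.diag .+ B.diag)
end

# Conversions between the types.
Diagonal(A::AbstractMatrix) = Diagonal([A[i, i] for i in 1:size(A, 1)])

function Base.Matrix(D::Diagonal{T}) where {T}
    n = length(D.diag)
    M = zeros(T, n, n)
    for i in 1:n
        M[i, i] = D.diag[i]
    end
    return M
end
</code>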
Three categories of tests will be performed:
  * completeness: all methods are implemented for all types,
  * correctness: methods return the correct outputs (and correct output types),
  * efficiency: **methods are computationally efficient** and make use of the matrix properties (certain methods will be timed and benchmarked).

===== Tips =====

Remember that you have a limited number of uploads and there are a lot of things that can go wrong with your implementation. Make sure that you test your types and methods as thoroughly as possible, and think first.

In general, try to return the minimal output type possible for the matrices. Imagine that ''Identity < Diagonal < Matrix''. If your function returns a matrix that is diagonal in nature (the elements outside the diagonal are 0), you should return the ''Diagonal'' type.

Test which of the methods that you //think you need to implement// already work after implementing the basic functionality (and return the correct output), so that you don't have to implement them manually and potentially just make more mistakes. However, think about (or benchmark) whether the default methods are efficient, or whether you need to implement a specialized one.

You can use the ''BenchmarkTools.jl'' package to benchmark your methods, for example, against operations with the standard ''Matrix''. You can also benchmark against the ''LinearAlgebra'' package, but we do not require that level of performance from you: just be faster on ''Diagonal'' operations than on a full matrix with zeros allocated outside the diagonal. The specialized methods should be more efficient in terms of asymptotic complexity, not just a constant-factor speedup. Therefore, if you test with large enough matrices, you should see a significant improvement; a possible benchmarking setup is sketched below.
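For instance, a benchmark along the following lines (a sketch assuming the ''Diagonal'' type from the sketch above, together with your specialized ''sum'' and ''+'' methods) should expose the asymptotic difference; the exact timings depend on your machine.

<code julia>
using BenchmarkTools

n = 5_000
D = Diagonal(rand(n))
M = Matrix(D)      # dense matrix holding the same values

@btime sum($D)     # O(n) once a specialized sum(::Diagonal) is implemented
@btime sum($M)     # O(n^2): every stored zero is visited

@btime $D + $D     # O(n): only the diagonals are added
@btime $M + $M     # O(n^2)
</code>

As ''n'' grows, the ''Diagonal'' timings should stay roughly proportional to ''n'', while the dense timings grow with ''n^2''.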