Program:
Downloads:
Most likely, you will not manage to finish the implementation of all the functions during the exercise in the lab. Finish them as homework.
stprpath, which sets the needed paths to the individual parts of the toolbox.
compilemex, which compiles the parts of the toolbox written in C.
Answer the following questions:
svm2()
bsvm2()
svmclass()
The output of all training algorithms in the STPR toolbox is a structure called a model. After using the svm2 function, the model structure may look like this:

model = 

       Alpha: [40x1 double]
           b: -2.3417
          sv: [1x1 struct]
         nsv: 40
           W: [64x1 double]
     options: [1x1 struct]
      kercnt: 46909
      trnerr: 0
      errcnt: 0
    exitflag: 2
        stat: [1x1 struct]
     cputime: 0.1993
         fun: 'svmclass'
The structure contains all the information needed to use an SVM classifier:
model.Alpha
is a vector of the Lagrange multipliers alpha. Each training example has its own alpha. Many of these alphas may be zero; a non-zero alpha indicates that the respective training example is actually a support vector. The vector model.Alpha
contains only the non-zero values, thus its length is equal to the number of support vectors.
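This relationship can be checked directly in MATLAB (a small sketch, reusing the model printed above):

```matlab
% The number of non-zero Lagrange multipliers equals the number of
% support vectors reported in the model structure.
nSV = length(model.Alpha);
isequal(nSV, model.nsv)   % expected to be true
```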
model.b
is the absolute term of the model (the bias).
model.W
is the weight vector of the resulting linear discriminant function. It is part of the model
structure only when the linear
kernel is used. Using the weight vector, the discriminant function is equal to model.W'*x + model.b.
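For a linear kernel, a new example can then be classified directly from the weight vector (a minimal sketch; the variable x and the sign-to-class convention are assumptions, not part of the toolbox):

```matlab
% Classify a single example x (a column vector) with a linear SVM model.
% The predicted class is determined by the sign of the discriminant function.
f = model.W' * x + model.b;
if f >= 0
    ypred = 1;   % first class
else
    ypred = 2;   % second class
end
```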
model.sv
is a structure containing information about the support vectors. It contains X
and y
for all support vectors and also the indices of the support vectors in the original training set.
model.fun
is the name of the MATLAB function that should be used to apply the model
to new data, i.e. to classify the data in this case. For SVMs, this field will contain either linclass
or svmclass
.
The remaining fields of the model
structure are not so important. You can find here e.g. a copy of the options
used when training the model, some statistics about the model optimization process, the error on the training dataset, etc.
What exactly do we check by the following command?
all(model.sv.X * model.Alpha == model.W)
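Note that comparing floating-point numbers with == is fragile; in practice the same check is more robustly written with a tolerance (a sketch, reusing the model from above):

```matlab
% Check, up to numerical precision, that the weight vector equals the
% linear combination of the support vectors weighted by the multipliers.
tol = 1e-10;
ok = all(abs(model.sv.X * model.Alpha - model.W) < tol);
```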
Use the scripts from last week, where you classified the wedge and XOR datasets using neural networks from the NETLAB toolbox.
y
variable. Use the functions pboundary(), pareas() and pwpatterns(), e.g. in the following way:
% Create a new figure
figure; hold on;
% Fill areas that belong to classes 1 and 2, respectively
ha = pareas(model);
% Plot the data, distinguished by colors
hx = pwpatterns(data);
% Plot circles around the support vectors
plot(model.sv.X(1,:), model.sv.X(2,:), 'ko', 'Linewidth', 2, 'MarkerSize', 8);
% Plot the boundary between classes and make it thicker
hl = pboundary(model);
set(hl, 'Linewidth', 3, 'color', 'k');
Explore:
Use the hand-written digits dataset as data for classification.
Use the functions errRate()
and confmat()
to compute the misclassification rate and to display the confusion matrix.
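A possible evaluation sketch (the exact argument order of errRate() and confmat(), and the test-set variable tst, are assumptions; verify them in the toolbox documentation):

```matlab
% Classify the test set and evaluate the predictions.
ypred = svmclass(tst.X, model);   % apply the trained SVM classifier
err   = errRate(ypred, tst.y);    % fraction of misclassified examples
C     = confmat(tst.y, ypred);    % confusion matrix
disp(C);
```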
Training of a multiclass SVM can be done e.g. by the bsvm2()
function. How should the class labels be given to this function?
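A minimal training sketch (assuming bsvm2() accepts the usual STPR toolbox data structure with fields X and y, and that multiclass labels are coded as integers 1..K; the option values are illustrative only):

```matlab
% Train a multiclass SVM on data with labels 1, 2, ..., K.
data.X = X;   % [dim x num_examples] feature matrix
data.y = y;   % [1 x num_examples] class labels from 1..K
options = struct('ker', 'rbf', 'arg', 1, 'C', 10);   % example options
model = bsvm2(data, options);
```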
Explore also the functions perceptron()
and linclass()
. Do you see the similarity with the functions trainClassLinearPerceptron()
and predClassLinear()
that you created during Exercise 3?
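Their usage mirrors your Exercise 3 functions (a sketch; the exact signatures are assumed from the toolbox conventions):

```matlab
% Train a linear classifier by the perceptron algorithm and apply it.
model = perceptron(data);         % data.X, data.y with two classes
ypred = linclass(tst.X, model);   % predicted labels for test data
```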
Have a look at the demos demo_linclass
and demo_svm.