Leave-one-out cross-validation simply means that, given your model m, a training set T of size n, and some evaluation metric (error measure) E, you proceed as follows:
- For each point (x, y) from T:
    - You train your model m on T \ {(x, y)} (all points except the held-out one)
    - You check E(m, (x, y)), for example you check whether m is able to determine y given x correctly (then E = 0) or not (then E = 1)
- You compute the mean of all E values across all n points
As a result, you obtain an estimate of the mean generalization error: you checked how well your model can predict the label of a single point when trained on the rest of the training set.
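The procedure above can be sketched in a few lines of Python. The 1-nearest-neighbour "model" and the toy data set below are placeholder assumptions, not part of the original description; any train/predict pair would fit in their place:

```python
def loocv_error(points):
    """Mean 0/1 generalization error estimated by leave-one-out CV."""
    errors = []
    for i, (x, y) in enumerate(points):
        # Train on all points except the i-th one, i.e. on T \ {(x, y)}.
        train = points[:i] + points[i + 1:]
        # Placeholder model m: 1-nearest neighbour over the remaining points.
        _, y_pred = min(train, key=lambda p: abs(p[0] - x))
        # E = 0 if the held-out label is predicted correctly, else E = 1.
        errors.append(0 if y_pred == y else 1)
    # Mean of all E values across the n held-out points.
    return sum(errors) / len(errors)

# Toy data: label 0 for small x, label 1 for large x, plus one outlier.
data = [(1.0, 0), (1.2, 0), (1.1, 0), (5.0, 1), (5.2, 1), (1.3, 1)]
print(loocv_error(data))  # the outlier is the only miss: 1/6 ≈ 0.1667
```

Note that LOOCV trains the model n times, once per held-out point, so for expensive models k-fold cross-validation with k < n is the usual cheaper alternative.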