It looks as though `pos` finds the positive part of a matrix. You could implement this directly with `mapMatrix` (note the extra `Ord` constraint, needed for the comparison against 0):

```haskell
pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go where
  go x | x > 0     = x
       | otherwise = 0
```

though, unlike Haskell, Matlab makes no distinction between `Matrix` and `Vector`.
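If you want to experiment without pulling in hmatrix, the same clamping idea can be sketched over a plain list-of-lists stand-in for `Matrix` (a toy sketch; the nested-list representation and the `posLists` name are my own, not hmatrix's):

```haskell
-- Toy sketch: the positive-part operation over a nested-list
-- stand-in for Matrix. `posLists` is a hypothetical name, not
-- part of hmatrix.
posLists :: (Num a, Ord a) => [[a]] -> [[a]]
posLists = map (map (max 0))

main :: IO ()
main = print (posLists [[1, -2], [-3, 4 :: Int]])
-- negative entries are clamped to 0, positives pass through
```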
But it's worth analyzing that Matlab fragment more closely. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of `Y`, i.e. three matrices such that

    U * S * V' = Y

where, assuming `Y` is `m x n` with `m >= n`, `U` is `m x n`, `S` is `n x n` and diagonal, and `V` is `n x n`. Further, both `U` and `V` should be orthonormal. In linear algebraic terms this separates the linear transformation `Y` into two "rotation" components and the central scaling component of singular values.
Since `S` is diagonal, we extract that diagonal as a vector using `diag(S)` and then subtract a term `tau`, which must also be a vector. This might produce a diagonal containing negative values, which cannot be properly interpreted as singular values (singular values are nonnegative by definition), so `pos` is there to trim out the negative entries, setting them to 0. We then use `diag` to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get `A`, a modified form of `Y`.
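The scalar pipeline that paragraph describes, subtract `tau` and then clamp negatives to zero, can be sketched over a plain list of singular values without any hmatrix dependency (the `shrink` name is my own invention):

```haskell
-- Sketch of the thresholding step applied to a list of singular
-- values: subtract tau, then clamp at zero (the role pos plays
-- in the Matlab fragment). `shrink` is a made-up name.
shrink :: (Num a, Ord a) => a -> [a] -> [a]
shrink tau = map (max 0 . subtract tau)

main :: IO ()
main = print (shrink 1.5 [3.0, 1.0, 0.2 :: Double])
-- 3.0 - 1.5 stays positive; the other two clamp to 0
```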
Note that we can skip some steps in Haskell, as `svd` (and its "economy-sized" partner `thinSVD`) returns the singular values as a `Vector` instead of a mostly-zeroed diagonal matrix:

```haskell
(u, s, v) = thinSVD y
-- note the trans here, that was the ' in Matlab
a = u `multiply` diag (cmap (max 0) s) `multiply` trans v
```

Above, `cmap` maps `max 0` over the `Vector` of singular values `s` (hmatrix's `Vector` is a Storable vector and not a `Functor`, so plain `fmap` won't typecheck), and then `diag` (from `Numeric.Container`) reinflates the `Vector` into a `Matrix` prior to the `multiply`s. With a little thought it's easy to see that `max 0` is just `pos` applied to a single element.
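To convince yourself of that last claim, you can compare `max 0` pointwise against the guarded `go` from the `pos` definition above (a base-only check):

```haskell
-- Check that max 0 coincides with the guarded positive-part
-- function used in pos, over a few sample inputs.
go :: (Num a, Ord a) => a -> a
go x | x > 0     = x
     | otherwise = 0

main :: IO ()
main = print (all (\x -> go x == max 0 x) [-2, -1, 0, 1, 2 :: Int])
-- prints True: the two definitions agree on every sample
```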