Question

Is there a method in numpy for calculating the Mean Squared Error between two matrices?

I've tried searching but found none. Is it under a different name?

If there isn't, how do you overcome this? Do you write it yourself or use a different lib?

Solution

You can use:

mse = ((A - B)**2).mean(axis=ax)

Or

mse = (np.square(A - B)).mean(axis=ax)
  • with ax=0 the average is taken over the rows (one value per column), returning an array
  • with ax=1 the average is taken over the columns (one value per row), returning an array
  • with ax=None (the default) the average is taken over every element, returning a single scalar value
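
As a quick illustration of all three options, here is a small example with matrices made up just for demonstration:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])

sq_err = (A - B)**2            # element-wise squared errors, shape (2, 3)

print(sq_err.mean(axis=0))     # per-column MSE -> [ 2.  5. 10.]
print(sq_err.mean(axis=1))     # per-row MSE    -> [1.66666667 9.66666667]
print(sq_err.mean(axis=None))  # overall MSE    -> 5.666666666666667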

Other tips

This isn't part of numpy, but it works on numpy.ndarray objects (and a numpy.matrix can be converted to a numpy.ndarray and back if needed).

from sklearn.metrics import mean_squared_error
mse = mean_squared_error(A, B)

See the scikit-learn mean_squared_error documentation for how to control the averaging over columns (the multioutput parameter).
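
For comparison with the numpy one-liner (using the same made-up matrices as above), multioutput='raw_values' returns one error per column instead of a single averaged value:

import numpy as np
from sklearn.metrics import mean_squared_error

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])

print(mean_squared_error(A, B))                            # scalar: 5.666666666666667
print(mean_squared_error(A, B, multioutput='raw_values'))  # per column: [ 2.  5. 10.]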

Even more numpy

np.square(np.subtract(A, B)).mean()

Just for kicks

mse = (np.linalg.norm(A - B)**2) / A.size
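
For a 2-D array, np.linalg.norm defaults to the Frobenius norm, whose square is the sum of all squared entries, so dividing by the total element count A.size (rather than len(A), which only counts rows) gives the element-wise mean. A quick sanity check with random made-up data:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(4, 3))

mse_norm = np.linalg.norm(A - B)**2 / A.size  # Frobenius norm squared / element count
mse_mean = ((A - B)**2).mean()                # direct element-wise mean

print(np.isclose(mse_norm, mse_mean))         # True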

Another alternative to the accepted answer that avoids any issues with matrix multiplication:

def MSE(Y, YH):
    return np.square(Y - YH).mean()

From the documentation for np.square:

Return the element-wise square of the input.
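
The matrix-multiplication issue presumably refers to np.matrix inputs, where ** 2 means matrix power (Y @ Y) rather than an element-wise square; np.square is element-wise either way. A small sketch, using a square np.matrix so that the matrix power is even defined:

import numpy as np

D = np.matrix([[1.0, 2.0],
               [3.0, 4.0]])

print(D**2)          # matrix power D @ D:   [[ 7. 10.] [15. 22.]]
print(np.square(D))  # element-wise square:  [[ 1.  4.] [ 9. 16.]]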

The standard numpy functions for calculating the mean squared error relative to the mean (the variance) and its square root (the standard deviation) are numpy.var() and numpy.std(); see their respective documentation. They apply to matrices and have the same syntax as numpy.mean().

I suppose that the question and the preceding answers might have been posted before these functions became available.
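
Note that np.var measures spread around the mean of its own input, so np.var(A - B) matches ((A - B)**2).mean() only when the differences average to zero; in general Var(X) = E[X^2] - (E[X])^2. A quick check with made-up data:

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(4, 3))

diff = A - B
mse = (diff**2).mean()

# np.var(diff) equals the MSE minus the squared mean of the differences
print(np.isclose(np.var(diff), mse - diff.mean()**2))  # True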

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow