Question

I'm having issues trying to calculate root mean squared error in IPython using NumPy. I'm pretty sure the function is right, but when I try and input values, it gives me the following TypeError message:

TypeError: unsupported operand type(s) for -: 'tuple' and 'tuple'

Here's my code:

import numpy as np

def rmse(predictions, targets):
    return np.sqrt(((predictions - targets) ** 2).mean())

print rmse((2,2,3),(0,2,6))

Obviously something is wrong with my inputs. Do I need to establish the array before I put it in the rmse(): line?


OTHER TIPS

In the rmse function, try:

return np.sqrt(np.mean((predictions-targets)**2))
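
On its own this still fails for tuple inputs, because the subtraction happens before np.mean ever sees the data. A minimal sketch that also converts the inputs up front with np.asarray (an addition beyond the original suggestion):

import numpy as np

def rmse(predictions, targets):
    # np.asarray turns tuples/lists into arrays so elementwise subtraction works
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return np.sqrt(np.mean((predictions - targets) ** 2))

print(rmse((2, 2, 3), (0, 2, 6)))  # ≈ 2.0817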

The error says that subtraction is not defined for tuples.

Try

print rmse(np.array([2,2,3]), np.array([0,2,6]))

instead.
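
With those arrays the element-wise differences are [2, 0, -3], so rmse returns sqrt((4 + 0 + 9) / 3) ≈ 2.08.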

Per the standard formula, the RMSD (or RMSE) is calculated from the measured data and the predicted or ground-truth data for each measurement.

RMSD = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \hat{x}_i)^2}

RMSD      = root-mean-square deviation (error)
i         = index over the data points
N         = number of non-missing data points
x_i       = actual observations (time series)
\hat{x}_i = estimated (predicted) time series

And this is its numpy implementation using the fast norm function:

rmse = np.linalg.norm(measured - truth) / np.sqrt(len(truth))

measured and truth must have the same shape.
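
For example, a minimal runnable sketch (the sample values below are made up for illustration):

import numpy as np

truth    = np.array([0.0, 2.0, 6.0])   # ground-truth values
measured = np.array([2.0, 2.0, 3.0])   # measured / predicted values

# Euclidean norm of the residuals divided by sqrt(N) gives the RMSE
rmse = np.linalg.norm(measured - truth) / np.sqrt(len(truth))
print(rmse)  # ≈ 2.0817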

As mentioned by @miladiouss, np.linalg.norm(y1 - y2) / np.sqrt(len(y1)) is the fastest option for pure NumPy.

But if you also use Numba, that is no longer the fastest. Here is a benchmark using small time-series data (around 8 data points).

from numba import jit
import numpy as np

@jit(nopython=True)
def rmse(y1, y2):
    return np.sqrt(((y1-y2)**2).mean())
# 851 ns ± 1.05 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

@jit(nopython=True)
def rmse_norm(y1, y2):
    return np.linalg.norm(y1 - y2) / np.sqrt(len(y1))
# 1.17 µs ± 3.44 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
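
Note that with nopython=True the first call to each function triggers JIT compilation, so warm the functions up before timing (the figures above look like IPython %timeit output). A small usage sketch with made-up data:

y1 = np.arange(8, dtype=np.float64)
y2 = y1 + 0.5

rmse(y1, y2)       # first call compiles; later calls are fast
rmse_norm(y1, y2)  # both return 0.5 for this data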