You are basically taking each row of `GRZVV`, appending a 1 at the end, multiplying it by `GinvVV`, then adding up all the elements of the resulting vector. If you weren't doing the "append 1" part, you could do it all with no loops as:
```python
VVg = np.sum(np.dot(GinvVV[:-1, :-1], GRZVV.T), axis=0) * VV
```
or even:
```python
VVg = np.einsum('ij,kj->k', GinvVV[:-1, :-1], GRZVV) * VV
```
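If the einsum notation is unfamiliar: `'ij,kj->k'` sums over both `i` and `j`, leaving one value per row `k` of `GRZVV`. A minimal sketch with made-up stand-in arrays (the shapes here are illustrative assumptions, not taken from the original problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 5
A = rng.standard_normal((n, n))   # stands in for GinvVV[:-1, :-1]
B = rng.standard_normal((k, n))   # stands in for GRZVV

out = np.einsum('ij,kj->k', A, B)

# 'ij,kj->k' means: out[kk] = sum over i and j of A[i, j] * B[kk, j]
expected = np.array([sum(A[i, j] * B[kk, j] for i in range(n) for j in range(n))
                     for kk in range(k)])
assert np.allclose(out, expected)
```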
How do we handle that extra 1? Well, the vector produced by the matrix multiplication gets incremented by the corresponding entry of `GinvVV[:-1, -1]`, so when you add up all its elements, the total is incremented by `np.sum(GinvVV[:-1, -1])`. We can simply compute that value once and add it to every item of the return vector:
```python
VVg = (np.einsum('ij,kj->k', GinvVV[:-1, :-1], GRZVV) + np.sum(GinvVV[:-1, -1])) * VV
```
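As a sanity check, the one-liner can be compared against a per-row loop reconstructed from the description above (the `[:-1]` drop and the shapes are assumptions inferred from the slicing in the vectorized expression, since the original loop isn't shown here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 6
GinvVV = rng.standard_normal((n + 1, n + 1))
GRZVV = rng.standard_normal((k, n))
VV = 3.0  # scalar case

# Loop version: append a 1 to each row, multiply by GinvVV, then sum
# the entries of the resulting vector (dropping the last one).
VVg_loop = np.empty(k)
for i in range(k):
    row = np.hstack((GRZVV[i], 1.0))
    VVg_loop[i] = np.sum(np.dot(GinvVV, row)[:-1]) * VV

# Vectorized version
VVg = (np.einsum('ij,kj->k', GinvVV[:-1, :-1], GRZVV) + np.sum(GinvVV[:-1, -1])) * VV

assert np.allclose(VVg_loop, VVg)
```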
The above code works if `VV` is a scalar. If it is an array of shape `(n,)`, then the following will work:
```python
GinvVV = np.asarray(GinvVV)
VVgbis = (np.einsum('ij,kj->k', GinvVV[:-1, :-1] * VV[:, None], GRZVV) +
          np.dot(GinvVV[:-1, -1], VV))
```
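The same kind of check works for the array-valued `VV` case, where `VV[i]` weights component `i` of each per-row result before summing (again, shapes and the reconstructed loop are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 6
GinvVV = rng.standard_normal((n + 1, n + 1))
GRZVV = rng.standard_normal((k, n))
VV = rng.standard_normal(n)  # array case, shape (n,)

# Loop version: VV weights each component before the sum
VVg_loop = np.empty(k)
for i in range(k):
    row = np.hstack((GRZVV[i], 1.0))
    VVg_loop[i] = np.sum(np.dot(GinvVV, row)[:-1] * VV)

# Vectorized version: VV[:, None] broadcasts over the columns of
# GinvVV[:-1, :-1], scaling row i by VV[i]
VVgbis = (np.einsum('ij,kj->k', GinvVV[:-1, :-1] * VV[:, None], GRZVV) +
          np.dot(GinvVV[:-1, -1], VV))

assert np.allclose(VVg_loop, VVgbis)
```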