I ran this test that uses only your vector quantities:
Sd = [ ...
749.158805838953 ...
848.621203222693 ...
282.57250570754 ...
1.69002068665559 ...
529.068503515487
];
u = [ ...
0.308500000000039 ...
0.291030000000031 ...
0.38996000000005 ...
0.99272999999926 ...
0.271120000000031 ...
];
K = [ ...
3.80976148470781e-009 ...
3.33620420353532e-009 ...
1.67593037457502e-008 ...
7.22952172629158e-005 ...
9.89028880679124e-009 ...
];
r = sqrt(K).*u.*Sd;
min_r = min(r);
max_r = max(r);
disp(min_r);
disp(max_r - min_r);
And I got this result:
0.0143
3.2960e-17
To me this looks like there is no real precision loss; rather, your vectors are rigged in such a way that they return approximately the same value for every component. I mean, when the value is of order of magnitude 10^-2, errors of order of magnitude 10^-17 are tiny, near the representation precision of doubles (about 16 decimal digits). Floating-point precision loss at this level should be far less of a concern than, for example, the precision loss when converting to/from a decimal representation. So the questions are: 1) are your data sources reliable and/or precise? 2) are you sure that the element-wise product of the three vectors shouldn't return a uniform-value vector?
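To put a number on "tiny": the spread relative to the value itself is what matters, and here it lands within roughly an order of magnitude of machine epsilon for doubles. Here is a quick sketch of that check in Python/NumPy (the original test is MATLAB; the data below is copied from it verbatim):

```python
import numpy as np

Sd = np.array([749.158805838953, 848.621203222693, 282.57250570754,
               1.69002068665559, 529.068503515487])
u = np.array([0.308500000000039, 0.291030000000031, 0.38996000000005,
              0.99272999999926, 0.271120000000031])
K = np.array([3.80976148470781e-09, 3.33620420353532e-09, 1.67593037457502e-08,
              7.22952172629158e-05, 9.89028880679124e-09])

r = np.sqrt(K) * u * Sd

# Relative spread of the components: (max - min) / min.
rel_spread = (r.max() - r.min()) / r.min()
print(rel_spread)                        # on the order of 1e-15
print(rel_spread / np.finfo(float).eps)  # i.e. only a handful of ulps
```

A relative spread of a few machine epsilons is exactly what you expect from a mathematically constant quantity computed in double precision, not from genuinely different values.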
Later edit:
We show only the vector dependency and ignore the scalars, because they contribute the same factor to all vector components. We'll use '~' to express proportionality between vector components. Then, according to your formulas:
K_i ~ u_i^(-2) × Sd_i^(-2)
Tp_i ~ K_i^(-1/2) × u_i^(-1) × Sd_i^(-1)
By plugging the first formula into the second, one gets:
Tp_i ~ (u_i^(-2) × Sd_i^(-2))^(-1/2) × u_i^(-1) × Sd_i^(-1)
or, after some trivial algebraic manipulation:
Tp_i ~ u_i^((-2)×(-1/2)) × Sd_i^((-2)×(-1/2)) × u_i^(-1) × Sd_i^(-1)
Tp_i ~ u_i × Sd_i × u_i^(-1) × Sd_i^(-1)
Tp_i ~ 1
So, yes, your resulting vector Tp is supposed to have all components with the same value; this is not an accident or a precision limitation. It is a consequence of the way you compute either K, or Tp, or both.
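The first proportionality above can also be confirmed numerically from the data alone: if K_i ~ u_i^(-2) × Sd_i^(-2), then K_i × u_i^2 × Sd_i^2 must be the same constant for every i. A NumPy sketch of that check, using the vectors from the test verbatim:

```python
import numpy as np

Sd = np.array([749.158805838953, 848.621203222693, 282.57250570754,
               1.69002068665559, 529.068503515487])
u = np.array([0.308500000000039, 0.291030000000031, 0.38996000000005,
              0.99272999999926, 0.271120000000031])
K = np.array([3.80976148470781e-09, 3.33620420353532e-09, 1.67593037457502e-08,
              7.22952172629158e-05, 9.89028880679124e-09])

# If K ~ u^(-2) * Sd^(-2), this product is the (squared) constant
# of proportionality and must be identical for every component.
c = K * u**2 * Sd**2
print(c / c[0])  # all entries approximately 1, to machine precision
```

The fact that the constancy holds down to the last few ulps shows that K was derived from u and Sd (or vice versa) by exactly that relation, which is why Tp collapses to a uniform vector.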