Question

I have some values that belong in [-1,1]. I don't need a lot of accuracy but I will need a LOT of those values. Now I'm more of a hardware guy so the solution came to me effortlessly: use fixed point arithmetic. I'm hoping to save memory by using the 8-bit Java byte type, which gives an accuracy of 2^(-7)=0.0078125. Is there a way already available to do this and take care of truncating/over(under)flow issues?

I also want to avoid as much computational overhead as possible, because those values are going to be involved in a lot of computation as well.

Thanks


Solution

Is there a way already available to do this and take care of truncating/over(under)flow issues?

It depends on how you want to deal with these issues. What you seem to be proposing is to treat the byte values as scaled values. Java doesn't have any built-in support for scaled numbers, so you are going to have to do your arithmetic carefully, taking care of truncation, overflow/underflow and scale adjustment yourself.

It can be done ... if you are careful.
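To make the trade-off concrete, here is a minimal sketch of what "careful" means, assuming a Q7 (1.7 fixed-point) encoding where 1.0 is represented as 127. The class name and helper methods are illustrative, not an existing library:

```java
// Sketch of Q7 ("1.7" fixed-point) helpers; names are illustrative, not a standard API.
public final class Q7 {
    // 1.0 is represented as 127; a byte covers roughly [-1.008, 1.0] at this scale.
    private static final int SCALE = 127;

    // Encode a float in [-1, 1] as a Q7 byte, saturating at the ends.
    static byte encode(float v) {
        int s = Math.round(v * SCALE);
        if (s > 127) s = 127;
        if (s < -128) s = -128;
        return (byte) s;
    }

    static float decode(byte b) {
        return b / (float) SCALE;
    }

    // Multiply two Q7 values: widen to int, rescale, then saturate.
    static byte mul(byte a, byte b) {
        int p = (a * b) / SCALE;   // product fits comfortably in an int
        if (p > 127) p = 127;
        if (p < -128) p = -128;
        return (byte) p;
    }
}
```

Every operation needs that widen/rescale/saturate dance; that is exactly the "extra instructions" cost discussed below.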


But is it worth it?

The first thing to consider is that a byte field or local variable takes exactly the same space as an int or float field: 32 bits (or potentially more on a 64-bit machine).

In fact, you will only save memory if the bytes are elements of a byte[].
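For instance, the element data of a million-value array differs by roughly a factor of four (array object headers aside):

```java
// Backing element data only; the array object header adds a few more bytes.
byte[]  packed = new byte[1_000_000];  // ~1 MB of element data
float[] plain  = new float[1_000_000]; // ~4 MB of element data
```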

Then you have to ask yourself if the effort of achieving the space reduction is really worth it. Have you measured how many of these scaled byte values there are going to be? Have you compared it against the other memory usage in your application? Do you even know how many of these scaled byte values need to be represented?

I also want to avoid as much computational overhead as possible, because those values are going to be into a lot of computation as well.

There's the problem. Arithmetic with scaled values will require extra instructions, especially if you want to detect overflow / underflow. That will tend to make your application slower.


I would be inclined to implement the application simply using float which will take care of all of the overflow and underflow issues automatically. Then run the application on real data to see how fast it is, and how much memory it uses:

  • If both are acceptable, leave it alone.
  • If memory usage is too high or the code is too slow, THEN look at ways to fix this. If you decide to try the scaled-number approach:
    • implement the key computations using both float and byte,
    • test until the scaled arithmetic code is correct, and
    • benchmark both versions carefully to quantify the differences.
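A rough sketch of what that comparison could look like, assuming values scaled by 127 into a byte[] (for serious measurements prefer a harness such as JMH; Bench, dotFloat and dotQ7 are illustrative names):

```java
import java.util.Random;

// Rough micro-benchmark sketch: dot product over float[] vs. over scaled byte[].
public final class Bench {
    static float dotFloat(float[] a, float[] b) {
        float acc = 0f;
        for (int i = 0; i < a.length; i++) acc += a[i] * b[i];
        return acc;
    }

    // Scaled-byte version: bytes widen to int automatically; rescale once at the end.
    static float dotQ7(byte[] a, byte[] b) {
        long acc = 0;
        for (int i = 0; i < a.length; i++) acc += a[i] * b[i];
        return acc / (127f * 127f);
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        float[] fa = new float[n], fb = new float[n];
        byte[] ba = new byte[n], bb = new byte[n];
        Random r = new Random(42);
        for (int i = 0; i < n; i++) {
            float v = r.nextFloat() * 2 - 1, w = r.nextFloat() * 2 - 1;
            fa[i] = v; fb[i] = w;
            ba[i] = (byte) Math.round(v * 127); bb[i] = (byte) Math.round(w * 127);
        }
        long t0 = System.nanoTime();
        float f = dotFloat(fa, fb);
        long t1 = System.nanoTime();
        float q = dotQ7(ba, bb);
        long t2 = System.nanoTime();
        System.out.printf("float: %.3f ms  scaled byte: %.3f ms  results %.4f vs %.4f%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6, f, q);
    }
}
```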

I can't predict what the results will be. But I can tell you that a lot of people waste time optimizing code that doesn't need to be optimized. Don't make that mistake - don't optimize prematurely.

OTHER TIPS

If I understand your problem correctly, the following code should work. Declaring the variables final may also help performance.

final byte x = -1;
final byte y = 1;

Remember, with the byte data type you must be a little careful.

byte b1 = x*y; gives a compile error, because byte operands are promoted to int before arithmetic. Cast the result back explicitly:

byte b1 = (byte) (x*y);
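Note that the cast simply truncates to the low 8 bits; it does not check for overflow. A short sketch of both behaviors (variable names are illustrative):

```java
// Arithmetic on bytes is performed in int; the cast truncates back to 8 bits.
byte x = -1, y = 1;
// byte bad = x * y;         // does not compile: x * y has type int
byte b1 = (byte) (x * y);    // OK: b1 == -1
int big = 100 * 2;           // 200 does not fit in a byte
byte wrapped = (byte) big;   // (byte) 200 == -56: the cast wraps silently
```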

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow