Question

I am building an iOS application.

I have the following code:

if(pbCB == 0) { //Don't divide by 0
    c = 1;
} else {
    c = sqrt(pb / pbCB) * PROTANOPIA_WBP;
}

I really want to get rid of the if statement (the code above is within a for-loop).

I know that doing floating-point division by 0 and then casting the result to an unsigned char gives (testing with gdb):

//floating-point division by 0
p 10.0/0
$1 = inf

//casted to an unsigned char
p (unsigned char) (10.0/0)
$2 = 0 '\000'

What I'm wondering is whether there is a way to change the definition of division by 0 so that it returns 1. I was told by a professor that this is a hardware/architecture issue and that there is no way to do it, but I wanted to see if maybe that wasn't the case. Thanks for any answers/thoughts/advice.


Solution 2

No, you can't. Division by zero is specified by the IEEE 754 floating-point standard and implemented in the hardware.
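As a quick sanity check of that behavior outside of gdb, here is a minimal C sketch (not part of the original answer) showing that dividing a finite double by zero produces infinity, just as the standard specifies:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Use variables rather than literals so the compiler does not
       fold or warn about the constant expression 10.0/0. */
    double x = 10.0;
    double y = 0.0;
    double q = x / y;   /* IEEE 754: finite / +0.0 yields +infinity */

    printf("10.0 / 0.0 = %f\n", q);        /* prints "inf" */
    printf("isinf(q)   = %d\n", isinf(q)); /* prints a nonzero value */
    return 0;
}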

OTHER TIPS

As Alan Stokes mentioned, you cannot efficiently change how a divide-by-zero is handled.

But depending on the semantics and ranges of the values you're working with, it may be possible to lift and/or scale your inputs so that no zeros remain.

EDIT: By semantics, I mean the range of values actually produced by the camera in question. Many digital cameras are unable to produce the full RGB range. If yours doesn't, you can use that information to shift your inputs into the desired range.

If you don't have that sort of information, or your camera does indeed produce the full (0,0,0)-(255,255,255) range, the other option is to promote your inputs to floats and shift and/or scale them as desired in that format. This requires a little more computation, but it may be cheaper than branch mispredictions. Be sure to measure the effect on representative input samples before making any final decisions.
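As one possible sketch of that "lift your inputs" idea (my own illustration, not from the original answer): biasing both operands by a small constant keeps the denominator nonzero and removes the branch entirely. Note that this slightly perturbs the ratio compared to the original if/else, so you would need to check that the visual difference is acceptable for your image processing. EPSILON and protanopia_term are hypothetical names; pb, pbCB, and PROTANOPIA_WBP come from the question.

#include <math.h>

/* Lifting both inputs by EPSILON guarantees a nonzero denominator,
   so no zero check is needed. The result differs slightly from the
   branching version for small pbCB; measure before adopting it. */
#define EPSILON 1.0f

static inline float protanopia_term(float pb, float pbCB, float PROTANOPIA_WBP) {
    return sqrtf((pb + EPSILON) / (pbCB + EPSILON)) * PROTANOPIA_WBP;
}

In the original loop this single expression would replace both branches of the if statement, but only after confirming on representative images that the biased output is indistinguishable for your purposes.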

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow