Question

HP-UX's libc provides the function fesetflushtozero() to switch floating-point underflow behavior between “gradual underflow” and “flush to zero”.
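For reference, a minimal sketch of the HP-UX call (the exact prototype is whatever fesetflushtozero(3M) declares; the nonzero-argument convention below is an assumption):

    #include <fenv.h>

    /* Assumed convention: nonzero enables flush-to-zero,
       zero restores gradual underflow -- check fesetflushtozero(3M). */
    void set_flush_to_zero(int on) {
        fesetflushtozero(on);
    }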

Despite combing through the documentation and man pages of several Unix libcs (including glibc), I have yet to find how to achieve the same thing in other Unices. I'm particularly interested in Linux/glibc, Solaris, and AIX.

Solution

As you have doubtless noted, there’s no standard way to do this (for that matter, there’s no standard definition of “flush to zero”, nor any requirement that hardware implement it). So all of the means of doing this are platform-specific. To add a few more to the list, since this is a useful reference:

  • OSX / Intel: fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV). Note that this only affects arithmetic done in float or double, which is done using SSE2 (hence the name); long double arithmetic is performed using the legacy x87 instructions, which do not support flushing.

  • iOS / arm: On 32-bit ARM under iOS, flush-to-zero is the default mode. You can turn it off for VFP instructions (but not for NEON) by clearing the __fpscr_flush_to_zero bit in a fenv_t object and installing that environment with fesetenv().

  • iOS / arm64: fesetenv(FE_DFL_DISABLE_DENORMS_ENV). A combined sketch of these Apple cases follows the list.
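Here is that sketch, assuming the environment macros above are available on the respective targets:

    #include <fenv.h>

    /* Disable gradual underflow where the platform allows it.
       Macro availability is assumed per target, as described above. */
    void disable_denormals(void) {
    #if defined(__APPLE__) && (defined(__x86_64__) || defined(__i386__))
        /* Only affects SSE-based float/double arithmetic;
           x87 long double keeps gradual underflow. */
        fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);
    #elif defined(__APPLE__) && defined(__arm64__)
        fesetenv(FE_DFL_DISABLE_DENORMS_ENV);
    #endif
    }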

OTHER TIPS

I'm wondering why the standard C99/UNIX function fesetround(FE_TOWARDZERO) isn't suitable for you; it's the same on all of these platforms, including HP-UX.
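For reference, a minimal sketch of that portable call (C99 <fenv.h>):

    #include <fenv.h>
    #pragma STDC FENV_ACCESS ON

    int round_toward_zero(void) {
        /* Returns 0 on success, nonzero if the mode is unsupported. */
        return fesetround(FE_TOWARDZERO);
    }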

Beyond that, the options are platform-specific; for AIX I couldn't find anything other than fesetround() as above.
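As a concrete illustration of such platform-specific control on Linux/x86 (not from the original answers, though the MXCSR intrinsics below are standard Intel ones), flush-to-zero and denormals-are-zero can be set like this:

    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE (SSE) */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE (SSE3) */

    int main(void) {
        /* FTZ: results that would underflow are replaced with zero. */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        /* DAZ: subnormal inputs are treated as zero. */
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
        /* Affects SSE arithmetic only, not legacy x87. */
        return 0;
    }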
