The Unicode specification indicates a preference for big-endian, and non-Microsoft software often uses that by default. In particular, when UTF-16 is serialized without a BOM and no higher-level protocol declares a byte order (the way network protocols mandate network byte order), the specification says the text is to be interpreted as big-endian. However, some software does not follow the specification and assumes little-endian when there is no BOM, so adding a BOM can let such software work correctly.
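For illustration, here is a minimal sketch (in C) of how a decoder might apply that rule: look for a BOM at the start of the buffer and, if none is found, fall back to big-endian. The function names and buffer handling are hypothetical, not taken from any particular library.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef enum { UTF16_BE, UTF16_LE } utf16_order;

/* Detect the byte order of a UTF-16 buffer.
 * If a BOM (U+FEFF) is present, it decides the order and is skipped;
 * otherwise the Unicode default of big-endian is assumed. */
static utf16_order detect_utf16_order(const uint8_t *buf, size_t len, size_t *bom_len)
{
    *bom_len = 0;
    if (len >= 2) {
        if (buf[0] == 0xFE && buf[1] == 0xFF) { *bom_len = 2; return UTF16_BE; } /* BOM as-is: big-endian */
        if (buf[0] == 0xFF && buf[1] == 0xFE) { *bom_len = 2; return UTF16_LE; } /* byte-swapped BOM: little-endian */
    }
    return UTF16_BE; /* no BOM, no higher-level protocol: big-endian */
}

/* Read one 16-bit code unit at byte offset i (i even, i+1 < len). */
static uint16_t read_unit(const uint8_t *buf, size_t i, utf16_order order)
{
    return (order == UTF16_BE)
        ? (uint16_t)((buf[i] << 8) | buf[i + 1])
        : (uint16_t)((buf[i + 1] << 8) | buf[i]);
}

int main(void)
{
    /* "A" (U+0041) encoded as UTF-16 with no BOM: treated as big-endian */
    const uint8_t no_bom[] = { 0x00, 0x41 };
    size_t skip;
    utf16_order order = detect_utf16_order(no_bom, sizeof no_bom, &skip);
    printf("U+%04X (%s)\n", read_unit(no_bom, skip, order),
           order == UTF16_BE ? "big-endian assumed" : "little-endian");
    return 0;
}
```

Software that instead assumes little-endian for BOM-less input would misread the bytes above as U+4100, which is why adding a BOM is a pragmatic workaround.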
Isn't libiconv supposed to use platform-dependent endianness for the default utf-16 conversion?
Not as far as I know. What makes you think this?