Question

I know the WAV file format uses signed integers for 16-bit samples. It also stores them in little-endian order, meaning the lowest 8 bits come first, then the next 8, and so on. Is the sign bit in the first byte, or is it always the most significant bit (the highest-valued bit)?

Meaning:
Which one is the sign bit in the WAV format?

++---+---+---+---+---+---+---+---++---+---+---+---+---+---+---+---++
|| a | b | c | d | e | f | g | h || i | j | k | l | m | n | o | p ||
++---+---+---+---+---+---+---+---++---+---+---+---+---+---+---+---++
--------------------------- here -> ^ ------------- or here? -> ^

i or p?


Solution

The sign bit is the most significant bit on any two's-complement machine (like the x86), and thus will be in the last byte in a little-endian format.

Just because I didn't want to be the one not including ASCII art... :)

+---------------------------------------+---------------------------------------+
|              first byte               |              second byte              |
+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 | 13 | 14 | 15 |
+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+
   ^--- lsb                                               msb / sign bit -----^

Bits are basically represented "backwards" from how most people think about them, which is why the high byte is last. But it's all consistent; "bit 15" comes after "bit 0" just as addresses ought to work, and is still the most significant bit of the most significant byte of the word. You don't have to do any bit twiddling, because the hardware talks in terms of bytes at all but the lowest levels -- so when you read a byte, it looks exactly like you'd expect. Just look at the most significant bit of your word (or the last byte of it, if you're reading a byte at a time), and there's your sign bit.
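
To make that concrete, here is a minimal C sketch (the function name and the sample bytes are mine, purely for illustration) that rebuilds a sample from the two bytes in file order and tests the sign bit on the high byte:

#include <stdint.h>
#include <stdio.h>

/* Assemble a signed 16-bit sample from two bytes read in file order:
   low byte first, high byte second. The sign bit is bit 7 of the
   high (second) byte. */
static int16_t sample_from_le_bytes(uint8_t lo, uint8_t hi)
{
    return (int16_t)((uint16_t)lo | ((uint16_t)hi << 8));
}

int main(void)
{
    uint8_t lo = 0x34, hi = 0xF2;      /* hypothetical bytes from a data chunk */
    int16_t sample = sample_from_le_bytes(lo, hi);

    printf("sample   = %d\n", sample);                       /* -3532 */
    printf("negative = %s\n", (hi & 0x80) ? "yes" : "no");   /* yes   */
    return 0;
}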

Note, though, that two's complement doesn't exactly designate a particular bit as the "sign bit". That's just a very convenient side effect of how the numbers are represented. For 16-bit numbers, -x is equal to 65536-x rather than 32768+x (which would be the case if the upper bit were strictly the sign).
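
A quick illustration of that arithmetic (just a sketch, assuming a two's-complement host, which is what you will have in practice):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t  x = -5;
    uint16_t bits = (uint16_t)x;   /* reinterpret the stored bit pattern */

    /* Two's complement: -5 is stored as 65536 - 5 = 65531 = 0xFFFB,
       not as 32768 + 5 = 0x8005 (which would be sign-and-magnitude). */
    printf("%d is stored as 0x%04X (%u)\n", x, bits, (unsigned)bits);
    return 0;
}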

OTHER TIPS

signed int, little endian:

 byte 1 (lsb)      byte 2 (msb)
----------------------------------
7|6|5|4|3|2|1|0 | 7|6|5|4|3|2|1|0|
----------------------------------
                  ^
                  |
                Sign bit

You only need to concern yourself with byte order when reading or writing a short int to some external medium. Within your program, the sign bit is the most significant bit of the short, whether you're on a big- or little-endian platform.
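
For the external-media case, a portable approach is to split the short into bytes yourself rather than dumping its in-memory representation; a small sketch (the function name and the stdio handling are my own assumptions, not part of the answer):

#include <stdint.h>
#include <stdio.h>

/* Write one 16-bit sample in little-endian order: low byte first,
   then the high byte that carries the sign bit. Works the same on
   big- and little-endian hosts because it never relies on the
   in-memory layout of the short. */
static int write_sample_le(FILE *f, int16_t sample)
{
    uint8_t bytes[2];
    bytes[0] = (uint8_t)((uint16_t)sample & 0xFF);   /* low byte  */
    bytes[1] = (uint8_t)((uint16_t)sample >> 8);     /* high byte */
    return fwrite(bytes, 1, sizeof bytes, f) == sizeof bytes ? 0 : -1;
}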

Licensed under: CC-BY-SA with attribution