Question

I am converting audio files of several different formats to MP3 using SoX. According to the docs, you can use the -C argument to specify compression options such as the bitrate and quality, with the quality given after the decimal point, for example:

sox input.wav -C 128.01 output.mp3 (highest quality, slower)

sox input.wav -C 128.99 output.mp3 (lowest quality, faster)

I expected the second one to sound terrible; however, the two outputs sound exactly the same to me. If that is the case, I do not understand why one encodes so much more slowly, or what I would gain by setting the compression to a higher "quality".

Can someone please tell me if there is a real difference or advantage to using higher quality compression versus lower quality?

P.S. I also checked the size of each output file, and both are exactly the same size, but when hashed, each file produces a different hash.
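The kind of check I mean, with illustrative file names:

sox input.wav -C 128.01 high.mp3
sox input.wav -C 128.99 low.mp3
ls -l high.mp3 low.mp3      # sizes come out identical
md5sum high.mp3 low.mp3     # hashes differ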


Solution

The parameters are passed on to LAME. According to the LAME documentation (see the -q / "algorithm quality selection" option), the quality value affects the noise shaping and the psychoacoustic model used. The documentation recommends a quality of 2 (i.e. -C 128.2 in SoX), noting that 0 and 1 are much slower but hardly any better.

However, the main factor determining the quality remains the bit rate. It is therefore not too surprising that there is no noticeable difference in your case.
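If it helps to see what is going on underneath, the rough LAME equivalents of those SoX settings look something like this (a sketch only; SoX uses the LAME library directly rather than the command-line tool, so the exact internal settings may differ):

lame -b 128 -q 0 input.wav out_q0.mp3   # quality 0: slowest, most thorough noise shaping
lame -b 128 -q 2 input.wav out_q2.mp3   # quality 2: recommended trade-off
lame -b 128 -q 9 input.wav out_q9.mp3   # quality 9: fastest, least tuning

All three target the same 128 kbps bit rate, which is why the audible difference stays small.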

OTHER TIPS

For me, it is much faster with the simple form:

time sox input.mp3 -C 128 output.mp3

real    0m7.417s
user    0m7.334s
sys     0m0.057s

time sox input.mp3 -C 128.02 output.mp3

real    0m39.805s
user    0m39.430s
sys     0m0.205s
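To compare several quality settings in one run, a loop along these lines works (file names are placeholders):

for q in 2 5 9; do
  echo "quality $q"
  time sox input.mp3 -C 128.$q out_q$q.mp3
done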

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow