What is the maximum audio bitrate humans can distinguish? [closed]
Some audio bitrates go as high as 256 kbps, but I have listened to very, very clear music at 92 kbps. This made me suspect that beyond a certain bitrate of x kbps, the average human ear cannot tell the difference at all. What is x?
Though the question is not Ubuntu-specific, it came up because of an Ogg open-format question that I split:
Advantages of mp3 to ogg
Bit rate per se is not distinguishable, because it is not a measurement of the audio information we hear. It is the size of the information left after the encoder removes what it considers inaudible (and thus "disposable", with no or minimal perceived quality loss).
Good encoders have good psychoacoustic algorithms, meaning they wisely choose how to remove high frequencies and frequencies whose amplitude is too small to be perceived, then pack the "chopped" wave into the given bitrate. The higher the bitrate, the less an encoder needs to chop off from the original audio.
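As a rough illustration of that "chopping" idea (a toy sketch only; real encoders use MDCT filter banks and per-band masking models, and the names here, toy_encode, cutoff_hz, floor_db, are made up for the example), a few lines of Python show how dropping frequencies above a cutoff or below an amplitude floor shrinks what is left to store:

```python
# Toy illustration only: drop spectral components a crude "model" deems inaudible.
import numpy as np

def toy_encode(samples, sample_rate, cutoff_hz=16000, floor_db=-60):
    """Keep only frequency components below cutoff_hz and above floor_db."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    magnitude = np.abs(spectrum)
    loudest = max(magnitude.max(), 1e-12)
    level_db = 20 * np.log10(np.maximum(magnitude, 1e-12) / loudest)

    keep = (freqs <= cutoff_hz) & (level_db >= floor_db)  # crude "psychoacoustic" decision
    print(f"kept {keep.sum()} of {len(spectrum)} components "
          f"({100 * keep.sum() / len(spectrum):.1f}%)")
    return spectrum[keep]  # the fewer components kept, the fewer bits needed to store them

# A loud 1 kHz tone plus a faint 18 kHz tone: the faint high tone gets discarded.
sr = 44100
t = np.arange(sr) / sr
toy_encode(np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 18000 * t), sr)
```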
But how distinguishable the removed parts are has more to do with the decisions made by the algorithm (its psychoacoustic model) than with the actual bitrate. A poor encoder (like the ancient Xing) needs a higher bitrate than a good one (like a modern LAME) to achieve the same level of perceived quality, because given the same bits it chooses poorly what to encode and what to discard.
So do not think of MP3 bitrates the same way you think of the CD bitrate. On a CD, the analog sound wave is simply digitized; nothing is removed, so the more bits you have, the more accurate your sound wave will be: there is a direct mapping from bitrate to accuracy. That is not possible with MP3 (or Ogg) encoding, or with any lossy encoding that relies on a psychoacoustic model.
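For reference, the CD figure is simple arithmetic: sample rate × bits per sample × channels. A quick back-of-the-envelope comparison against the highest standard MP3 bitrate:

```python
# CD audio: 44,100 samples/s, 16 bits per sample, 2 channels.
cd_bitrate_kbps = 44_100 * 16 * 2 / 1000   # 1411.2 kbps
max_mp3_kbps = 320                         # highest standard MP3 bitrate

print(cd_bitrate_kbps)                 # 1411.2
print(cd_bitrate_kbps / max_mp3_kbps)  # ~4.4x the data per second of a 320 kbps MP3
```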
Also, "distinguishable" is subjective: human hearing and high-frequency sensitivity deteriorates with age... so you may enjoy 96kpbs now, but 10 (or 20) years ago you would certainly "need" more. Different people distinguish high-frequencies (or small amplitudes) differently.. so for them a given encoder might be better than another one, even with a lower bitrate. Also, the equipment and environment plays a key role: listening to music in a car in the road is not the same as in a quiet room with high-quality headphones.
There are other factors too, especially VBR, which means the bitrate constantly changes: going up to 320 kbps for parts of the song that require more complex encoding, and dropping to 96 kbps in simpler passages where little needs to be encoded. So a VBR file averaging 128 kbps will usually have much higher quality than a 160 (or even 192) constant bitrate (CBR) one.
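To make "averaging 128 kbps" concrete, here is a tiny illustration with made-up per-second bitrates (the numbers an encoder actually picks depend entirely on the material):

```python
# Hypothetical VBR allocation for a 2-minute track: 80 s of simple material
# encoded at 96 kbps, 40 s of busy material encoded at 192 kbps.
per_second_kbps = [96] * 80 + [192] * 40

average_kbps = sum(per_second_kbps) / len(per_second_kbps)
print(average_kbps)  # 128.0: the complex passages get the bits, the simple ones don't
```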
That said, 128 kbps VBR is perfectly fine for me, even with good-quality headphones. For CBR, 192 kbps is enough for transparency (meaning I can't distinguish it from the original, lossless CD audio). I'm 33 and not an audiophile, so your mileage may vary.
An analogy:
A good analogy just came to me; it may help explain why it's impossible to determine that "X kbps is more than the human ear can distinguish":
Think of audio as a house and its furniture. You're moving to another house. Your moving truck is your bitrate: the larger it is, the more furniture you will be able to take to your new home. But since it's a one-way trip and the truck isn't big enough to hold everything, something will always be left behind, and therefore lost.
Will you be able to tell that something was lost? Do you agree that it depends a lot more on what was chosen to be left behind than on how big the truck was, even though a bigger truck will indeed help?
Do you agree that it's impossible to measure how big the truck must be for the loss to be "indistinguishable", unless the truck is big enough to hold all your furniture? (That would be lossless encoding, like FLAC, and that's about 5 times bigger than the largest MP3.)
Final words:
Some may say that objective measurement of human hearing thresholds is possible. True: you can measure how high a frequency must be before it becomes inaudible, or how many dB a given sound can sit below the "dominant" one before it becomes indistinguishable. But you cannot directly translate that into bitrates, because how many bits are required to encode (or discard) that content depends on how much of it is present in a given song.
Hydrogenaudio's forum has run several ABX tests on this issue. Most people can't consistently tell the difference between uncompressed source material and compressed files in the ~160 kbps VBR range, but some music is really hard to compress accurately, and some people are adept at hearing the difference in lossy encodings even at 320 kbps for certain music. The answer is: it depends.
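If you want to check your own ears, the logic behind an ABX run is easy to script. The sketch below (Python) only handles randomization and scoring; play_clip() is a hypothetical placeholder you would replace with something that actually plays your lossless and lossy versions:

```python
# Minimal ABX trial logic: X is randomly A (lossless) or B (lossy) each round;
# the listener must say which one X was. Playback itself is left to you.
import random
from math import comb

def play_clip(label):
    # Hypothetical placeholder: play the clip named `label` however you like
    # (e.g. via an external player). Here it just prints what would be played.
    print(f"[playing {label}]")

def abx_test(trials=16):
    correct = 0
    for i in range(1, trials + 1):
        x_is_a = random.random() < 0.5
        play_clip("A (lossless)")
        play_clip("B (lossy)")
        play_clip("X")  # secretly A or B
        guess = input(f"Trial {i}: was X 'a' or 'b'? ").strip().lower()
        if guess == ("a" if x_is_a else "b"):
            correct += 1

    # Probability of doing at least this well by pure guessing (binomial tail).
    p_guessing = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
    print(f"{correct}/{trials} correct; chance of guessing this well: {p_guessing:.3f}")

abx_test()
```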
Do you mean that 192 kbps is very, very clear? 92 kbps is a somewhat low bit rate, and I can easily tell the difference. Unless you're listening in a noisy environment where quality doesn't matter as much (like listening to a portable device on the bus), I would avoid ever going below ~128 kbps VBR for stereo music. You're sacrificing quality to save a little space, and the trade-off is not worth it in my opinion.
You can go down to some pretty low bit rates (under 80 kbps, say) and maintain acceptable quality for mono music and speech.