Why is the display resolution not exactly 16:9 or 4:3?

Most displays are advertised with either a 16:9 or a 4:3 aspect ratio. However, if you compare the actual resolution with the advertised ratio, it often matches neither.

For example, the resolution of my notebook display is 1366×768.
But 1366/768 = 683/384 ≈ 1.779, which is not exactly 16/9 ≈ 1.778.
Another common resolution is 1920×1200, which reduces to 8/5.

But for some resolutions it's correct:

  • 1024/768 = 4/3
  • 800/600 = 4/3

Is there a technical reason / user experience reason for this? Why do displays have other ratios than what they get advertised?

(I assume that every pixel is a perfect square. Is this assumption wrong?)


Solution 1:

Not every display resolution has to be 16:9 or 4:3.

My laptop and my TV have the well-known 16:9 ratio.
My regular display is marketed as 16:10, which the chart below lists as 8:5 (the same ratio). The broken screen that still sits on top of the locker behind me has a 5:4 aspect ratio.

The chart below shows most of the standard resolutions that are available.

[chart of standard display resolutions omitted; the original source link has not been preserved]

I actually like 16:10 more than 16:9 and would pay a fair amount more to get one of those instead. That is personal opinion, but it should illustrate why there are not just two but many more standards to choose from.
Why do I like it so much? Not all content is 16:9; there are plenty of 4:3 shows out there.
When playing games, I also prefer having a bit more vertical space for menus, HUDs, etc.
This of course comes down to personal preference. Preferences differ between individuals, and so do displays.

Why are displays marketed as 16:9 if they are not?
If this is done knowingly, I'd call that a scam.

Solution 2:

An exact 16:9 resolution is only possible if the height is divisible by 9, the denominator of the aspect ratio you want. 768 isn't divisible by 9, so there is no integer 16:9 width for that height. So why wasn't 1360×765 chosen instead?
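
To make that concrete, here is a minimal Python sketch (the range of heights is just an illustrative choice) showing which heights near 768 admit an exact integer 16:9 width:

```python
from fractions import Fraction

# An exact 16:9 width exists only when the height is divisible by 9.
for height in range(760, 777):
    width = Fraction(16, 9) * height
    note = "exact" if width.denominator == 1 else "not an integer"
    print(f"height {height}: 16:9 width = {float(width):.2f} ({note})")
```

In that range only 765 and 774 give an integer width; 768 does not.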

Because the dimensions of display resolutions tend to be a power of 2 (or at least a multiple of as large a power of 2 as possible), probably because powers of 2 work better for a binary computer:

  • 2D image formats and video codecs process images in blocks rather than pixel-by-pixel or line-by-line. The block sizes are almost always powers of 2, such as 8×8 or 16×16 (less frequently 4×8, 8×16 or 4×16), because they're easier to lay out in memory and better suited to the CPU's SIMD unit. That's why you see blocky artifacts when viewing a low-quality image or video file.
  • 3D graphics renderers often use a technique called mipmapping, which relies on a chain of pre-scaled copies of a texture, each level half the size of the previous one (hence power-of-2 dimensions divide evenly), to increase rendering speed and reduce aliasing artifacts; see the sketch right after this list. If you're interested, check out How does Mipmapping improve performance?
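
To see why power-of-2 sizes are convenient for mipmapping, here is a small sketch of a mip chain (plain integer halving is assumed; real GPUs handle non-power-of-2 textures too, but with similar rounding):

```python
def mip_chain(width, height):
    """List successive mip levels, halving each dimension down to 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

print(mip_chain(256, 256))   # power-of-2 texture: every level divides evenly
print(mip_chain(1366, 768))  # non-power-of-2: odd intermediate sizes need rounding
```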

So regardless of the graphics type, power-of-2 dimensions ease the job of the encoder/decoder and of the GPU/CPU. Images whose side lengths are not a multiple of the block size have the corresponding side rounded up to the next block boundary (you'll see this later with 1920×1080), and you end up wasting some memory at the edges storing those dummy pixels. Transforming such odd-sized images can also introduce artifacts (which are sometimes unavoidable) because of the dummy values. For example, rotating a JPEG whose dimensions are not a multiple of the block size will introduce noise into the result:

Rotations where the image is not a multiple of 8 or 16, which value depends upon the chroma subsampling, are not lossless. Rotating such an image causes the blocks to be recomputed which results in loss of quality.

https://en.wikipedia.org/wiki/JPEG#Lossless_editing

See

  • Lossless rotation of JPEG images: There's more than one way to rotate a cat
  • Can a JPEG compressed image be rotated without a loss in quality?
  • Are "Windows Photo Viewer" rotations lossless?
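
To make the block rounding concrete, here's a minimal sketch, assuming a 16-pixel block size as used by many MPEG-style codecs:

```python
def padded(size, block=16):
    """Round a dimension up to the next multiple of the block size."""
    return -(-size // block) * block  # ceiling division

for w, h in [(1365, 765), (1366, 768), (853, 480)]:
    pw, ph = padded(w), padded(h)
    print(f"{w}x{h} is stored as {pw}x{ph}, wasting {pw * ph - w * h} dummy pixels")
```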

Now obviously 1360×765 would be precisely 16:9, but 765 is odd, so it isn't divisible by any power of 2, while 768 is divisible by 256 (2⁸), so 768 is the better choice for the height. Moreover, using 768 as the height has the advantage of being able to display the old 1024×768 natively, without scaling.

768 × 16/9 = 1365.333..., so rounding down gives the width closest to 16:9. However, 1365 is odd, so people round up to 1366×768, which is still quite close to 16:9. But 1366 is only divisible by 2, so some screen manufacturers use 1360×768 instead, since 1360 is divisible by 16, which is much better. 1360/768 = 1.7708333..., which is still a reasonable approximation of 16/9 ≈ 1.7778. 1360×768 also has the bonus that a frame fits inside 1 MiB of RAM at one byte per pixel (whereas 1366×768 doesn't). 1344×768, another less commonly used resolution, is also divisible by 16.
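
A quick sanity check of those numbers, assuming one byte per pixel for the framebuffer comparison:

```python
height = 768
print(height * 16 / 9)  # 1365.33... -> the ideal 16:9 width for a 768-line panel

for width in (1366, 1360, 1344):
    ratio = width / height
    fits_1mib = width * height <= 1 << 20  # 1 MiB at one byte per pixel
    print(f"{width}x{height}: ratio {ratio:.4f}, "
          f"divisible by 16: {width % 16 == 0}, fits in 1 MiB: {fits_1mib}")
```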

WXGA can also refer to a 1360×768 resolution (and some others that are less common), which was made to reduce costs in integrated circuits. 1366×768 8-bit pixels would take just above 1 MiB to be stored (1024.5 KiB), so that would not fit into an 8-Mbit memory chip and you would have to have a 16-Mbit memory chip just to store a few pixels. That is why something a bit lower than 1366 was chosen. Why 1360? Because you can divide it by 8 (or even 16), which is far simpler to handle when processing graphics (and can lead to optimized algorithms).

Why Does the 1366×768 Screen Resolution Exist?

Many 12 MP cameras have an effective resolution of 4000×3000. When shooting in 16:9, instead of using 4000×2250, which is exactly 16:9, they use 4000×2248, because 2248 is divisible by 8 (a common block size in many video codecs), while 2250 is only divisible by 2.

Some Kodak cameras use 4000×2256 too, since 2256 is divisible by 16 and 4000/2256 still approximates 16/9 reasonably well. When shooting in 3:2 they use 4000×2664, not 4000×2667 or 4000×2666 (which are closer to 3:2), for the same reason.
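
The pattern is easy to verify; a small sketch that reduces each resolution and reports how far its height can be divided by 2:

```python
from math import gcd

def describe(w, h):
    reduced = f"{w // gcd(w, h)}:{h // gcd(w, h)}"
    divisor = next(d for d in (16, 8, 4, 2, 1) if h % d == 0)
    print(f"{w}x{h}: ratio {w / h:.4f} (= {reduced}), height divisible by {divisor}")

for w, h in [(4000, 2250), (4000, 2248), (4000, 2256), (4000, 2664), (4000, 2667)]:
    describe(w, h)
```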

Another example is 848×480 (divisible by 16) or 854×480 (divisible only by 2) instead of 853×480:

The 480 denotes a vertical resolution of 480 pixels, usually with a horizontal resolution of 640 pixels and 4:3 aspect ratio (480 × 4/3 = 640) or a horizontal resolution of 854 or less (848 should be used for mod16 compatibility) pixels for an approximate 16:9 aspect ratio (480 × 16/9 = 853.3).

https://en.wikipedia.org/wiki/480p
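
The 480-line widths quoted above fall out of the same arithmetic; a quick check (rounding to the nearest multiple of 16 is the assumption here):

```python
height = 480
for name, ratio in [("4:3", 4 / 3), ("16:9", 16 / 9)]:
    ideal = height * ratio
    mod16 = round(ideal / 16) * 16  # nearest width that keeps 16-pixel blocks whole
    print(f"{name}: ideal width {ideal:.1f}, nearest mod-16 width {mod16}")
```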

And this is true for other resolutions too. You'll almost never find image resolutions with odd dimensions; most are at least divisible by 4, or better, by 8. The full HD resolution, 1920×1080, has a height that is not divisible by 16, so many codecs round it up to 1920×1088, with 8 dummy lines of pixels, and crop it back down when displaying or after processing. Sometimes it isn't cropped, which is why there are many 1920×1088 videos on the net, and some files are reported as 1080 but are actually 1088 inside.

You may also find an option to crop 1088 to 1080 in various video decoders' settings.

1080-line video is actually encoded with 1920×1088 pixel frames, but the last eight lines are discarded prior to display. This is due to a restriction of the MPEG-2 video format, which requires the height of the picture in luma samples (i.e. pixels) to be divisible by 16.

https://en.wikipedia.org/wiki/ATSC_standards#MPEG-2
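
The 1088 figure follows directly from the macroblock constraint quoted above:

```python
block = 16                       # MPEG-2 macroblock height in luma samples
height = 1080
coded = -(-height // block) * block
print(coded, coded - height)     # 1088, with 8 dummy lines discarded before display
```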


Back to your example: 1920/1200 = 8/5 is not strange at all, because it's the common 16:10 aspect ratio, which is close to the golden ratio. You can find it in 1280×800, 640×400, 2560×1600, 1440×900, 1680×1050... No one would advertise these as 16:9 because they're clearly 16:10.

I assume that every pixel is a perfect square. Is this assumption wrong?

That assumption is wrong. In the past, pixels were often not square but rectangular. Other pixel arrangements, such as hexagonal grids, also exist, although they are not very common. See Why are pixels square?

Solution 3:

Yeah, it's to do with manufacturing.

We already made loads of 1024×768 panels, so why not just make them wider so they become 1366×768?

I'm not sure about the other one; I haven't come across panels with that resolution.