What is the difference between -nativeScale and -scale on UIScreen in iOS 8+?

Solution 1:

Both scale and nativeScale tell you how many pixels a point corresponds to. But keep in mind that points are rendered to an intermediate buffer of pixels, which is then resized to match the screen resolution. So, when we ask, "1 pt corresponds to how many pixels?" it might mean intermediate pixels (scale) or the final pixels (nativeScale).

On an iPhone 6 Plus (or an equivalently sized device), scale is 3, but nativeScale is about 2.6. This is because content is rendered at 3x (1 point = 3 pixels) into a 1242×2208 buffer, which is then scaled down to the 1080×1920 panel, so 1 point ends up covering roughly 2.6 physical pixels (1080/1242 × 3 ≈ 2.608).

So scale deals with the intermediate bitmap, and nativeScale deals with the final bitmap.


This is without Display Zoom. If you enable Display Zoom, scale remains 3, since the intermediate buffer is still rendered at 1 point = 3 pixels, but nativeScale becomes about 2.8.
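To observe both values on a device, a minimal Swift sketch (assuming a UIKit app, reading UIScreen.main) looks like this:

```swift
import UIKit

// Print both scaling factors. On an iPhone 6 Plus without Display Zoom
// this shows scale = 3.0 and nativeScale ≈ 2.6; with Display Zoom
// enabled, nativeScale rises to ≈ 2.8 while scale stays at 3.0.
let screen = UIScreen.main
print("scale: \(screen.scale)")             // points → intermediate pixels
print("nativeScale: \(screen.nativeScale)") // points → physical pixels
```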

So, if you want to identify the physical screen regardless of the user's zoom setting, use scale. For example, if you have an app that runs only on the iPhone Plus, you could do:

if UIScreen.main.scale != 3 {
    print("Not supported")
}

Not:

if UIScreen.main.nativeScale != 2.6 {
    print("Not supported")
}

The second fragment fails to do what was expected when the user enables Display Zoom, because nativeScale changes to about 2.8 while scale stays at 3 (and an exact floating-point comparison against 2.6 is fragile anyway, since the actual value is closer to 2.608).

Solution 2:

The nativeBounds and nativeScale properties are mostly meant for use with OpenGL. They represent the screen's actual pixel size and the points-to-pixels factor you'd use to draw at precisely that resolution, letting you avoid the extra rendering cost of drawing at the virtual 1242×2208 size. For instance, with a CAEAGLLayer, you'd do this:

theGLLayer.contentsScale = [UIScreen mainScreen].nativeScale;

…and then only have to render its content at the size of the nativeBounds, i.e. 1080×1920.
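The same idea in Swift, as a hedged sketch (it assumes the layer backs a full-screen view, e.g. via an overridden layerClass, and the variable names are illustrative):

```swift
import UIKit
import QuartzCore

// Configure a CAEAGLLayer to render at the panel's physical resolution
// rather than the virtual, scale-based size.
let screen = UIScreen.main
let glLayer = CAEAGLLayer()
glLayer.frame = screen.bounds
glLayer.contentsScale = screen.nativeScale  // ≈ 2.6 on an iPhone 6 Plus

// The drawable attached to this layer then only needs to match
// nativeBounds, i.e. 1080×1920 pixels on an iPhone 6 Plus.
let drawableSize = CGSize(width: screen.bounds.width * screen.nativeScale,
                          height: screen.bounds.height * screen.nativeScale)
```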

The sample logs in that question are from the simulator, which, as always, is not guaranteed to behave identically to an actual device.