Transatlantic ping faster than sending a pixel to the screen?

John Carmack tweeted,

I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.

But this is John Carmack.

How can this be true?

To avoid discussions about what exactly is meant in the tweet, here is what I would like answered:

How long does it take, in the best case, to get a single IP packet sent from a server in the US to somewhere in Europe, measured from the time that software triggers the packet to the point that it’s received by software above the driver level?

How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above the driver level changes that pixel’s value?


Even assuming that the transatlantic connection is the finest fibre-optic cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded into an IP packet, get from main memory across to his network card, travel from there through a cable in the wall into another building, probably hop across a few routers there (but let’s assume it needs just a single relay), get photonized across the ocean, be converted back into an electrical impulse by a photosensor, and finally be interpreted by another network card. Let’s stop there.
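
For scale, a back-of-the-envelope floor on the transatlantic leg alone: light in fibre travels at roughly c/1.47, and the distance from the US east coast to western Europe is on the order of 5,500 km (both figures are round assumptions, not measurements), which already puts the one-way propagation delay near 27 ms before any of the hops above are counted:

    #include <stdio.h>

    int main(void)
    {
        const double c = 299792.458;        /* speed of light in vacuum, km/s */
        const double n = 1.47;              /* typical refractive index of optical fibre */
        const double distance_km = 5500.0;  /* assumed US east coast to western Europe */

        double one_way_ms = distance_km / (c / n) * 1000.0;
        printf("propagation floor, one way: %.1f ms\n", one_way_ms);
        /* prints roughly 27 ms; real routes are longer and add switching delay */
        return 0;
    }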

As for the pixel, this is a simple machine word that gets sent across the PCI Express bus, written into a buffer, and then flushed to the screen. Even accounting for the fact that “single pixels” probably result in the whole screen buffer being transmitted to the display, I don’t see how this can be slower: it’s not like the bits are transferred “one by one”; rather, they are consecutive electrical impulses transferred with no latency between them (right?).
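
To put rough numbers on that intuition, here is a sketch assuming a 1920x1080 framebuffer at 4 bytes per pixel, about 8 GB/s of effective PCI Express bandwidth, and a 60 Hz refresh (all assumed figures, not taken from the question). The bus copy is around a millisecond; waiting for the next scanout and for the panel itself is where the time goes:

    #include <stdio.h>

    int main(void)
    {
        const double width = 1920.0, height = 1080.0, bytes_per_px = 4.0;
        const double frame_bytes = width * height * bytes_per_px;   /* ~8.3 MB */

        const double pcie_bytes_per_s = 8e9;   /* assumed effective PCIe bandwidth */
        const double refresh_hz = 60.0;

        printf("frame size:    %.1f MB\n", frame_bytes / 1e6);
        printf("PCIe transfer: %.2f ms\n", frame_bytes / pcie_bytes_per_s * 1000.0);
        printf("one refresh:   %.1f ms\n", 1000.0 / refresh_hz);
        /* the copy is ~1 ms; up to 16.7 ms waiting for scanout plus the
           panel's own response and processing dominate the pixel's latency */
        return 0;
    }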


The time to send a packet to a remote host is half the time reported by ping, which measures a round trip time.

The display I was measuring was a Sony HMZ-T1 head mounted display connected to a PC.

To measure display latency, I have a small program that sits in a spin loop polling a game controller, doing a clear to a different color and swapping buffers whenever a button is pressed. I video record both the game controller and the screen with a 240 fps camera, then count the number of frames between the button being pressed and the screen starting to show a change.
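
The answer does not include the program itself, but a minimal sketch of that kind of test loop might look like the following. It is written against GLFW with vsync disabled; the library, window size, and button mapping are my assumptions, not what was actually used:

    #include <GLFW/glfw3.h>   /* pulls in the OpenGL header by default */

    int main(void)
    {
        if (!glfwInit())
            return 1;

        GLFWwindow *win = glfwCreateWindow(640, 480, "latency test", NULL, NULL);
        if (!win)
            return 1;
        glfwMakeContextCurrent(win);
        glfwSwapInterval(0);                /* vsync off, as in the test */

        int color = 0;
        int was_down = 0;

        while (!glfwWindowShouldClose(win)) {
            glfwPollEvents();

            int count = 0;
            const unsigned char *buttons =
                glfwGetJoystickButtons(GLFW_JOYSTICK_1, &count);
            int down = (buttons && count > 0 && buttons[0] == GLFW_PRESS);

            if (down && !was_down)
                color ^= 1;                 /* toggle clear color on each press */
            was_down = down;

            if (color)
                glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
            else
                glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glfwSwapBuffers(win);           /* push the new frame immediately */
        }

        glfwTerminate();
        return 0;
    }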

The game controller updates at 250 Hz, but there is no direct way to measure the latency on the input path (I wish I could still wire things to a parallel port and use in/out asm instructions). As a control experiment, I do the same test on an old CRT display with a 170 Hz vertical retrace. Aero and multiple monitors can introduce extra latency, but under optimal conditions you will usually see a color change starting at some point on the screen (vsync disabled) two 240 Hz frames after the button goes down. It seems there is 8 ms or so of latency going through the USB HID processing, but I would like to nail this down better in the future.

It is not uncommon to see desktop LCD monitors take 10+ 240 Hz frames to show a change on the screen. The Sony HMZ averaged around 18 frames, or 70+ total milliseconds.
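
Converting those camera-frame counts to time is just the count divided by 240 Hz; a tiny sketch using the numbers quoted above:

    #include <stdio.h>

    int main(void)
    {
        const double camera_fps = 240.0;
        const int crt_frames = 2;    /* best-case CRT result above  */
        const int hmz_frames = 18;   /* Sony HMZ-T1 average above   */

        printf("CRT baseline: %.1f ms\n", crt_frames / camera_fps * 1000.0);  /* ~8.3 ms */
        printf("Sony HMZ-T1:  %.1f ms\n", hmz_frames / camera_fps * 1000.0);  /* 75 ms   */
        return 0;
    }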

This was in a multimonitor setup, so a couple frames are the driver's fault.

Some latency is intrinsic to a technology. LCD panels take 4-20 milliseconds to actually change, depending on the technology. Single chip LCoS displays must buffer one video frame to convert from packed pixels to sequential color planes. Laser raster displays need some amount of buffering to convert from raster return to back and forth scanning patterns. A frame-sequential or top-bottom split stereo 3D display can't update mid frame half the time.

OLED displays should be among the very best, as demonstrated by an eMagin Z800, which is comparable to a 60 Hz CRT in latency, better than any other non-CRT I tested.

The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.

This is very unfortunate, but it is all fixable, and I hope to lean on display manufacturers more about latency in the future.


Some monitors can have significant input lag.

Accounting for an awesome internet connection compared to a crappy monitor and video card combo, it's possible.

Sources:

Console Gaming: The Lag Factor • Page 2

So, at 30FPS we get baseline performance of eight frames/133ms, but in the second clip where the game has dropped to 24FPS, there is a clear 12 frames/200ms delay between me pulling the trigger, and Niko beginning the shotgun firing animation. That's 200ms plus the additional delay from your screen. Ouch.

A display can add another 5-10 ms.

So a console can have up to 210 ms of lag.

And, as per David's comment, the best case should be about 70 ms for sending a packet.


It is very simple to demonstrate input lag on monitors: just stick an LCD next to a CRT, show a clock or an animation filling the screen, and record both. One can be a second or more behind. It is something that LCD manufacturers have tightened up on since gamers and others have noticed it more.

E.g. YouTube video: Input Lag Test Vizio VL420M
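
A hypothetical version of that test animation, again sketched with GLFW (any toolkit would do): a narrow bar sweeps across the window once per second, so filming an LCD and a CRT showing the same output makes the offset between them directly visible.

    #include <GLFW/glfw3.h>   /* pulls in the OpenGL header by default */

    int main(void)
    {
        if (!glfwInit())
            return 1;

        GLFWwindow *win = glfwCreateWindow(640, 480, "lag bar", NULL, NULL);
        if (!win)
            return 1;
        glfwMakeContextCurrent(win);
        glfwSwapInterval(0);   /* vsync off so the bar is not quantized to the refresh */

        while (!glfwWindowShouldClose(win)) {
            glfwPollEvents();

            int w, h;
            glfwGetFramebufferSize(win, &w, &h);

            /* bar position encodes the current time; one full sweep per second */
            double t = glfwGetTime();
            int x = (int)((t - (long)t) * (w - 20));

            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);

            glEnable(GL_SCISSOR_TEST);
            glScissor(x, 0, 20, h);             /* draw the bar as a scissored clear */
            glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glDisable(GL_SCISSOR_TEST);

            glfwSwapBuffers(win);
        }

        glfwTerminate();
        return 0;
    }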