How is nVidia 3D Vision achieved, exactly?

In my company we have been using stereoscopic shutter glasses for years, together with fast CRT screens able to handle vertical refresh rates > 120Hz. Lately it is getting extremely hard to find such monitors, so we decided to try out one of the new LCDs supporting a 120Hz refresh rate and, as promised on the nVidia site, 3D Vision. We got a Samsung 2233RZ. The way we achieve stereo is by displaying alternating left and right images at 120Hz (using DirectX), with the shutters alternating open/closed for the corresponding eye (by sending a trigger synced to DirectX), so the image is perceived in 3D at 60Hz. This system works, without doubt.
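For reference, a minimal sketch of this 'manual' alternate-frame approach, assuming Direct3D 9 with a vsync-locked Present; the window/device setup is stripped to the bare minimum, the solid clear colors merely stand in for the per-eye images, and the shutter trigger is assumed to be raised elsewhere in sync with the flip:

    #include <windows.h>
    #include <d3d9.h>
    #pragma comment(lib, "d3d9.lib")

    LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
    {
        if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProcA(h, m, w, l);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int)
    {
        // Bare-bones window; the real application runs fullscreen at 120Hz.
        WNDCLASSA wc = {};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = "StereoFlipSketch";
        RegisterClassA(&wc);
        HWND hwnd = CreateWindowA("StereoFlipSketch", "alternate-frame stereo (sketch)",
                                  WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                  100, 100, 800, 600, NULL, NULL, hInst, NULL);

        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed             = TRUE;
        pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow        = hwnd;
        pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;   // lock Present() to the vblank

        IDirect3DDevice9* dev = NULL;
        d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev);

        bool leftEye = true;
        MSG msg = {};
        while (msg.message != WM_QUIT)
        {
            if (PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE)) { DispatchMessageA(&msg); continue; }

            // One frame per vblank: even frames carry the left-eye image, odd frames
            // the right-eye image. The shutter trigger is raised elsewhere, synced to
            // this same flip.
            D3DCOLOR eye = leftEye ? D3DCOLOR_XRGB(255, 0, 0)   // stand-in for the left image
                                   : D3DCOLOR_XRGB(0, 0, 255);  // stand-in for the right image
            dev->Clear(0, NULL, D3DCLEAR_TARGET, eye, 1.0f, 0);
            dev->BeginScene();
            // ... draw the scene for the current eye here ...
            dev->EndScene();
            dev->Present(NULL, NULL, NULL, NULL);               // blocks until the next vblank
            leftEye = !leftEye;
        }

        dev->Release();
        d3d->Release();
        return 0;
    }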

No luck, however: using the screen + our shutters as-is didn't quite work:

  • the pixels on an LCD are in their on state for the duration of a frame (8 ms)
  • there is a delay of about half a frame before the frame sent out by the PC is actually drawn on the screen
  • in other words, while a shutter is open (also 8 ms), the eye sees half of the left and half of the right image (a rough timing calculation follows below)
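A back-of-the-envelope check of that last point, assuming for the moment a panel that updates all its pixels at once, a 120Hz refresh and roughly half a frame of display lag (all numbers as given above):

    #include <cstdio>

    int main()
    {
        const double T   = 1000.0 / 120.0;   // frame period in ms (~8.33 ms)
        const double lag = T / 2.0;          // display lag: roughly half a frame

        // The left-eye shutter is open for one frame period, starting when the PC sends
        // the left frame. Because of the lag, the first part of that window still shows
        // the previous (right-eye) frame held on the panel.
        double wrongImage    = lag;          // time the open shutter shows the other eye's image
        double intendedImage = T - lag;      // time it shows the intended eye's image

        std::printf("shutter window:                 %.2f ms\n", T);
        std::printf("previous (wrong) image visible: %.2f ms\n", wrongImage);
        std::printf("intended image visible:         %.2f ms\n", intendedImage);
        return 0;
    }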

We fixed this with some hardware that compensates for the delay and shortens the period the shutters are open. So far so good: 3D perception was really good, but only for small images in the middle of the screen.

Some more measurements revealed something very surprising to us: the 2233RZ does not show the entire frame in one go (which is what we expected from an LCD screen: every LCD we have here, and our DLP projectors too, all do that), but instead writes it line by line, just as a CRT would do. So there is no way to get stereo working properly with shutters, because there is an 8 ms delay between the top left pixel going on and the bottom right pixel going on. Moreover, when the bottom right one is on, the top left one is already off. The sketch below illustrates this.
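A small sketch of what we measured, assuming rows are rewritten top to bottom over one full frame period and then hold their value for a frame (the 1050-row count is just an assumed vertical resolution); it prints which frame the top, middle and bottom rows are showing at a few instants, and there is never a moment when all three show the same one:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double T    = 1000.0 / 120.0;  // frame period, ms
        const int    rows = 1050;            // visible rows (assumed)

        // Frame N starts scanning out at t = N*T; row r is rewritten at N*T + (r/rows)*T
        // and then holds that value for one frame period. At time t, row r therefore
        // shows the most recent frame whose scan has already passed it.
        auto frameShownAt = [&](double t, int r) {
            double rowOffset = (double(r) / rows) * T;
            return (int)std::floor((t - rowOffset) / T);
        };

        for (double t = 0.0; t < 2.0 * T; t += T / 4.0) {
            std::printf("t=%5.2f ms  top row: frame %2d  middle row: frame %2d  bottom row: frame %2d\n",
                        t,
                        frameShownAt(t, 0),
                        frameShownAt(t, rows / 2),
                        frameShownAt(t, rows - 1));
        }
        return 0;
    }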

The question is: how does nVidia do it, and can we do it too? The glasses from their 3D Vision kit use the same principle as ours, so it must be in the screen/video card, no? How do they force the screen to show the entire frame in one go, so that all pixels go on and off at the same time? Is this something that can be set in software? Or can that only be done when using one of the GeForce cards listed as compatible with the 3D kit (we tested with a Quadro 570, using a dual-link cable)? If so, is there a protocol over DVI that goes like 'hey, I'm a GeForce, you're a 120Hz screen, can you show one frame in 8 ms so we can do some stereo stuff?', to which the screen responds 'yes I can do that' or 'no I cannot because you're a Quadro'?

edit: just found out there's a '3D Vision Pro' as well, which according to the site does support the Quadro FX570. The biggest difference is that the glasses use an RF emitter instead of an infrared one. But this would mean the PC we use meets the requirements for 3D Vision.

So the 'actual' important question is (thanks to MBraedley): how do I tell the card and screen to go into 3D mode, so that the screen updates all pixels at once?

edit2: in the nVidia control panel, I turned the 3D settings on. Now when using StereoView, which is listed among the compatible apps, it indeed reports a 'stereo buffer' available, so it seems everything is set up correctly. However, the problem remains: depending on the amount of delay tuned on the glasses, there's crosstalk ('ghosting') at the top, center or bottom of the screen.
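For what it's worth, this is roughly how an application can ask the Windows driver for such a quad-buffered 'stereo buffer'; it is only a sketch of the standard PFD_STEREO route, not anything taken from StereoView itself:

    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "user32.lib")
    #pragma comment(lib, "gdi32.lib")

    int main()
    {
        // A throwaway window is only needed to obtain a device context.
        HWND hwnd = CreateWindowA("STATIC", "stereo check", WS_OVERLAPPEDWINDOW,
                                  0, 0, 64, 64, NULL, NULL, GetModuleHandleA(NULL), NULL);
        HDC dc = GetDC(hwnd);

        PIXELFORMATDESCRIPTOR pfd = {};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                         PFD_DOUBLEBUFFER | PFD_STEREO;      // ask for left+right back buffers
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 24;

        int format = ChoosePixelFormat(dc, &pfd);
        SetPixelFormat(dc, format, &pfd);

        // Check what we actually got: the returned format may lack PFD_STEREO if the
        // card/driver/monitor combination doesn't expose a stereo buffer.
        PIXELFORMATDESCRIPTOR got = {};
        DescribePixelFormat(dc, format, sizeof(got), &got);
        std::printf("stereo buffer available: %s\n", (got.dwFlags & PFD_STEREO) ? "yes" : "no");

        ReleaseDC(hwnd, dc);
        DestroyWindow(hwnd);
        return 0;
    }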

UPDATE

After a lot of mailing back and forth with nVidia, with them basically claiming their system works better than ours but that they cannot tell us why because it's their intellectual property etc., we decided to just buy the 3D kit, as it's pretty cheap anyway.

After some measurements it's pretty clear: they use the exact same principle we have already been using for 10 years. They do not use any special tricks, and 3D Vision performs far worse than our system. There are only two differences:

  • software: they have some API methods that you can give two images to, and they get displayed interleaved automatically (see the sketch after this list). We do this 'manually' by sending one frame after the other to the video card.
  • hardware: their glasses are pretty bad in comparison to what we use. Ghosting is really terrible with the nVidia glasses, and it's visible all the way from top to bottom: the 'closed' state of their glasses is really far from closed. One thing to note here: this is about ghosting as measured using a scope. When viewing an actual scene with lots of detail and no huge contrast (typical for games), the ghosting is pretty much invisible to the eye.
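To illustrate the software difference: the sketch below is not nVidia's 3D Vision API, just the same 'hand over two images and let the driver interleave them' idea expressed with standard quad-buffered OpenGL as exposed on Quadro cards. It assumes a stereo-capable GL context is already current (e.g. created with PFD_STEREO as in the earlier sketch), and the two draw functions are placeholders:

    #include <windows.h>
    #include <GL/gl.h>
    #pragma comment(lib, "opengl32.lib")
    #pragma comment(lib, "gdi32.lib")

    // Placeholders for the real per-eye rendering.
    void drawLeftEyeScene()  { glClearColor(1.f, 0.f, 0.f, 1.f); glClear(GL_COLOR_BUFFER_BIT); }
    void drawRightEyeScene() { glClearColor(0.f, 0.f, 1.f, 1.f); glClear(GL_COLOR_BUFFER_BIT); }

    // Hand both images to the driver in one go; it presents them interleaved with the
    // shutter timing. Assumes a stereo-capable GL context is current on 'dc'.
    void renderStereoFrame(HDC dc)
    {
        glDrawBuffer(GL_BACK_LEFT);      // image intended for the left eye
        drawLeftEyeScene();

        glDrawBuffer(GL_BACK_RIGHT);     // image intended for the right eye
        drawRightEyeScene();

        SwapBuffers(dc);                 // both buffers are flipped together
    }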

Solution 1:

The principle is basically the same: show one image to one eye, then another image to the other eye. However, in order to work properly, NVidia 3D (and the monitors that work with it) requires a DVI-D connection, whereas I suspect the CRTs you've been using still use analog. If you try piping an analog signal into the 2233RZ, I have a feeling it won't render 3D properly. Synchronization is set during the initial setup, although I'm not sure exactly how it's achieved.

I do know that the monitor is made aware of the fact that it's displaying 3D images, as most of the controls, including brightness and contrast, are disabled when in 3D mode. The monitor, however, won't complain simply because a particular card is being used. Unsupported cards simply won't work, whereas supported cards should work with all supported monitors, as long as they are connected properly and the drivers and software are installed.

Having said all that, if your application doesn't use a 3D mode that's compatible with NVidia 3D (which I find hard to believe, since it uses DirectX), then NVidia won't know what to do with the images it's given.