A PCI-E card doesn't work alongside the other two graphics cards

A PC in our company has three x16 PCI-E slots; call them A, B, and C (A is closest to the CPU, C is furthest). Two graphics cards are installed in it (a GTX 680 in A and a GTX 560 in C, both used for GPU computing), and everything works well.

We then developed a new PCI-E card ourselves, which is used to acquire data from the outside world.

Here is the problem: if all three slots are used (the new card in B), the PC fails to start up in most cases. On the rare occasions it does start, the OS doesn't recognize the new card.

So we ran some experiments:

  1. If we remove the graphics card from A, the PC starts up without any problem, and the new card works well in either A or B.

  2. If we remove the new card and install the GTX 680 in B, the PC also starts up (judging by the keyboard status and HDD activity indicator), though there is no display; some configuration may be needed.

  3. If we install the new card in A and the GTX 680 in B, the PC also fails to start up.

So, what could be causing this problem? Do I need to do more research, and if so, about what?

I hope I've made myself clear; any suggestions would be appreciated.

Edit:

The PSU is rated at 1000 W. Also, in the two-GPU configuration, the PC can run all day without any problem while the GTX 680 is fully utilized by a CUDA application.

Edit2:

The card we developed works in other ordinary PCs (Lenovo and Dell) that have only onboard graphics. The problem machine isn't from a well-known brand like Dell or Lenovo; its motherboard is labeled EVGA.


Shameless rip-off of Chris S's excellent answer to a similar question over at Server Fault regarding the PCI-e spec:

What should be: The PCIe spec states that all slots start at x1 and negotiate how many lanes they can use. It shouldn't matter who has more; some slots are designed to take larger cards, and smaller cards fit in larger slots. Whatever the highest speed both sides can communicate at (both the number of lanes and the clock/version) is the speed that will be negotiated and used.

What really is: Usually, what should happen is what actually happens. But there are quite a few boards (especially enthusiast boards) that do not follow the spec. Some motherboards will not use anything but an x16 video card in their first PCIe slot. Others will not auto-negotiate correctly. In server-grade hardware these problems are very rare, but it happens.

Basically, a lot of motherboards do not follow the PCI-e spec to the letter. I've had issues with an x8 RAID controller: that particular desktop motherboard had only one PCI-e x16 slot and the rest were x4 or x1, so my only choice was the x16 slot. But the motherboard would only accept graphics cards in that slot, so we ended up replacing it with a higher-end board.
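If the machine does boot with some card combination, you can see what each slot actually negotiated. On Linux with pciutils installed, `sudo lspci -vv` reports each device's link capability (`LnkCap`, what the card supports) and link status (`LnkSta`, what was actually trained). A sketch, using made-up sample output since the exact device names and bus addresses will differ on your machine; on the real box, pipe the live `lspci -vv` output through the same grep:

```shell
# Hypothetical excerpt of `sudo lspci -vv` output. A card whose LnkSta
# shows a narrower width (or lower speed) than its LnkCap, or a card
# missing from the list entirely, points at a slot/negotiation problem
# rather than a faulty card.
sample='01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 680]
	LnkCap:	Port #0, Speed 5GT/s, Width x16
	LnkSta:	Speed 5GT/s, Width x8
02:00.0 Unassigned class [ff00]: custom data-acquisition card
	LnkCap:	Port #0, Speed 2.5GT/s, Width x1
	LnkSta:	Speed 2.5GT/s, Width x1'

# Keep only the device headers and the link capability/status lines.
printf '%s\n' "$sample" | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:'
```

In this sample the GTX 680 is capable of x16 but only trained at x8, the kind of mismatch that would support the "board doesn't negotiate per spec" theory.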

Now, you've made no mention of where this hardware is running. If it's running in a brand-name server, or has a general server-level motherboard (think Super Micro, Tyan, etc.), then this is probably not your issue.

But if this is running on a Gigabyte or Asus motherboard (or the like), then I suggest simply trying another motherboard, as it is entirely possible that it only permits graphics cards in x16 slots, or only accepts some odd combination of graphics/other cards in those slots.