Should Cat6 cables be used for servers ('important hosts') rather than Cat5-E?

Solution 1:

There is currently no reason to use Cat6 cable when connecting to hosts; Cat5e is all that is required for gigabit connectivity. In fact, if you upgrade to 10GBase-T in the future, even Cat6 may need to be replaced with Cat6a.
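To put rough numbers on that 10GBase-T caveat, here is a minimal sketch of the commonly quoted reach limits (100 m for 1000BASE-T on Cat5e/Cat6, roughly 37-55 m for 10GBASE-T on Cat6 depending on alien crosstalk, 100 m on Cat6a). The figures and the cable_ok helper are my own illustration, not anything from the question or the standards text itself:

    # Rough reach table. The distances are approximations of the commonly
    # quoted limits, not values copied from a standard; treat them as
    # illustrative only.
    MAX_RUN_M = {
        ("1000BASE-T", "Cat5e"): 100,
        ("1000BASE-T", "Cat6"):  100,
        ("10GBASE-T",  "Cat6"):  55,   # may drop to ~37 m with heavy alien crosstalk
        ("10GBASE-T",  "Cat6a"): 100,
    }

    def cable_ok(speed: str, category: str, run_m: float) -> bool:
        """Return True if the planned run length fits the quoted limit."""
        limit = MAX_RUN_M.get((speed, category))
        return limit is not None and run_m <= limit

    # Example: a 70 m Cat6 run carries gigabit fine but not 10GBASE-T.
    print(cable_ok("1000BASE-T", "Cat6", 70))   # True
    print(cable_ok("10GBASE-T", "Cat6", 70))    # False -> would need Cat6a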

I should add that the Cat6 I have worked with in the past was much more difficult to route than Cat5e. I'm not sure whether the thicker insulation is required by the spec, but it was not fun to work with.

Solution 2:

Proper termination is much more critical with Cat6 (and Cat6a) than with Cat5e, so it requires a greater skill level from the installers. If you are simply using patch cables, then the distance is probably short. For 1 Gb speeds, Cat5e will work fine for short runs and most long runs.

If you are using long runs through areas of high RF/EMI, consider Cat6; otherwise Cat5e will work fine.
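If you want to confirm that an existing Cat5e run is actually negotiating gigabit and not taking errors before spending on Cat6, something like the following works on Linux. The sysfs paths are standard kernel interfaces, but the interface name "eth0" is a placeholder and the pass/fail logic is my own rough guess:

    # Quick sanity check (Linux only): confirm a run negotiated 1000 Mb/s
    # and is not accumulating receive errors.
    from pathlib import Path

    def link_report(iface: str = "eth0") -> dict:
        base = Path("/sys/class/net") / iface
        # Note: reading "speed" can fail or return -1 if the link is down.
        speed = int((base / "speed").read_text().strip())                    # Mb/s
        rx_errors = int((base / "statistics" / "rx_errors").read_text())
        rx_crc = int((base / "statistics" / "rx_crc_errors").read_text())
        return {"speed_mbps": speed, "rx_errors": rx_errors, "rx_crc_errors": rx_crc}

    if __name__ == "__main__":
        report = link_report("eth0")
        print(report)
        if report["speed_mbps"] < 1000 or report["rx_crc_errors"] > 0:
            print("Link fell back below gigabit or is taking CRC errors; "
                  "suspect the cable or termination before blaming the category.")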

If you have plans to upgrade to 10Gb, I would consider a fiber solution instead of copper in any case.

Solution 3:

Some cable vendors may not want you to know that 1000BASE-T was designed to run 100 meters over plain Cat5, not even Cat5e. See the Panduit white paper on Cisco's site.

Another under-publicized fact: you may not want 10GBASE-T (10 gigabit Ethernet over twisted pair, even Cat6a) unless you are willing to live with maximum round-trip delay specs that are fifty (50) times higher than 10 Gb over Infiniband-style copper cabling, 10GBASE-CX4. See the IEEE 802.3-2008 standard, section four, clauses 44.3 and 55.11.
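As a back-of-the-envelope illustration of what that factor means per switch hop, here is a small sketch. The CX4 round-trip figure is an assumed placeholder, not a value from the standard; only the ~50x multiplier comes from the clauses cited above:

    # Back-of-the-envelope only: cumulative PHY round-trip delay per hop.
    CX4_RTT_US = 0.05           # ASSUMED CX4 PHY round-trip delay, microseconds
    T_RTT_US = CX4_RTT_US * 50  # 10GBASE-T, per the ~50x factor cited above

    def added_latency_us(hops: int, per_hop_rtt_us: float) -> float:
        """Cumulative PHY round-trip delay across a number of hops."""
        return hops * per_hop_rtt_us

    for hops in (1, 3):
        print(f"{hops} hop(s): CX4 ~{added_latency_us(hops, CX4_RTT_US):.2f} us, "
              f"10GBASE-T ~{added_latency_us(hops, T_RTT_US):.2f} us")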

Within server racks, the 15 meter max length of shielded CX4 cables should be about right, though they're not inexpensive. For longer distances at 10 gigs, fiber is the way to go.

Solution 4:

If you seriously can't get the budget for Cat6 (or Cat6a, while we're at it) instead of Cat5e to all your servers, then I guess some length criterion like that is as reasonable as anything. But it really seems nonsensical to me to connect crucial servers with anything less than the best cabling you can lay your hands on.