Cable management between multiple racks

My rule of thumb based on years of building server rooms: Minimize cross-rack cabling as much as possible.

The 300-port rack for the edge ports is far from full, so you can place the edge switches in that same rack. This keeps most of the cabling within a single rack.

The 3 racks to the left: I presume those hold your servers. Fit a cheap gigabit switch in each, or two if you have servers requiring redundant links; HP ProCurves or Dells would fit the bill nicely. If you use one switch, don't forget to cable it redundantly to your cores. If you use two redundant switches, they each only need a single uplink, to different cores (see the sketch below).
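
To illustrate the uplink logic, here is a minimal Python sketch (all switch names are hypothetical) that models the two-switch variant as a graph and checks that a dual-homed server rack still reaches a core after any single device failure:

```python
# Minimal sketch: model the two-switch-per-rack variant as a graph and
# verify the rack still reaches a core after any single failure.
# Names (core1, rackA-sw1, ...) are hypothetical placeholders.
from collections import defaultdict

links = [
    ("rackA-sw1", "core1"),   # each rack switch gets one uplink,
    ("rackA-sw2", "core2"),   # but to *different* cores
    ("core1", "core2"),       # cores are interconnected
]

def reachable(start, targets, links, dead=frozenset()):
    """BFS from start; True if any target is reachable with 'dead' nodes removed."""
    graph = defaultdict(set)
    for a, b in links:
        if a not in dead and b not in dead:
            graph[a].add(b)
            graph[b].add(a)
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        if node in targets:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return False

# A dual-homed server (one NIC on each rack switch) survives any single
# device failure: kill each device in turn and check the surviving path.
for dead in ["rackA-sw1", "rackA-sw2", "core1", "core2"]:
    alive = [s for s in ("rackA-sw1", "rackA-sw2") if s != dead]
    ok = any(reachable(s, {"core1", "core2"}, links, dead={dead}) for s in alive)
    print(f"{dead} down -> rack A still reaches a core: {ok}")
```

With a single rack switch, the same check only passes if that one switch is cabled to both cores, which is why the one-switch option needs redundant uplinks.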

Link all 4 racks with copper to your cores in the comms rack. The distances don't warrant fiber, and copper is still a lot cheaper. Use multiple aggregated 1 Gb/s copper links to increase bandwidth if needed (see the sketch below for one caveat); 10 Gb/s copper might be an option too, depending on your cores/switches.
If you have servers that require a one-to-one link with a core or with ISP equipment, then just run an extra UTP cable. That is not going to kill you.
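
One caveat with aggregation: an LACP-style bundle hashes each flow onto a single member link, so four 1 Gb/s links give roughly 4 Gb/s of aggregate bandwidth but still cap any single flow at 1 Gb/s. A rough Python illustration of that behaviour (the hash policy is simplified and the endpoint names are made up):

```python
# Rough sketch: a link-aggregation (LACP-style) bundle pins each flow
# to one member link via a hash, so aggregate bandwidth scales with the
# link count but no single flow exceeds one link's speed.
# Endpoint names below are made up for illustration.
N_LINKS = 4  # e.g. four 1 Gb/s copper links in one bundle

def member_link(src: str, dst: str) -> int:
    """Choose a member link from the flow's endpoints (simplified hash policy)."""
    return hash((src, dst)) % N_LINKS

flows = [("srv-01", "core-a"), ("srv-02", "core-a"),
         ("srv-03", "core-b"), ("srv-01", "core-b")]
for src, dst in flows:
    print(f"{src} -> {dst} rides link {member_link(src, dst)}")
# Many concurrent flows spread over all four links (~4 Gb/s aggregate),
# but a single flow such as srv-01 -> core-a stays on one link (1 Gb/s max).
```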

If your cores are big blade switches that also carry the edge ports for the rightmost rack, seriously consider moving those cores into that rack (space permitting). No one in their right mind wants to run 300 UTP cables between racks.

You may have to spend some money on extra switches, but that will pay for itself by minimizing future support hassles.


Please don't view this as a definitive answer, because this is a personal choice that depends on your circumstances and budget. But I can describe what we did a few months ago in a setup of (I think) comparable size, on a very low budget.

We have to manage around 5 distinct networks and had the same number of Ethernet links between the racks distributed through our buildings, which became harder and harder to maintain. We decided to replace 3 old switches so they would all be the same brand (Dell) and type, to introduce VLANs, and to use optical connections for the longer distances. Fiber connections are very cheap nowadays and very easy to install.

The patch panel racks were each connected by 2 optical links to the two main switches, which themselves are connected to each other by 4 copper ports (trunk). In each server rack we now have one 48-port switch, and at the patch panels there are 3x 48-port switches (we have two patch panel racks) and 1x 24-port switch for a smaller department. The patch panel switches only have one uplink each to the main switches and are interconnected by copper ports (using port trunking there too).

Now we are in the comfortable position of being able to define, for each port on each switch in each rack, which network it belongs to (VLAN), and we don't have to change physical connections (most of the time). Since we use virtualization it becomes even more flexible, because the VM hosts just need one physical link each to attach the VMs to all networks (see the sketch below). There is still work to do (migrating the firewalls to use VLANs instead of physical ports), but we are really happy with this solution so far, and it was really cheap.
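
To make the "edit a table, not a cable" point concrete, here is a tiny Python sketch of per-port VLAN assignment. In reality this lives in the switch configuration; the switch names and VLAN IDs below are invented:

```python
# Tiny sketch of per-port VLAN assignment: moving a device between
# networks is a table edit on the switch, not a physical re-patch.
# Switch names and VLAN IDs are invented for illustration.
VLANS = {10: "office", 20: "servers", 30: "dmz", 40: "mgmt", 50: "voip"}

# Access ports carry exactly one untagged VLAN each.
access_ports = {
    ("rack1-sw", 1): 20,
    ("rack1-sw", 2): 20,
    ("patch-sw1", 7): 10,
}

# A VM host's uplink is a tagged trunk: one physical link, all networks.
trunk_ports = {
    ("rack1-sw", 48): set(VLANS),    # uplink to the main switches
    ("rack1-sw", 10): {10, 20, 30},  # VM host reaching three networks
}

def move_port(switch: str, port: int, vlan: int) -> None:
    """Reassign an access port to another network -- no cabling change."""
    access_ports[(switch, port)] = vlan
    print(f"{switch} port {port} is now untagged in "
          f"VLAN {vlan} ({VLANS[vlan]})")

move_port("patch-sw1", 7, 30)   # a desk moves from office to DMZ
```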