Structured Cabling Server Racks
Solution 1:
This will work, but what's the advantage to you over cleaning up the existing cables and keeping the switches in the local racks? (Do you have lots of cross-connects between the racks that could be eliminated?)
Remember that patch panels don't magically make your wiring neater: Discipline, maintenance, and lots of velcro ties do that.
Generally, separating your racks can be a good thing, particularly since patch panels usually come with nice solid trunks from panel to panel (less junk under the floor or in your cable trays).
The big downside is that if you lose link on a switch you now have a lot more to troubleshoot (is it the cable from the server to the local patch panel, the panel-to-panel trunk, the cable from the patch panel to the switch, the switch itself, the server itself, etc.).
The smaller downside is having to open two racks to connect a server to a switch. This can be argued as an increase in security, however (someone with keys to the switch rack needs to be around to connect new equipment).
Small bit of advice no matter what you decide to do: Document the hell out of your cabling - ESPECIALLY if using patch panels. You will thank yourself later when you need to figure out what path a server takes to get to a switch port. (There are a few questions here on cable labeling schemes - https://serverfault.com/questions/64259/what-is-the-most-effective-solution-you-used-to-label-cables is one of them)
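To give a concrete (if made-up) example of what that documentation might look like, here's a minimal per-cable inventory sketch in Python. The label format and field names are assumptions for illustration only, not a standard; adapt them to whatever scheme you pick from the linked question.

```python
# Minimal cable-inventory sketch (hypothetical label scheme: <rack>-<panel/switch>-<port>).
# Each physical cable gets one row; the same label goes on both ends of the cable.
import csv

FIELDS = ["label", "from_device", "from_port", "to_device", "to_port", "notes"]

cables = [
    {"label": "R01-PP01-01", "from_device": "web01", "from_port": "eth0",
     "to_device": "R01 patch panel 1", "to_port": "1", "notes": "trunk to R05-PP03-01"},
    {"label": "R05-SW01-12", "from_device": "R05 patch panel 3", "from_port": "1",
     "to_device": "core switch sw01", "to_port": "Gi0/12", "notes": ""},
]

with open("cable_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(cables)
```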
Solution 2:
I've personally been involved in wiring data centers with both methods you've described and I have to say that having the switch in the rack has proven to be a much better solution. Both installation and maintenance are easier with the switch in rack and unless you plan to have server counts well below your in-rack switch port density, switch in rack will most likely be cheaper.
Here are a few of the advantages I've come across for the switch-in-rack solution:
- Requires less inter-rack cabling - 1 uplink (+backups/bonding) to each core switch instead of 1 per host. This is HUGE. (See the back-of-the-envelope count after this list.)
- Less time to implement
- Fewer cables to test
- Lower cost (assuming you can size your rack switches appropriately)
- Requires 1 patch cable instead of 2 - Each cable that's run is another place that requires testing
- Less documentation - At a minimum each cable needs to be labeled, and 1 is easier than 2
- Patch cables fully traceable - Oversights happen and documentation gets missed; in a single rack it's much easier to trace out a cable (but still no fun)
- Easier server removals/moves - Patch panelling requires a lot of trust in your documentation. With the switch in rack, I pull the cable from the server, cut the end off, feed it back up to the switch, and remove it. No trust/guesswork back at the patch panel
- Fewer data errors - Patch paneling has 6 points to punch/crimp before reaching a switch; switch in rack has 2. Each crimp/punch point is a place where signal is lost. This is less of an issue with 100Mb than 1Gb+. I've also found it easier to certify an RJ45 crimp than a panel punchdown.
- Quicker switch inventory - Printing out a config for a single switch and verifying a rack is much easier than printing out all the configs for all the switches and verifying (a rough sketch of this kind of check follows at the end of this answer)
- Less clutter - The patch panel solution requires a lot of cables in a small space, and if you're seeing clutter with 1 rack of servers (~40 cables), imagine 160+ all in one place
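To put numbers on the cable-count point above, here's a back-of-the-envelope comparison. The rack count, servers per rack, and uplink count are made-up assumptions; plug in your own figures.

```python
# Rough inter-rack cable count: patch-panel home runs vs. switch-in-rack uplinks.
# All numbers below are illustrative assumptions, not from the original question.
racks = 4
servers_per_rack = 40
uplinks_per_rack_switch = 2  # primary + backup/bonded uplink

# Patch-panel design: every server needs a run back to the central switch rack.
home_run_cables = racks * servers_per_rack

# Switch-in-rack design: only the rack switch uplinks leave each rack.
uplink_cables = racks * uplinks_per_rack_switch

print(f"patch panel home runs: {home_run_cables} inter-rack cables")  # 160
print(f"switch in rack:        {uplink_cables} inter-rack cables")    # 8
```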
This isn't to say I'm completely against patch paneling, but I try to limit its usage to places where I can't bring the switches to the equipment. Wiring office cubicles comes to mind as a perfect place to utilize patch paneling, but in the datacenter I'd encourage you to get as close to 0 patch panels as possible.
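For the switch-inventory point, here's a rough sketch of the kind of per-rack check I mean: compare a saved switch config against what your documentation says should be on each port. The config format (Cisco-IOS-style interface/description lines), the file name, and the expected port map are all assumptions for illustration, not a prescribed tool.

```python
# Sketch: verify one rack against a saved switch config dump.
# Assumes "interface ..." lines with indented " description ..." lines beneath them.
import re

# What the rack documentation says should be on each switch port (illustrative).
EXPECTED = {
    "GigabitEthernet0/1": "web01 eth0",
    "GigabitEthernet0/2": "web02 eth0",
    "GigabitEthernet0/24": "uplink to core1",
}

def interface_descriptions(config_path):
    """Map interface name -> description string from a saved config dump."""
    descriptions, current = {}, None
    with open(config_path) as f:
        for line in f:
            m = re.match(r"^interface (\S+)", line)
            if m:
                current = m.group(1)
                continue
            d = re.match(r"^\s+description (.+)", line)
            if d and current:
                descriptions[current] = d.group(1).strip()
    return descriptions

if __name__ == "__main__":
    actual = interface_descriptions("rack05_switch_config.txt")
    for port, expected_desc in sorted(EXPECTED.items()):
        found = actual.get(port, "<no description>")
        status = "OK" if found == expected_desc else "MISMATCH"
        print(f"{status:8} {port}: expected {expected_desc!r}, config has {found!r}")
```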