Will multiple switches slow transfer speed

Solution 1:

If by 'transfer speed' you mean throughput: it should not matter much.

Every extra device will introduce some minor latency (after all, some processing is needed, even if only very little). However, latency is not the same as throughput.

Compare it with a conversation via a satellite phone. There will be a 3 second lag before the other person can comment on what you said, but if one person just keeps talking, telling long (2GB) stories, then the slowdown will be minimal.
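The analogy can be put in numbers. A rough sketch (the 100 Mbit/s link rate is an assumption for illustration; the 3 second lag and 2 GB figure are from the analogy above):

```python
LATENCY_S = 3.0                  # the satellite lag from the analogy
LINK_BPS = 100_000_000           # assumed 100 Mbit/s link
TRANSFER_BITS = 2 * 8 * 10**9    # the 2 GB "story"

serialization_s = TRANSFER_BITS / LINK_BPS   # 160 s just to push the bits
total_s = LATENCY_S + serialization_s        # lag is paid once per stream
overhead = LATENCY_S / total_s
print(f"lag is {overhead:.1%} of total transfer time")
```

For a sustained transfer the fixed lag is a rounding error; it only hurts chatty, request-response traffic.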

Which means that I would test these setups:

     +-48 port switch ------ 40 computers
B    |
a    +-48 port switch ------ 40 computers
c    |
k    +-48 port switch ------ 40 computers
p    |
l    +-48 port switch ------ 40 computers
a    |
n    ...
e    |
     +-48 port switch ------ 40 computers

Many switches have a connection which allows you to turn several separate units into one giant switch. That makes management much easier. Make sure that the switches you buy have this feature.

Why 48 port switches?
It limits the number of devices (less space, fewer devices which can break down).

Why 40 computers per 48 port switch?
Future expandability (computers moving to different rooms increasing local density, added devices such as printers, a free port for debugging, etc.).

Why not a single 300 port switch?
Good luck finding those...

[Edit] Apparently there are some. I looked up the model mentioned by David; it is about 25K US$. Use these kinds of switches if you absolutely need maximum performance.

If you already have switches without a backplane link you could always do something like this, but that would mean traffic would flow excessively towards whatever switch hosts your file server. That might overload that switch and will introduce much more latency than needed.


                 1 fileserver
40 computers     39 computers     ...  40 computers
   | | |               | | |              | | |
48 port switch   48 port switch   ...  48 port switch
    |        |     |         |             |       | 
    |        +-----+         +--        ---+       |   Disabled by 
    |                                              |   default
    +----------------------------------------------+

(The long roundabout cable is in case a switch dies. That would cut off all computers on that switch and beyond it from the switch with the fileserver. Switches with the spanning tree protocol can detect this and automatically enable the workaround link.)

Lastly, there is always the classical tiered setup:

        Fileserver and other servers
                     |
                 CORE SWITCH
                /   |        \
               /    |         \
 48 port switch   switch  ...  48 port switch
      | | |       | | |                | | |
  40 computers    computers   ...  40 computers

This one has the advantage that you have one (very good) switch in the server room, and at least one link from that switch to each floor or each section.

Then you set up a local room with all the switches for that floor. (If needed, with multiple switches tied via a backplane link.)

Solution 2:

Every extra step of switching is an extra delay. No matter how fast your core is, it still takes processing time. That said, at only 2GB a day you won't notice it, and I'm fairly sure that 300 port switches don't exist.

Now if you were using hubs, that would be a very different story.

Switches only forward frames to the port matching the destination MAC address on the frame. Hubs bounce packets to every connected computer, and it's up to each computer to accept or reject them.

If you're really concerned about speed, you should look at making your data store as efficient as possible. If it only has a single gigabit connection, you'll always be limited there. (300 gigabit connections to 1 gigabit source = trouble)
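The arithmetic behind that bottleneck, assuming the worst case where every client pulls from the server at once:

```python
clients = 300                   # all machines pulling simultaneously
uplink_bps = 1_000_000_000      # the file server's single gigabit link
per_client_mbps = uplink_bps / clients / 1e6
print(f"~{per_client_mbps:.1f} Mbit/s per client")
```

Even with a perfect switching fabric, each client would average only a few megabits per second in that scenario.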

Edit: I should add a solution to the issue I identify here. What I have done is build a computer with two Intel NICs (Network Interface Cards) and enable the Teaming feature. This allows the two cards to work as one, essentially creating a 2 gigabit network interface.
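On Linux the equivalent of Intel's Teaming is the kernel bonding driver. A sketch using `ip link` (the interface names `eth0`/`eth1` are placeholders, and 802.3ad/LACP mode requires matching configuration on the switch):

```shell
# Create an 802.3ad (LACP) bond from two NICs.
# eth0/eth1 are example names -- substitute your own interfaces.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
```

Note that a single TCP flow still rides one physical link; the aggregate 2 gigabit is realized across multiple concurrent flows.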

Solution 3:

If I use a switch on every 50 computers would it slow down the connection speed?

Your topology won't change the "connection speed", but the effective throughput would be affected.
Another consideration is the type of switch(es) you install.
An Ethernet switch can use either of two techniques for receiving and then transmitting the Ethernet frames:

  • store-and-forward (the entire frame is received & buffered before it is re-transmitted), or
  • cut-through (aka wire speed) (only the destination address has to be received & buffered before re-transmission is initiated).

For a full length Ethernet frame of 1542 bytes and 100Base-T, a store-and-forward switch would introduce a latency of about 123 microseconds, whereas a cut-through switch would introduce a latency of about 1.2 microseconds. For short frames (e.g. ARP packets and TCP Acks) the difference is of course much smaller.

As you add tiers of switches, you could be adding significant amounts of latency to the transmissions. Consider the case of one more layer than the ideal "flat" model (of just one (monster) switch):

                   |
                 Switch_A
                 /      \
                /        \
          Switch_B      Switch_C
            /               \ 
        Host_1            Host_200

For a full length Ethernet frame of 1542 bytes and 100Base-T, three store-and-forward switches would add latency of about 369 microseconds, whereas three cut-through switches would add latency of about 3.7 microseconds.
If Host_1 starts transmitting a full length Ethernet frame of 1542 bytes at 100Base-T with three store-and-forward switches in the path, then Host_200 receives the last byte about 492 microseconds later; that's an effective throughput of about 25 Mbps (compared to the actual wire speed of 100 Mbps).
With three cut-through switches in the path, then Host_200 receives the last byte about 127 microseconds later; that's an effective throughput of about 97 Mbps.
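The figures above can be reproduced directly (the 14-byte cut-through figure assumes preamble plus destination MAC must arrive before forwarding begins; real cut-through switches vary slightly):

```python
FRAME_BITS = 1542 * 8            # full-length Ethernet frame
LINK_BPS = 100_000_000           # 100Base-T
N_SWITCHES = 3

# Per-switch added latency for each forwarding technique.
store_and_forward_s = FRAME_BITS / LINK_BPS      # whole frame buffered (~123 us)
cut_through_s = 14 * 8 / LINK_BPS                # preamble + dest MAC (~1.1 us)

def effective_mbps(per_switch_s: float) -> float:
    # The sender serializes the frame once; each switch then adds its delay.
    total_s = FRAME_BITS / LINK_BPS + N_SWITCHES * per_switch_s
    return FRAME_BITS / total_s / 1e6

print(f"store-and-forward: ~{effective_mbps(store_and_forward_s):.0f} Mbps")
print(f"cut-through:       ~{effective_mbps(cut_through_s):.0f} Mbps")
```

This recovers the ~25 Mbps versus ~97 Mbps effective throughput quoted above.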

If you want the best throughput possible, then you need to use as few switches as possible (one monster switch is ideal) and use cut-through switches (to minimize the latency each switch introduces). Note that almost all low-cost switches are the slower (i.e. longer latency) store-and-forward variety.