Can an average computer configured as a router support any number of users (connected clients)? [closed]

I am planning to install a router for a rather large number of users (50-100), and I have found home routers to be very unstable with this many devices.
While researching, I found this report about which kind of Cisco router I should use as a function of the number of connected clients.

But I have a computer that I am not using right now, and its specifications are not bad:

  • Intel i5.
  • 8GB-RAM.
  • 128 GB SSD for the operating system.
  • Standard additional hard disk for data.

In the past I have installed Ubuntu Server with Zentyal, and it worked for months as a router with NAT and other extra features (firewalling, load balancing, statistics, etc.). I was wondering if it would do the job this time as well, instead of having to buy rather expensive Cisco devices (>1,500 USD for 50 clients).

So my questions are:

  • Can a decent computer be configured as a gateway router (I would want it to act at least as a NAT and DHCP server) that will support a large number of connected computers? How many?
  • If I add firewalling, traffic shaping, port forwarding, VPN, or other extra services, will that decrease the number of supported connected computers? By how much?

Note:

  • We are not talking about wireless clients at all. Just cable connections.

If you just need it to do routing, then it all boils down to one question: how many packets per second can the CPU process?

In order to load-test that, you can put a switch on each side and attach a few computers to each switch to generate traffic, then measure the number of packets you can push through. You should perform at least three measurements: one with minimum-sized packets, one with maximum-sized packets, and one with a representative mixture.
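As a minimal sketch of the measurement side, here is one way to sample packets per second on a Linux box by diffing the /proc/net/dev counters while the test machines generate traffic; the interface name and sample interval are placeholders, not part of the original setup:

```python
#!/usr/bin/env python3
"""Sample packets per second on a Linux router by diffing /proc/net/dev counters."""
import time

IFACE = "eth0"      # interface facing one of the test switches (assumption)
INTERVAL = 5        # seconds between samples (assumption)

def read_packet_counters(iface):
    """Return (rx_packets, tx_packets) for the given interface."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                # /proc/net/dev layout: rx bytes, rx packets, ... then tx bytes, tx packets, ...
                return int(fields[1]), int(fields[9])
    raise ValueError(f"interface {iface} not found")

rx0, tx0 = read_packet_counters(IFACE)
start = time.monotonic()
time.sleep(INTERVAL)
rx1, tx1 = read_packet_counters(IFACE)
elapsed = time.monotonic() - start

print(f"rx: {(rx1 - rx0) / elapsed:,.0f} pps  tx: {(tx1 - tx0) / elapsed:,.0f} pps")
```

Run one pass per packet-size profile (minimum, maximum, mixed) and note the highest sustained rate before packet loss or CPU saturation sets in.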

This will give you a measurement of how much traffic it can handle. How many users it can handle depends on how much traffic each user needs.
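As a rough, hypothetical back-of-envelope example (both figures below are assumptions, not measurements):

```python
# Hypothetical capacity estimate: measured forwarding rate vs. assumed per-user demand.
router_capacity_pps = 300_000   # sustained packets/sec from the load test (assumption)
per_user_peak_pps = 2_000       # peak packets/sec for a busy user (assumption)
headroom = 0.5                  # keep 50% spare capacity for bursts

supported_users = int(router_capacity_pps * headroom / per_user_peak_pps)
print(supported_users)          # -> 75
```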

The drawback compared to a real router is that it will have to do all the routing on the CPU, which can become a bottleneck. But as long as you know that it can handle more packets per second than you need, that is not going to be a problem.

Once you add tasks that require additional processing, the CPU and memory requirements will go up. Those tasks could be NAT, firewalling, DPI, proxying, etc. They will also make it harder to load-test your setup, because the processing time for each packet will vary a lot more than it would with plain routing.

In some cases those advanced tasks may give your computer an advantage over a real router. The real router has a chip that is specialized for routing packets and nothing else. If the majority of packets require processing too complicated for that specialized chip, the router loses its advantage. Then it comes down to the CPU, and your computer might have a more powerful CPU than the router you would otherwise be using.

Any stateful processing is going to make the situation even more complicated, and NAT, firewall, and proxy functionality is usually implemented in a stateful manner. For those the amount of memory matters, and so does how long the state is kept in memory. Every router with stateful packet handling is an obstacle to reliability, and there is no single answer on how to overcome those obstacles.
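To get a feel for why memory and state lifetime matter, here is a rough sizing sketch for a connection-tracking table; the per-entry size, per-user connection count, and table limit are all illustrative assumptions, not figures from this answer:

```python
# Rough sizing of a stateful connection-tracking table (all figures are assumptions).
bytes_per_entry = 320          # approximate memory per tracked connection
users = 100
connections_per_user = 200     # browsers, background apps, etc.
table_max = 262_144            # hypothetical upper limit on tracked connections

active_entries = users * connections_per_user
print(f"typical state memory: {active_entries * bytes_per_entry / 1024:.0f} KiB")
print(f"memory if the table fills up: {table_max * bytes_per_entry / (1024 * 1024):.0f} MiB")
```

Even the worst case is small next to 8 GB of RAM; the more important effect is that long state timeouts keep entries alive well after the traffic has stopped, so the table grows with how long state is kept, not just with how many users are active.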

None of the requirements mentioned would require any significant amount of storage. For reliability, I would have the machine boot from a RAID-1 across the two drives. As for performance, it shouldn't make a difference, since as soon as the machine has booted, it should hardly ever touch the storage again.

Running a DHCP server is not going to require a lot of processing power. With all the other requirements you already have, adding a DHCP server is such a minor task that you probably won't notice any difference in the requirements for the machine.


I've used an old 3GHz Pentium 4 w/2GB RAM as a router + firewall for 500+ users with full-duplex gigabit uplinks in every direction (we even had 1Gbps internet service), and it never skipped a beat.

Cisco's gear is robust, but not terribly powerful considering how hateful their terms and pricing are. So I wouldn't be too spooked by the numbers they throw out as provisioning guidelines.

If you want to do anything beyond pure routing (firewalling, NAT, DHCP, etc.) on the same box, stay away from Linux. Iptables/netfilter is a disaster. I'd go with raw OpenBSD, or a BSD-based firewall distribution like OPNsense or pfSense.

2021 Edit

With eBPF & nftables, Linux firewalling is pretty nice now. Those features were bleeding-edge when I wrote the original answer.