Is TCP port exhaustion real?

How quickly would one need to allocate ephemeral ports to run into TCP port exhaustion?

I was told there are ~4k (older Windows), ~16k (newer Windows) or ~28k (RH Linux) ports available for client requests. Now, is the pool of port numbers global, or per remote IP address?

If the pool is global, and ports do not become reusable until 240 seconds (Windows) or 60 seconds (RH Linux) have passed, then one would need to allocate them at roughly 16, 66 or 466 per second respectively to exhaust it?

Is this correct?

In your experience, is this something I should be practically worried about?
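
For what it's worth, here is a quick back-of-envelope sketch of that arithmetic, assuming a single global pool and the pool sizes and timeouts quoted above:

```python
# Rough back-of-envelope: pool size divided by the TIME_WAIT duration gives
# the sustained connection rate at which the ephemeral-port pool runs dry.
pools = {
    "older Windows (~4k ports, 240 s)":  (4_000, 240),
    "newer Windows (~16k ports, 240 s)": (16_000, 240),
    "RH Linux (~28k ports, 60 s)":       (28_000, 60),
}
for name, (ports, wait_seconds) in pools.items():
    print(f"{name}: ~{ports / wait_seconds:.0f} new connections/sec")
```

This prints roughly 17, 67 and 467 connections per second, which matches the ~16/66/466 figures above.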


Solution 1:

DoS attacks aside, only with poorly written applications. The basic technique for avoiding TCP port exhaustion is connection pooling, for example HTTP keep-alive. This has several beneficial effects:

  1. Fewer connections per unit of time.
  2. The first close of a connection is usually done by the client, not the server. This moves the TIME_WAIT state from the server to the client, which uses far fewer sockets and ports and can therefore tolerate it much better.
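
As a rough illustration (not from the original answer), here is a minimal Python sketch of connection pooling via HTTP keep-alive using the requests library, assuming a client that makes many requests to the same host; the URL and parameters are made up:

```python
import requests

# A requests.Session pools connections and reuses them (HTTP keep-alive),
# so many requests to the same host travel over a handful of TCP
# connections instead of opening a fresh ephemeral port for each one.
session = requests.Session()
try:
    for item_id in range(100):
        # Hypothetical endpoint, for illustration only.
        response = session.get("https://example.com/api/items",
                               params={"id": item_id}, timeout=5)
        response.raise_for_status()
finally:
    session.close()
```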

Solution 2:

Ditto EJP and Sirex. Monitoring will give you a better understanding of where you stand. You can also tweak how long a socket is allowed to remain in the TIME_WAIT state. I've had to do exactly this on a system that talks to thousands of GPRS telemetry devices. They dial in and consume ports like nobody's business. A more aggressive TIME_WAIT threshold has made our application more stable.
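
On Windows the TIME_WAIT interval is controlled by the TcpTimedWaitDelay registry value (documented range 30 to 300 seconds). A minimal sketch of adjusting it from Python, assuming administrative rights and that a reboot is acceptable for the change to take effect:

```python
import winreg

# Sketch only: lower TcpTimedWaitDelay (seconds a closed connection stays
# in TIME_WAIT). Documented range is 30-300. Run as Administrator; the
# change takes effect after a reboot. Test before rolling out.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TcpTimedWaitDelay", 0, winreg.REG_DWORD, 30)
```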

There are some useful tools on Windows for monitoring port usage, e.g. TCPView and Process Explorer (Microsoft Sysinternals). netstat -a can be slow when there are thousands of connections, so you can use netstat -an instead (this stops DNS resolution from occurring on the addresses). I can't vouch for Linux, though.
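
If you want something scriptable on either platform, here is a small sketch that tallies connection states from netstat -an output; the column layout differs slightly between Windows and Linux, so treat it as a starting point rather than a finished tool:

```python
import subprocess
from collections import Counter

# Count TCP connection states (ESTABLISHED, TIME_WAIT, ...) by parsing
# `netstat -an`; the state is the last column of each TCP line on both
# Windows and Linux, but verify against your platform's output format.
output = subprocess.run(["netstat", "-an"],
                        capture_output=True, text=True).stdout

states = Counter()
for line in output.splitlines():
    fields = line.split()
    if fields and fields[0].lower().startswith("tcp"):
        states[fields[-1]] += 1

for state, count in states.most_common():
    print(f"{state:15} {count}")
```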