Is there still a reason why binding to port < 1024 is only authorized for root on Unix systems?

Solution 1:

That port range is not meant to be programmer-defined.
It is reserved by IANA:

The Well Known Ports are those from 0 through 1023.

DCCP Well Known ports SHOULD NOT be used without IANA registration. The registration procedure is defined in [RFC4340], Section 19.9.

For differing opinions, see:

  1. From a linuxquestions thread (read at the link for more context)

    The port 1024 limit actually bites itself in the tail. It forces a daemon practice that might open security holes which make the limit ineffective as a security measure.

    • The SANS Top-20 Vulnerabilities notes

      Several of the core system services provide remote interfaces to client components through Remote Procedure Calls (RPC). They are mostly exposed through named pipe endpoints accessible through the Common Internet File System (CIFS) protocol, well known TCP/UDP ports and in certain cases ephemeral TCP/UDP ports. Historically, there have been many vulnerabilities in services that can be exploited by anonymous users. When exploited, these vulnerabilities afford the attacker the same privileges that the service had on the host.


These days, protocols like BitTorrent and Skype have moved through ephemeral ports into the unreserved space and do not require root access. The goal is not just to bypass this old reserved-port security; today's protocols want to bypass even the network perimeter (Skype is a good example of that). Things will go further as network bandwidth and availability increase, when every computer user will probably run a web server of their own -- and maybe, unknowingly, be part of a botnet.


We still need the security these old methods aimed at,
but it will have to be achieved in newer ways now.

Solution 2:

Well, the original thinking, as I recall from the days of BSD Unix, was that ports < 1024 were reserved for "well known services". The assumption was that servers would be relatively rare, and that folks with root privileges could be presumed "trusted" in some way. So you had to be privileged to bind a socket to listen on a port that would represent a network service other users would access.
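The restriction is easy to observe for yourself: on a typical Linux or BSD system, an unprivileged process gets EACCES when it tries to bind below 1024, while root (or, on modern Linux, a process with CAP_NET_BIND_SERVICE) succeeds. A minimal Python sketch, assuming a Unix-like system:

```python
import errno
import socket

def try_bind(port):
    """Try to bind a TCP socket to `port` on loopback.

    Returns None on success, or the errno name (e.g. 'EACCES') on failure.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return None
    except OSError as e:
        return errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()

# As an unprivileged user this typically prints 'EACCES';
# as root (or with CAP_NET_BIND_SERVICE on Linux) it prints None.
print(try_bind(80))
```

The check lives in the kernel's bind path, not in libc, which is why no amount of userland cleverness gets around it without privilege.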

Ports 1024-4999 were intended to be used as "ephemeral" ports that would represent the client's side of a TCP connection. Ports 5000+ were intended for non-root servers.
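The ephemeral-port convention survives in the BSD sockets API: binding to port 0 asks the kernel to pick a free port from its local ephemeral range (the exact range is OS-configurable; on Linux it is the net.ipv4.ip_local_port_range sysctl, and the modern default starts well above the old 1024-4999 band). A small sketch:

```python
import socket

# Bind to port 0: the kernel assigns an ephemeral port from its local range.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]  # the port the kernel actually chose
print(port)                # always an unprivileged port, >= 1024
s.close()
```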

Obviously all of those assumptions went out the window pretty quickly. Check the IANA TCP port number reservation list to see just how high things have gotten.

One solution to this problem was the RPC portmapper idea. Instead of reserving a TCP port for each service, the service would start up on a random port and tell the portmapper daemon where it was listening. Clients would ask the portmapper "where is service X listening?" and proceed from there. I can't recall what security mechanisms were in place to protect well-known RPC services from impersonation.
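The idea can be illustrated with a toy in-memory registry. This is only a sketch of the concept, not the real ONC RPC portmapper protocol, and the names `register_service` and `lookup_service` are made up here:

```python
import socket

_registry = {}  # service name -> port; stands in for the portmapper's table

def register_service(name, port):
    """A service announces which port it ended up listening on."""
    _registry[name] = port

def lookup_service(name):
    """A client asks where a named service is listening."""
    return _registry.get(name)

# A "service" starts on a kernel-assigned ephemeral port, then registers
# itself instead of claiming a reserved port < 1024.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
register_service("service-x", srv.getsockname()[1])

# A "client" looks the service up rather than relying on a well-known port.
print(lookup_service("service-x"))
srv.close()
```

In the real system, only the portmapper itself needed a well-known port (111); everything else could live wherever the kernel put it.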

I'm not sure there's a Good Reason these days for all of this, but like most of the *nix world things tend to accumulate vs. getting completely reinvented.

Anyone read Vernor Vinge? I remember him writing in one of his novels about a computer system in the far future that incorporated layers and layers of code from the ancient past, with the time still being represented by the number of seconds since some ancient date (1/1/1970 to be exact). He's probably not far off.

Solution 3:

In the old days, regular users used to log in to Unix machines, so you wouldn't want an average user setting up a fake ftp service or something.

These days, the typical usage is that only the admin and a few other trusted people have logins to a server, so if the model was redone today, the < 1024 restriction might not be present.