What does serverperfmode=1 actually do on macOS?

Turning it on is described here, but there are no details.

There is a vague description:

Performance mode changes the system parameters of your Mac. These changes take better advantage of your hardware for demanding server applications.

What is actually changing inside the system/kernel?


Turning on Server Performance Mode essentially raises some kernel/net parameters governing the maximum number of allowed processes and connections, and modifies some memory/timer settings:

...
kern.maxvnodes: 66560 > 300000
kern.maxproc: 1064 > 5000
...
kern.maxfilesperproc: 10240 > 150000
kern.maxprocperuid: 709 > 3750
kern.ipc.maxsockbuf: 4194304 > 8388608
...
kern.ipc.somaxconn: 128 > 1024
...
kern.ipc.nmbclusters: 32768 > 65536
...
kern.ipc.sbmb_cnt_peak: 1120 > 1170
...
kern.ipc.njcl: 10920 > 21840
...
kern.timer.longterm.qlen: 100 > 0
kern.timer.longterm.threshold: 1000 > 0
...
net.inet.ip.maxfragpackets: 1024 > 2048
...
net.inet.tcp.tcbhashsize: 4096 > 8192
...
net.inet.tcp.fastopen_backlog: 10 > 200
...
net.inet6.ip6.maxfragpackets: 1024 > 2048
...
net.inet6.ip6.maxfrags: 2048 > 4096
# plus some vm page-out/compressor and memory/cache settings

The goal is to allow more open files (especially needed by web servers) and more connections, so more clients can be served at the same time, and to discard single server threads from memory/virtual memory faster (if I interpret certain modifications correctly).


In the past, Apple shipped a separate server OS; now that server workloads run on top of the consumer OS, some basic tuning can help the operating system run processes for, say, 25 users connecting to a server instead of being tuned for one person using the OS. These tunings are just a starting point: anyone who wants their server to perform under high load needs to customize and monitor things at a far more detailed level than simply having performance mode on or off.

Also, these limits mostly exist to prevent bad software from bringing down a server by exhausting limited resources such as inter-process communication (IPC) signaling channels. On a system where one user is running, you want to halt a runaway process sooner than on a system running dozens of processes for dozens of users. The "performance" here can be seen as raising some hard limits, as opposed to "serving one file or one web page faster".


Server Performance Mode (a.k.a. perfmode or serverperfmode) changes a number of kernel parameters, reserving a lot more memory for the kernel in order to provide a lot higher limits and thus enable a lot more processes to run, files to be open, and network connections to be handled, among other things. All of the parameters scale with the amount of memory installed, within limits, and nothing changes unless you have at least 16 GiB of memory installed. @klanomath's numbers correspond to having 16 GiB of memory installed.

Here is a brief description from an old Apple support document about Mac OS X Server 10.6:

  • For each 8GB of installed memory, 2500 processes and 150,000 vnodes are available.
  • The maximum number of threads is set to five times (5x) the maximum number of processes. (This no longer seems to be true.)
  • A single user ID (uid) can use up to 75% of the maximum number of processes.
  • A single process can allocate up to 20% of the maximum threads value.

Under performance mode with 48 GiB of memory, I see:

kern.maxvnodes: 900000
kern.maxproc: 15000
kern.maxprocperuid: 11250
kern.num_tasks: 15000
kern.num_taskthreads: 15000
kern.num_threads: 75000
kern.maxfiles: 900000
kern.maxfilesperproc: 450000

kern.ipc.maxsockbuf: 8388608
kern.ipc.somaxconn: 2048
kern.ipc.nmbclusters: 131072
kern.ipc.sbmb_cnt_peak: # This parameter is not in my kernel
kern.ipc.njcl: 43688
...
kern.timer.longterm.qlen: 0 # same
kern.timer.longterm.threshold: 0 # same
...
net.inet.ip.maxfragpackets: 4096
...
net.inet.tcp.tcbhashsize: 32768
net.inet.tcp.fastopen_backlog: 600
...
net.inet6.ip6.maxfragpackets: 4096
net.inet6.ip6.maxfrags: 8192

If you really want to dig into it, you can read the actual code. Below is from El Capitan 10.11.6. Server mode is still the same (up to the most recently published code, which is from OS X 10.14 Mojave), but normal mode got a performance bump starting in OS X 10.13 High Sierra if you have at least 12 GiB of memory (changes included in comments in the code).

The scale_setup function sets the scale factor to floor(memsize / 8 GiB) if you have Server Performance Mode enabled and at least 16 GiB of memory installed. Otherwise it is zero unless you have at least 3 GiB of memory, in which case it is 2 (or, starting with High Sierra, memsize / 4 GiB). (The value of task_max at the beginning of the code snippet is set when the kernel is built, and it is unclear how Apple sets it when distributing OS X. It's probably 1024.)

    typeof(task_max) task_max_base = task_max;

    /* Raise limits for servers with >= 16G */
    if ((serverperfmode != 0) && ((uint64_t)sane_size >= (uint64_t)(16 * 1024 * 1024 *1024ULL))) {
        scale = (int)((uint64_t)sane_size / (uint64_t)(8 * 1024 * 1024 *1024ULL));
        /* limit to 128 G */
        if (scale > 16)
            scale = 16;
        task_max_base = 2500;
    } else if ((uint64_t)sane_size >= (uint64_t)(3 * 1024 * 1024 *1024ULL))
        scale = 2;
    /* Starting with OS X 10.13 High Sierra, if more than 8 GiB of memory,
     * scale = sane_size / 4 GiB with max of 16 (64 GiB or more)
     */

    task_max = MAX(task_max, task_max_base * scale);

    if (scale != 0) {
        task_threadmax = task_max;
        thread_max = task_max * 5; 
    }

Side note: Notice in scale_setup above that the scale factor for serverperfmode is the system memory divided by 8 GiB, while for regular mode (starting with High Sierra) it is the system memory divided by 4 GiB. So a computer with 32 GiB of memory will have twice the scale factor in normal mode as in performance mode, making it even less likely that you will want to use serverperfmode on a machine with a lot of memory.

The scale factor is applied in bsd_scale_setup (only for a 64-bit kernel) or here for High Sierra. This modifies the kernel parameters that are discussed above and are visible via sysctl. Note that if Server Performance Mode is not enabled, the only thing that is scaled is maxproc (532 -> 1064) and maxprocperuid (266 -> 709) until High Sierra, when maxfiles and maxfilesperproc are also bumped if you have at least 12 GiB of memory. That said, the other parameters scaled in serverperfmode are mainly about handling large numbers of network connection requests, something you are unlikely to need unless you are running a real web server with a very high load.

    /* The initial value of maxproc here is 532 */
    if ((scale > 0) && (serverperfmode == 0)) {
        maxproc *= scale;
        maxprocperuid = (maxproc * 2) / 3;
        /* Starting with OS X 10.13 High Sierra, this clause is added
        if (scale > 2) {
            maxfiles *= scale;
            maxfilesperproc = maxfiles/2;
        }
        *** end of High Sierra addition */
    }
    /* Apply server scaling rules */
    if ((scale >  0) && (serverperfmode !=0)) {
        maxproc = 2500 * scale;
        hard_maxproc = maxproc;
        /* no fp usage */
        maxprocperuid = (maxproc*3)/4;
        maxfiles = (150000 * scale);
        maxfilesperproc = maxfiles/2;
        desiredvnodes = maxfiles;
        vnodes_sized = 1;
        tcp_tfo_backlog = 100 * scale;
        if (scale > 4) {
            /* clip somaxconn at 32G level */
            somaxconn = 2048;
            /*
             * For scale > 4 (> 32G), clip
             * tcp_tcbhashsize to 32K
             */
            tcp_tcbhashsize = 32 *1024;

            if (scale > 7) {
                /* clip at 64G level */
                max_cached_sock_count = 165000;
            } else {
                max_cached_sock_count = 60000 + ((scale-1) * 15000);
            }
        } else {
            somaxconn = 512*scale;
            tcp_tcbhashsize = 4*1024*scale;
            max_cached_sock_count = 60000 + ((scale-1) * 15000);
        }
    }

Finally, the scale factor is also applied in bsd_exec_setup. This configures how much kernel memory is reserved for assembling all the data needed to initialize a process. How a process is exec'd is worthy of a full chapter in a book on the Unix kernel so I won't go into it here. The high-level consequence of this setting is that a bigger number takes up more memory, but allows a larger number of processes to be created per second. (Although this code has stayed the same through the present/Mojave, the effect changed with the change in how scale is computed in High Sierra. Recall the details above: in High Sierra and later, scale is roughly (memory / 4 GiB) for normal mode and (memory / 8 GiB) for server mode. So bsd_simul_execs can actually go down when you switch to server mode.)

    switch (scale) {
        case 0:
        case 1:
            bsd_simul_execs = BSD_SIMUL_EXECS;
            break;
        case 2:
        case 3:
            bsd_simul_execs = 65;
            break;
        case 4:
        case 5:
            bsd_simul_execs = 129;
            break;
        case 6:
        case 7:
            bsd_simul_execs = 257;
            break;
        default:
            bsd_simul_execs = 513;
            break;
            
    }
    bsd_pageable_map_size = (bsd_simul_execs * BSD_PAGEABLE_SIZE_PER_EXEC);

For El Capitan through the present/Mojave, BSD_PAGEABLE_SIZE_PER_EXEC = 264 * 1024, so for my 48 GiB Mac the kernel reserves about 66 MiB of memory just as buffer space for setting up new processes to be spawned. On the one hand, that is a crazy high number, even for a web server. On the other hand, 66 MiB is peanuts compared to the 48 GiB on the machine.

So Server Performance Mode does take up more memory, and makes the system more likely to suffer if some program goes out of control consuming resources, but it greatly increases the system's capacity to handle a lot more background tasks. I think Apple made the right call by not turning it on by default while making it easy to enable, and I am glad that with High Sierra they now raise limits in normal mode if you have enough memory. I would leave server mode off (and have left it off) on all my computers until I notice them running into issues from having so many server programs running. After all, it does not speed up the system clock, it does not increase disk speed, and it only increases network I/O if you have hundreds of connections. There's a decent chance your firewall/router will have trouble keeping up by the time server mode has a real impact on your network throughput.

On the other hand, if you really have a need to run 2000 processes, server mode is your only option until you get to High Sierra. The good news is that it is easy enough to turn on, try out, and if you don't like it, turn back off.