How is Windows Server optimized differently than Windows desktop?

All the answers I see mostly just say "server is optimized to be a server and desktop is optimized to be a desktop" with no technical details explaining how and where these optimizations are applied.

They should be running the same kernel, right? So if we exclude software running on top of the OS (obviously the whole enterprise software stack only runs on Server), what tweaks and optimizations separate the two OSs?

This question was asked in broader terms here. The accepted answer pointed to these differences between the two OSs: the amount of supported memory and processors, supported software and services, supported connections (though this can be modified), and "the server OS is configured to give priority to background apps/services and the client OS is configured to give priority to foreground apps".

I have not been able to find any docs that explain how Server prioritizes background services, or whether there are other tweaks to things like the networking stack or other low-level components of the OS.

Is there any documentation that describes these optimizations/kernel tweaks in specific technical terms?


Solution 1:

I am not aware of a white paper that details the differences. However, the behavior of both the server and desktop OS is configurable. By default the desktop gives scheduling priority to foreground apps and the server gives priority to background services, but either one can be switched. If you run an RDP / terminal server you often want the full desktop experience; on Server that has to be installed as an extra feature. To reach the scheduling setting, open Control Panel, go to "System and Security > System", and choose "Advanced system settings". Then under Performance click "Settings". That dialog is the adjustment area for both visual effects and processor scheduling.
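
If you want to inspect or script that radio button rather than use the dialog, the setting is backed by the Win32PrioritySeparation DWORD under HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl. Below is a minimal Python sketch (Windows only) that reads it. The value interpretations shown (0x26 for "Programs", 0x18 for "Background services") are commonly reported GUI defaults, not something confirmed in this thread, and may vary by Windows version, so treat them as assumptions.

```python
# Minimal sketch, assuming Windows + Python 3: read the processor-scheduling
# setting that the Performance Options dialog controls. The labels mapped to
# 0x26 / 0x18 / 0x02 are commonly reported defaults, not verified constants.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\PriorityControl"
VALUE_NAME = "Win32PrioritySeparation"

def read_priority_separation() -> int:
    """Return the raw Win32PrioritySeparation DWORD from HKLM."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
        if value_type != winreg.REG_DWORD:
            raise TypeError(f"unexpected registry value type: {value_type}")
        return value

if __name__ == "__main__":
    raw = read_priority_separation()
    # Assumed mapping: 0x26 = short, variable quantums with a foreground
    # boost ("Programs"); 0x18 = long, fixed quantums with no boost
    # ("Background services"); 0x02 = legacy workstation default.
    labels = {
        0x26: 'GUI "Programs" setting (typical desktop)',
        0x18: 'GUI "Background services" setting (typical server)',
        0x02: "legacy workstation default",
    }
    print(f"{VALUE_NAME} = 0x{raw:X} ({labels.get(raw, 'custom/other')})")
```

Writing a new DWORD to the same key with winreg.SetValueEx (run elevated) should be the scripted equivalent of flipping the radio button, though I would verify the effect on your specific Windows version before relying on it.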