What is the impact of increasing the httpd process priority (negative nice)?

Renicing a process to a negative nice level increases its scheduling priority. High-priority processes run before low-priority processes, and they are allowed to run longer before being preempted (they receive longer time slices).
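For concreteness, here's a sketch of the commands involved (assuming a Linux-like system with standard tools; note that negative nice values require root, so the runnable example below uses a positive value):

```shell
# Start a process at a given nice level. Positive values lower priority
# and need no privileges; negative values raise it and require root.
nice -n 10 sleep 30 &
pid=$!

# The NI column shows the nice value of the new process:
ps -o pid,ni,comm -p "$pid"

# Raising the priority of an already-running process (root only), e.g.:
#   sudo renice -n -5 -p "$pid"
kill "$pid"
```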

On a web server, running httpd processes with a negative nice value should reduce context switching and therefore improve overall performance.

Has anyone tried doing this? What kind of a difference does it make? Is response time improved? Does the response time standard deviation increase or decrease? Is throughput improved?

Edit:

I'm not looking to reduce the priority of other processes, and I'm not looking to fix an overloaded system. I'm wondering whether negative nice levels actually reduce context switches, and if so, what impact that has on response time and throughput.


I'd strongly suggest not doing this. If your server is so heavily loaded that Apache isn't getting the CPU time it needs, then renicing a process will have a minimal effect, if any. Additionally, Apache runs many processes at any one time. You could renice the parent process, and its children will inherit the parent's niceness, but then all of the httpd processes will simply be competing with each other at the same priority.
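The inheritance mentioned above is easy to verify: a child started at a given nice level reports that same value (a small sketch; no root needed because it uses a positive nice value):

```shell
# A child process inherits its parent's nice value: the sh below is
# started at nice 5, and ps confirms its NI column reads 5.
nice -n 5 sh -c 'ps -o ni= -p $$'
```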

In old kernels (say, late '90s/early '00s), I found some value in renicing processes (though I usually only decreased priority, never increased it). With the scheduler in modern kernels, I've never found it to be necessary or even worth thinking about.

In conclusion: to solve performance problems, there are plenty of other areas I would examine before I started blaming process priority.


Warning: IANAKD (I am not a Kernel Developer :)

Assuming we are talking about Linux, you can read about how nice is implemented at:

http://www.kernel.org/doc/Documentation/scheduler/sched-nice-design.txt

To attempt to answer your question: it's unlikely to affect throughput, because nice only matters when runnable processes are competing for the CPU. A process blocked on I/O isn't runnable at all, and a web server serving static content is most likely going to be I/O bound.

See also the "ionice" command, although it's only useful when different processes are contending for I/O.
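A minimal, Linux-only sketch of ionice in action (class 3 is "idle", meaning the process only gets disk time when nothing else wants it; the child simply asks ionice to report its own class):

```shell
# Run a command in the idle I/O scheduling class, then have the child
# report its own scheduling class back. Typical output: idle
ionice -c 3 sh -c 'ionice -p $$'
```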

Ultimately I don't think that reducing context switches will make a difference. Most likely a Web server serving static content is spending all its time doing I/O (network or disk) and the CPU is relatively idle. Processors are so much more powerful than I/O subsystems these days that they are rarely the bottleneck in many server scenarios. (Obviously pure number crunching or 3D rendering and such are exceptions to this.) If you're considering implementing this on a real server to improve performance, I would suggest first monitoring the system using tools like top, iostat, vmstat, dstat, etc. to see where it's spending most of its time. Once you have data on the bottleneck, you'll be in a good position to look for solutions to it.
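The monitoring step can start as simply as the following (tool names are those mentioned above; iostat comes from the sysstat package and may need installing, and the top flags assume Linux procps):

```shell
# CPU, memory, run queue, and context-switch (cs) counts,
# sampled once per second, five times:
vmstat 1 5

# Per-device I/O utilization and wait times (sysstat package):
iostat -x 1 5

# One batch-mode snapshot of the busiest processes:
top -b -n 1 | head -20
```

If vmstat shows the CPU mostly idle while iostat shows devices near 100% utilization, renicing won't help; the bottleneck is I/O.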