Flush-0:n processes causing massive bottleneck

Solution 1:

Your system is being overloaded with disk write requests, and the configured "dirty ratio" values are not optimal for your environment.
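Before tuning anything, it is worth confirming that write-back really is the bottleneck. A rough check with standard tools (the thread name pattern assumes the flush-0:n style mentioned in the title):

# vmstat 1 5
# ps -eo pid,stat,comm | grep flush

A consistently high "wa" (I/O wait) column in vmstat, together with flush threads sitting in the D (uninterruptible sleep) state, points to the disk write-back pressure described above.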

You can tune two administrative parameters for virtual memory:

These are dirty_background_ratio and dirty_ratio, located in /proc/sys/vm/.

Both parameters represent a percentage of total memory.

Setting a low value for dirty_ratio produces more disk load, but reduces the amount of RAM consumed for dirty memory management.

The dirty_background_ratio is the percentage of memory at which the system starts writing dirty data out to disk in the background. This means you must find the best compromise between the size of the dirty chunks to be written out (the flush processes) and the threshold at which the system stalls further writes until the flush has caught up.
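You can read the values currently in effect before changing them (standard paths, assuming /proc is mounted in the usual place):

# cat /proc/sys/vm/dirty_ratio
# cat /proc/sys/vm/dirty_background_ratio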

A combination for good performance could be:

dirty_ratio 90%
dirty_background_ratio 5%

An average combination:

dirty_ratio 40~50%
dirty_background_ratio 10~20%
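To judge which combination suits your workload, a rough approach is to watch how much dirty memory actually accumulates while the server is under its normal load:

# grep -E 'Dirty|Writeback' /proc/meminfo

If the Dirty value regularly grows to a large fraction of your RAM, the average (more conservative) values are the safer starting point.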

There can be several causes for this imbalance in your system. Among the most common is an insufficient amount of RAM for the installed services; at other times it may simply be due to degraded performance of the memory installed in your server, with causes ranging from poor ventilation to an incorrect power supply.

Although most of these problems show up as software bugs, many of them are in fact due to poor configuration of the hardware relative to the services installed, especially in the case of rented machines.


To help those less familiar with Linux machines, the parameters mentioned above can be set in this way:

Permanent mode:
(run these two commands only once; otherwise, edit this file with your favorite editor)

# echo "vm.dirty_ratio = 40" >> /etc/sysctl.conf
# echo "vm.dirty_background_ratio = 10" >> /etc/sysctl.conf

Temporary mode (these settings are lost on reboot):

# echo "40" > /proc/sys/vm/dirty_ratio
# echo "10" > /proc/sys/vm/dirty_background_ratio

You can find more information about these settings at this link

Solution 2:

I found the following link with a similar discussion:

0005972: Top and uptime displays wrong load average value - CentOS Bug Tracker

The last post says:

The high load average issue is resolved in a newer version of the hpvsa driver (1.2.4-7) that is now released by HP. Contact HP Support to obtain a copy of the new driver.