Why is swap used when plenty of free memory is left?
I have a pretty good dedicated web server with good memory resources:
System information
Server load: 2.19 (8 CPUs)
Memory used: 29.53% (4,804,144 of 16,267,652 kB)
Swap used: 10.52% (220,612 of 2,097,136 kB)
As you can see, my server is using swap even though plenty of free memory is available.
Is this normal, or is there something wrong with the configuration or the code?
N.B.
My MySQL process is using over 160% of the CPU for some reason; I don't know why, since I never have more than 70 simultaneous users ...
This is perfectly normal.
At system startup, a number of services start. These services initialize themselves, read in configuration files, create data structures, and so on. They use some memory. Many of these services will never run again for the entire time the system is up because you're not using them. Others may not run again for hours, days, or weeks. Yet all of this data sits in physical memory.
Of course, the system can't throw this data away. It can't prove that it will literally never be accessed. One of those services, for example, might be the one that provides you remote access to the box. You may not have used it in a week, but if you do use it, it had better work.
But the system knows that it might like to use that physical memory for other things, such as a disk cache, that would improve performance. So it does opportunistic swapping: when it has nothing better to do, it writes data that hasn't been used in a very long time to disk, using swap space. However, it still keeps the pages in physical memory, so they can still be accessed without having to swap them in.
Now, if the system later needs that physical memory for something else, it can simply throw those pages away because it has already written them to swap. This gives the system the best of both worlds. The data is still kept in memory, so it can be accessed without having to read it from disk. But if the system needs that memory for another purpose, it won't have to write it out first. Big win all around.
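If you are curious which mostly-idle processes the kernel has written out, you can read the VmSwap field from /proc/<pid>/status. A minimal sketch, assuming a kernel recent enough (2.6.34 or later) to report VmSwap:

    # List processes that have pages in swap, largest first.
    # VmSwap is reported in kB; processes with nothing in swap are skipped.
    for f in /proc/[0-9]*/status; do
        awk '/^Name:/ {name=$2} /^VmSwap:/ && $2 > 0 {print $2 " kB\t" name}' "$f"
    done | sort -rn

On a healthy box you will typically see long-idle daemons at the top of that list, not your busy services.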
This can happen if at some point in the past you needed more memory than the machine has physical RAM. At that time some data would have been written to the swap space.
When memory is freed later, data from swap is not automatically read back into RAM: that only happens when some process actually needs the data in swap. This is perfectly normal.
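If you have enough free RAM to hold everything currently in swap and want to pull it back in immediately, you can cycle swap off and on; a sketch (run as root):

    # Check first that free RAM comfortably exceeds used swap,
    # otherwise swapoff will fail or cause heavy paging.
    free -m
    # Move everything out of swap back into RAM, then re-enable swap.
    swapoff -a && swapon -a

This is purely cosmetic, though: as explained above, the kernel will fault the pages back in on its own the moment they are actually needed.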
As for your mysql process: it all depends on the type of queries you run. In theory, two very complex queries could suffice to produce such a load, regardless of your number of users. You could enable the slow query log to gain more insight into which queries are load-intensive.
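For example, you can switch the slow query log on at runtime; a sketch, assuming MySQL 5.1 or later and an account with the SUPER privilege (the log file path here is just an example):

    # Log every statement that takes longer than 1 second.
    # These settings are lost on restart unless also added to my.cnf.
    mysql -e "SET GLOBAL slow_query_log = 'ON';
              SET GLOBAL long_query_time = 1;
              SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';"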
You can also change this swapping behaviour with sysctl -w vm.swappiness=10, which will greatly reduce the use of swap until it is actually needed.
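Note that sysctl -w only lasts until the next reboot; to make the setting persistent, also add it to /etc/sysctl.conf (or a file under /etc/sysctl.d/ on newer distributions). A sketch:

    # Lower swappiness immediately:
    sysctl -w vm.swappiness=10
    # Keep the setting across reboots:
    echo "vm.swappiness = 10" >> /etc/sysctl.conf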
As for MySQL, have you at least performed a baseline configuration test using the tuning-primer.sh script?
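If not, it only takes a minute; a sketch, assuming you have already downloaded tuning-primer.sh into the current directory:

    # Make the script executable and run it against the local server.
    # Depending on the version it reads credentials from ~/.my.cnf
    # or prompts for them.
    chmod +x tuning-primer.sh
    ./tuning-primer.sh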
This is probably, as David explained, normal behavior of the Linux kernel, but it can also be an occurrence of the MySQL “swap insanity” problem. In your case (8 CPUs, 16 GB RAM total, 5 GB used), for that to happen, your server would have to be a NUMA system with 4 nodes (sockets), 4 GB of RAM per node, and a MySQL InnoDB buffer pool of 4 GB.
In short (you should read the link above for complete details), this is what happens:
- When your system starts, processes are spread across all the NUMA nodes, using some of their memory.
- When MySQL starts, it allocates 4 GB for the InnoDB buffer pool, filling the RAM of one NUMA node and using some RAM on the other nodes.
- Then the Linux kernel, which cannot move allocated RAM from one NUMA node to another, decides it is a good idea to swap out pages from the starved node (or has to swap out pages because other pages need to be swapped in).
To avoid that, change the memory allocation for MySQL so that it allocates RAM across all nodes (see the link above for more details).
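To check whether your server is actually a multi-node NUMA system, and to apply the usual workaround, you can use numactl; a sketch, assuming the numactl package is installed and that you can change how mysqld_safe is launched:

    # Show the NUMA topology: node count and free memory per node.
    # If this reports a single node, "swap insanity" does not apply.
    numactl --hardware
    # Workaround: start MySQL with its memory interleaved across all
    # nodes instead of allocated from a single starved one.
    numactl --interleave=all mysqld_safe &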