Red Hat Linux: server paging, sum of RES/RSS + buffers + cached < total. Who is using my memory?

We have a server with 8 GB of RAM running MySQL, four Java applications and a small PostgreSQL database. We're having issues with the system paging when running some MySQL queries, so to see what's going on we ran some commands to check how memory is being used. The problem is that we can't find out who is using 4 of our 8 GB. Here is the data we have:

[root@server1 ~]# free -mt
             total       used       free     shared    buffers     cached
Mem:          7983       7934         49          0         65        584
-/+ buffers/cache:       7284        699
Swap:         5951       1032       4919
Total:       13935       8966       4968

So buffers and cache usage seems very small, which means memory should be mostly in use by processes:

top sorted by %MEM gives (only processes with %MEM > 0.0):

top - 16:31:00 up 3 days, 16:26,  3 users,  load average: 0.51, 0.53, 0.56
Tasks: 153 total,   1 running, 151 sleeping,   0 stopped,   1 zombie
Cpu(s):  9.9%us,  0.6%sy,  0.0%ni, 85.3%id,  4.1%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   8175264k total,  8126480k used,    48784k free,    70312k buffers
Swap:  6094840k total,  1057440k used,  5037400k free,   595348k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  DATA nFLT nDRT COMMAND
 7377 mysql     15   0 2677m 1.8g 4624 S  3.0 22.7 107:28.89 2.6g  95k    0 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/my
17836 root      21   0 2472m 1.0g 9872 S  2.3 13.2  11:18.44 2.4g  403    0 /usr/local/jdk1.6.0_25//bin/java -Djava.util.logging.config.file=/tomcat/conf/logging.properties -Drmi
17969 root      18   0  731m 222m  10m S  3.0  2.8  14:06.67 640m   33    0 /java//bin/java -Xms256m -Xmx256m -Xloggc:/var/log/gestorDeTramas.gc -XX:+PrintGCDetails -Dservice.mod
17980 root      21   0  407m  77m  10m S  0.0  1.0   0:04.35 313m   36    0 /java//bin/java -Xms32m -Xmx64m -Xloggc:/var/log/mensajero.gc -XX:+PrintGCDetails -Dservice.mode=rmi -
19866 postgres  15   0  160m  43m  34m S 11.3  0.5   1:51.57 9856    4    0 postgres: postgres mapas 127.0.0.1(46751) idle
24892 postgres  15   0  160m  41m  32m S  0.0  0.5   1:09.97 9804    3    0 postgres: postgres mapas 127.0.0.1(36534) idle
24891 postgres  15   0  160m  40m  31m S  0.0  0.5   0:50.74 9892    1    0 postgres: postgres mapas 127.0.0.1(36533) idle
24886 postgres  15   0  160m  40m  31m S  4.7  0.5   0:51.35 9936    0    0 postgres: postgres mapas 127.0.0.1(36528) idle
23622 postgres  15   0  160m  40m  31m S  0.0  0.5   0:55.42 9952    1    0 postgres: postgres mapas 127.0.0.1(47826) idle
24887 postgres  15   0  160m  40m  31m S  0.0  0.5   0:44.11 9888    2    0 postgres: postgres mapas 127.0.0.1(36529) idle
24880 postgres  16   0  160m  38m  29m S  4.3  0.5   0:42.49 9920    2    0 postgres: postgres mapas 127.0.0.1(36522) idle
24881 postgres  15   0  160m  29m  20m S 12.6  0.4   0:04.66 9948    0    0 postgres: postgres mapas 127.0.0.1(36523) idle
 4139 root      34  19  256m  11m 1652 S  0.0  0.1   0:11.58  18m  902    0 /usr/bin/python -tt /usr/sbin/yum-updatesd

Adding up the RES column gives around 3.5 GB. So the question is: who is using my physical memory? Even RES + buffers + cached adds up to only about 4 GB, and the server has 8 GB! What can we do to diagnose the server and find out who's using the missing RAM?
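
For reference, one quick way to approximate the total resident set across all processes is to sum the RSS column from ps (this double-counts shared pages, so treat it as a rough upper bound):

ps -eo rss --no-headers | awk '{sum+=$1} END {printf "%.1f MB\n", sum/1024}'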

EDIT: The server is a VMware guest.

Also note that the mysqld process has a LOT of page faults (see the nFLT column above).

More data:

[root@server1 ~]# cat /proc/meminfo
MemTotal:      8175264 kB
MemFree:         47204 kB
Buffers:         63180 kB
Cached:         611144 kB
SwapCached:     513392 kB
Active:        3458724 kB
Inactive:       538952 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      8175264 kB
LowFree:         47204 kB
SwapTotal:     6094840 kB
SwapFree:      5037416 kB
Dirty:          342108 kB
Writeback:          44 kB
AnonPages:     3303684 kB
Mapped:          61352 kB
Slab:            79452 kB
PageTables:      19236 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  10182472 kB
Committed_AS:  5778312 kB
VmallocTotal: 34359738367 kB
VmallocUsed:    266052 kB
VmallocChunk: 34359471959 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB
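
As a rough sanity check on /proc/meminfo itself, the obvious consumers can be summed and compared against MemTotal (the field selection below is approximate, since some categories overlap). With the numbers above it accounts for roughly 4.6 GB of the 8 GB, so several GB are not explained by any standard counter:

awk '/^(MemFree|Buffers|Cached|SwapCached|AnonPages|Slab|PageTables):/ {sum+=$2} END {print sum " kB accounted for"}' /proc/meminfo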

Another way to see how much resident memory the two top processes are using:

[root@server1 ~]# cat /proc/7377/smaps | grep ^Rss | awk '{A+=$2} END{print A}'
1852112
[root@server1 ~]#

[root@server1 ~]# cat /proc/17836/smaps | grep ^Rss | awk '{A+=$2} END{print A}'
1081620
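
The same approach can be extended to every process on the system (shared mappings are counted once per process, so again this is only an upper bound):

grep -h ^Rss /proc/[0-9]*/smaps 2>/dev/null | awk '{A+=$2} END{print A " kB"}'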

Solution 1:

Is the system a VMware guest? If so, the VMware balloon driver can be using the unaccounted-for memory.
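
A quick way to check is to see whether the balloon module is loaded; it is called vmmemctl in the VMware Tools packages and vmw_balloon in mainline kernels:

lsmod | grep -Ei 'vmmemctl|vmw_balloon'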

Solution 2:

Thanks to Minto Joseph's answer, I found that the VMware balloon driver (vmmemctl) was using the missing memory.

cat /proc/vmmemctl

target:              1000894 pages
current:             1000894 pages
rateNoSleepAlloc:      16384 pages/sec
rateSleepAlloc:         2048 pages/sec
rateFree:              16384 pages/sec

timer:                325664
start:                     3 (   0 failed)
guestType:                 3 (   0 failed)
lock:                3623088 (  29 failed)
unlock:               623698 (   0 failed)
target:               325664 (   2 failed)
primNoSleepAlloc:    3620199 (  11 failed)
primCanSleepAlloc:      2900 (   0 failed)
primFree:            2622165
errAlloc:                 28
errFree:                  28

getconf PAGESIZE
4096
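
Multiplying the balloon target by the page size accounts for the missing memory:

echo $((1000894 * 4096))   # 4099661824 bytes, roughly 3.8 GiB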

So there you have the missing 4 GB.

It's a pity that vmmemctl doesn't report the memory it's using through a standard interface, but I suppose that's down to how it's implemented.

The main reference from VMware offers a lot of detail about ballooning. I'm quoting it here since it's relevant to our original problem ('why is this server paging if it has unused memory?'):

"Typically, the hypervisor inflates the virtual machine balloon when it is under memory pressure. By inflating the balloon, a virtual machine consumes less physical memory on the host, but more physical memory inside the guest. As a result, the hypervisor offloads some of its memory overload to the guest operating system while slightly loading the virtual machine. That is, the hypervisor transfers the memory pressure from the host to the virtual machine. Ballooning induces guest memory pressure. In response, the balloon driver allocates and pins guest physical memory. The guest operating system determines if it needs to page out guest physical memory to satisfy the balloon driver’s allocation requests. If the virtual machine has plenty of free guest physical memory, inflating the balloon will induce no paging and will not impact guest performance. In this case, as illustrated in Figure 6, the balloon driver allocates the free guest physical memory from the guest free list. Hence, guest-level paging is not necessary.

However, if the guest is already under memory pressure, the guest operating system decides which guest physical pages to be paged out to the virtual swap device in order to satisfy the balloon driver’s allocation requests. The genius of ballooning is that it allows the guest operating system to intelligently make the hard decision about which pages to be paged out without the hypervisor’s involvement."

"genius of ballooning" :)