Unable to change vm.max_map_count for elasticsearch
Prehistory
I have elasticsearch and SugarCRM7 running on CentOS 6.5. Every day I face the same problem: a Java OutOfMemory error. It happens because of a small vm.max_map_count value: only 65530, when 262144 is recommended.
Problem
The problem is that vm.max_map_count seems unchangeable:
-
Changing under root
sudo sysctl -w vm.max_map_count=262144
returns
error: permission denied on key 'vm.max_map_count'
While
ps aux | grep java
Returns only the grep process
-
Changing on elasticsearch startup
sudo service elasticsearch start
Returns error, too
error: permission denied on key 'vm.max_map_count'
Starting elasticsearch: [ OK ]
-
Manual changes via file (dirty-dirty hack):
sudo vi /proc/sys/vm/max_map_count
Does not work either:
"/proc/sys/vm/max_map_count" [readonly] 1L, 6C
-- INSERT -- W10: Warning: Changing a readonly file
E45: 'readonly' option is set (add ! to override)
"/proc/sys/vm/max_map_count" E212: Can't open file for writing
While
ls -la /proc/sys/vm/ | grep max_map_count
Returns
-rw-r--r-- 1 root root 0 Apr 10 09:36 max_map_count
(But I guess this can be normal for Linux, since we are talking about the /proc directory.)
So how can I change this variable's value? Restarting elasticsearch every night is not a good idea... Or at least, maybe someone knows why this error happens?
Solution 1:
You are almost there. It does not matter whether it is a virtual machine or a physical machine; those settings are normally changeable.
I'll show three methods.
Some pre-information:
1) It's better to execute as root, if possible.
2) /proc on Unix is not a real filesystem; it's an in-memory kernel filesystem that only appears to be a normal disk filesystem. You can call it a 'fake filesystem' or 'special filesystem'. You cannot edit these pseudo-files with vi or any other editor, because they are not real files; they just look like files. I was stuck on the same problem years ago.
Changing their values is simple, though; it just requires a different kind of 'mechanics' to edit them.
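You can see the pseudo-file nature directly: stat reports a size of 0 bytes, yet reading the file returns the live value. A minimal read-only check (assuming a Linux /proc):

```shell
# /proc entries report size 0, yet reading them yields data:
stat -c '%s bytes' /proc/sys/vm/max_map_count   # prints "0 bytes"
cat /proc/sys/vm/max_map_count                  # prints the live value
```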
First, become root. (sudo works on some distros but not on others, as you saw; this first method is universal and works on any Linux, macOS, or other Unix-based system. Hopefully you have access to the root password.)
Proceed at prompt:
$ su root
Enter root password.
Now you are root, let's check the current value of: /proc/sys/vm/max_map_count
$ cat /proc/sys/vm/max_map_count
65536
Let's change it:
echo 262144 > /proc/sys/vm/max_map_count
Let's verify:
cat /proc/sys/vm/max_map_count
262144
It's done, and the change is already applied and functional. Changing the value of any pseudo-file under /proc takes effect instantly, but it does not persist across a reboot. You can play with values and measure the performance changes in elasticsearch or in any other application or system metric. Go tune your system, writing the values down on paper, and keep the best ones. If you make a mistake, reboot: everything will be back to the original values, and you can start again until every value is optimal. There are a lot of tunable disk and memory parameters under /proc, and they can bring a huge performance gain if you tune them well (and have the time for it). You are on the right track.
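Before experimenting, it can help to snapshot the current values so you have a baseline to compare against after a reboot. A minimal sketch (the output path and the list of tunables are just examples):

```shell
# Record the current values of the tunables you plan to touch.
for f in /proc/sys/vm/max_map_count /proc/sys/vm/swappiness; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done > /tmp/vm-tunables.before

cat /tmp/vm-tunables.before
```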
When satisfied, let's make them permanent:
First method:
using /etc/rc.local
vi /etc/rc.local
put all parameters inside rc.local file, example:
echo 220000000 > /proc/sys/vm/dirty_background_bytes
echo 320000000 > /proc/sys/vm/dirty_bytes
echo 0 > /proc/sys/vm/dirty_background_ratio
echo 0 > /proc/sys/vm/dirty_ratio
echo 500 > /proc/sys/vm/dirty_writeback_centisecs
echo 4500 > /proc/sys/vm/dirty_expire_centisecs
echo 1 > /proc/sys/net/ipv4/tcp_rfc1337
echo 10 > /proc/sys/vm/swappiness
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 120 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 0 > /proc/sys/vm/zone_reclaim_mode
echo deadline > /sys/block/sda/queue/scheduler
echo 8 > /sys/class/block/sda/queue/read_ahead_kb
echo 1048575 > /proc/sys/vm/max_map_count
Quit vi, saving the file.
Those parameters will be set on every reboot, AFTER all init services have started, just before the login prompt appears.
(/etc/rc.local is executed after all Linux startup services, so this method may not work if elasticsearch starts as a service before it runs; but it can be useful in another setup if you need it in the future. Alternatively, put the same lines inside your elasticsearch init script: init scripts run as root, so the syntax above works there unchanged.)
You can also copy these lines and paste them into a root shell now for instant changes. The parameters above are valid, tuned, and running on my Apache Cassandra server; if you wish, use them as a starting point for tuning yours.
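For the init-script variant, the echo line would sit near the top of the start function, before the JVM launches. A sketch of what that could look like (the path and function layout are illustrative of a typical CentOS 6 SysV init script, not your actual file):

```shell
# Hypothetical excerpt of /etc/init.d/elasticsearch
start() {
    # Init scripts run as root, so this write is allowed.
    echo 262144 > /proc/sys/vm/max_map_count
    # ... original startup logic continues here ...
}
```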
Second method to make them permanent:
With this method the parameters are set BEFORE any startup service on Linux.
Edit /etc/sysctl.conf and put the parameters inside:
vm.max_map_count=1048575
vm.zone_reclaim_mode=0
vm.dirty_background_bytes=220000000
vm.dirty_background_ratio=0
vm.dirty_bytes=320000000
vm.dirty_ratio=0
vm.swappiness=10
Keep going with the others, save /etc/sysctl.conf, and reboot your server to apply the changes, or execute sysctl -p to apply them without a reboot. They will persist across reboots.
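To double-check both the live value and the boot-time configuration (both reads work without root), something like this can be run; it assumes the key was added to /etc/sysctl.conf as shown above:

```shell
# Live value, as the kernel sees it right now:
cat /proc/sys/vm/max_map_count

# Value that sysctl.conf will (re)apply at boot; warns if the key is absent:
grep '^vm.max_map_count' /etc/sysctl.conf 2>/dev/null \
    || echo "vm.max_map_count not set in /etc/sysctl.conf"
```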
The two methods above are the most common. There is a third one that may work for you, using sudo, almost like you were doing:
instead of:
sudo sysctl -w vm.max_map_count=262144
try:
echo 262144 | sudo tee /proc/sys/vm/max_map_count
It works on Ubuntu.
Verify:
user@naos:~$ cat /proc/sys/vm/max_map_count
262144
I hope I have helped in some way, at least by giving three different options to deal with the problem, since your question is almost a year old ;)
Regards, Rafael Prado
Solution 2:
I think your "virtual machine" is actually an OpenVZ container (you can verify this by running virt-what).
In that case, you can't change vm.max_map_count with sysctl, nor many other kernel parameters: the values are fixed by the host.
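If virt-what is not installed, a quick read-only check works too: OpenVZ guests expose /proc/user_beancounters, which other systems do not have. A minimal sketch:

```shell
# OpenVZ guests have a /proc/user_beancounters file; other systems do not.
if [ -e /proc/user_beancounters ]; then
    echo "OpenVZ container: kernel sysctls are fixed by the host"
else
    echo "not an OpenVZ container"
fi
```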
This is a well-known issue with elasticsearch (issue #4978). It's not just Elasticsearch: Java apps are well known to perform poorly on various OpenVZ providers, mainly because the hosts are often poorly tuned and there's nothing you can do about it. One commenter on that issue echoed what would be my recommendation exactly:
joshuajonah commented on Oct 20, 2015
This is insane. I guess I'm going to change over to a KVM VPS.