Server crashes Sundays at 6 a.m. - out of memory
I have a weird problem: Every Sunday at 6 am my LAMP server crashes.
Looking at the logs I see about 500 apache2 processes at that time (this is a test server without any load - especially not at 6 am).
The syslog states the following:
May 19 06:00:11 myserver kernel: [313742.304291] Out of memory: Kill process 912 (mysqld) score 31 or sacrifice child
May 19 06:00:11 myserver kernel: [313742.304311] Killed process 912 (mysqld) total-vm:816528kB, anon-rss:6240kB, file-rss:0kB
It seems that the server is running out of memory thus killing some processes.
What could be the problem? Does it possibly have something to do with the weekly crontab?
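To check the weekly-cron suspicion, you can list what cron.weekly would run and when it fires (paths below assume the stock Debian/Ubuntu cron layout - an assumption, adjust for your distribution):

```shell
# List the weekly jobs, then show the line(s) that schedule them.
# Whether the firing time lines up with 06:00 depends on your
# crontab/anacron settings, which is exactly what the grep shows.
echo "weekly jobs:"
ls /etc/cron.weekly/ 2>/dev/null
echo "schedule:"
grep -h cron.weekly /etc/crontab /etc/anacrontab 2>/dev/null || true
```

If one of the listed jobs is memory-hungry and the schedule lands on Sunday morning, that would match the crash window.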
Here are some more lines from the syslog:
May 19 06:00:11 myserver kernel: [313742.290517] oom_kill_process: 3 callbacks suppressed
May 19 06:00:11 myserver kernel: [313742.290526] apache2 invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
May 19 06:00:11 myserver kernel: [313742.290534] apache2 cpuset=/ mems_allowed=0
May 19 06:00:11 myserver kernel: [313742.290541] Pid: 1884, comm: apache2 Not tainted 3.2.0-29-generic #46-Ubuntu
May 19 06:00:11 myserver kernel: [313742.290546] Call Trace:
May 19 06:00:11 myserver kernel: [313742.290561] [<ffffffff810bf9ad>] ? cpuset_print_task_mems_allowed+0x9d/0xb0
May 19 06:00:11 myserver kernel: [313742.290570] [<ffffffff8111a7e1>] dump_header+0x91/0xe0
May 19 06:00:11 myserver kernel: [313742.290577] [<ffffffff8111ab65>] oom_kill_process+0x85/0xb0
May 19 06:00:11 myserver kernel: [313742.290584] [<ffffffff8111af0a>] out_of_memory+0xfa/0x220
May 19 06:00:11 myserver kernel: [313742.290592] [<ffffffff8112098f>] __alloc_pages_nodemask+0x80f/0x820
May 19 06:00:11 myserver kernel: [313742.290603] [<ffffffff8115937a>] alloc_pages_vma+0x9a/0x150
May 19 06:00:11 myserver kernel: [313742.290611] [<ffffffff811399cc>] do_anonymous_page.isra.38+0x7c/0x2f0
May 19 06:00:11 myserver kernel: [313742.290618] [<ffffffff8113d3f1>] handle_pte_fault+0x1e1/0x200
May 19 06:00:11 myserver kernel: [313742.290625] [<ffffffff8113d7c8>] handle_mm_fault+0x1f8/0x350
May 19 06:00:11 myserver kernel: [313742.290634] [<ffffffff8165d3e0>] do_page_fault+0x150/0x520
May 19 06:00:11 myserver kernel: [313742.290642] [<ffffffff81177d1d>] ? vfs_read+0x10d/0x180
May 19 06:00:11 myserver kernel: [313742.290649] [<ffffffff8165a035>] page_fault+0x25/0x30
May 19 06:00:11 myserver kernel: [313742.290653] Mem-Info:
May 19 06:00:11 myserver kernel: [313742.290657] Node 0 DMA per-cpu:
May 19 06:00:11 myserver kernel: [313742.290663] CPU 0: hi: 0, btch: 1 usd: 0
May 19 06:00:11 myserver kernel: [313742.290666] Node 0 DMA32 per-cpu:
May 19 06:00:11 myserver kernel: [313742.290672] CPU 0: hi: 186, btch: 31 usd: 124
May 19 06:00:11 myserver kernel: [313742.290682] active_anon:73974 inactive_anon:73976 isolated_anon:0
May 19 06:00:11 myserver kernel: [313742.290684] active_file:305 inactive_file:3393 isolated_file:0
May 19 06:00:11 myserver kernel: [313742.290687] unevictable:0 dirty:11 writeback:4 unstable:0
May 19 06:00:11 myserver kernel: [313742.290689] free:12251 slab_reclaimable:2341 slab_unreclaimable:19263
May 19 06:00:11 myserver kernel: [313742.290692] mapped:1006 shmem:37 pagetables:59627 bounce:0
May 19 06:00:11 myserver kernel: [313742.290697] Node 0 DMA free:4652kB min:684kB low:852kB high:1024kB active_anon:4380kB inactive_anon:4380kB active_file:0kB inactive_file:36kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15656kB mlocked:0kB dirty:0kB writeback:8kB mapped:0kB shmem:0kB slab_reclaimable:200kB slab_unreclaimable:212kB kernel_stack:0kB pagetables:2024kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:12 all_unreclaimable? yes
May 19 06:00:11 myserver kernel: [313742.290720] lowmem_reserve[]: 0 991 991 991
May 19 06:00:11 myserver kernel: [313742.290728] Node 0 DMA32 free:44352kB min:44368kB low:55460kB high:66552kB active_anon:291516kB inactive_anon:291524kB active_file:1220kB inactive_file:13536kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1014992kB mlocked:0kB dirty:44kB writeback:8kB mapped:4024kB shmem:148kB slab_reclaimable:9164kB slab_unreclaimable:76840kB kernel_stack:5112kB pagetables:236484kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5925 all_unreclaimable? yes
May 19 06:00:11 myserver kernel: [313742.290752] lowmem_reserve[]: 0 0 0 0
May 19 06:00:11 myserver kernel: [313742.290759] Node 0 DMA: 11*4kB 18*8kB 39*16kB 46*32kB 3*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 4652kB
May 19 06:00:11 myserver kernel: [313742.290778] Node 0 DMA32: 54*4kB 119*8kB 121*16kB 79*32kB 49*64kB 26*128kB 12*256kB 7*512kB 3*1024kB 1*2048kB 5*4096kB = 44352kB
May 19 06:00:11 myserver kernel: [313742.290797] 10895 total pagecache pages
May 19 06:00:11 myserver kernel: [313742.290801] 7149 pages in swap cache
May 19 06:00:11 myserver kernel: [313742.290805] Swap cache stats: add 1460822, delete 1453673, find 653694/726620
May 19 06:00:11 myserver kernel: [313742.290809] Free swap = 0kB
May 19 06:00:11 myserver kernel: [313742.290812] Total swap = 2097084kB
May 19 06:00:11 myserver kernel: [313742.299856] 261856 pages RAM
May 19 06:00:11 myserver kernel: [313742.299860] 7335 pages reserved
May 19 06:00:11 myserver kernel: [313742.299863] 291314 pages shared
May 19 06:00:11 myserver kernel: [313742.299866] 239474 pages non-shared
Solution 1:
The problem seems to be caused by a badly tuned Apache server. You should never let Apache spawn more processes than your memory and CPU can handle.
This reference is worth a look: http://drupal.org/node/215516
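A rough way to size this (a sketch, not a tuned value) is to measure the average resident size of an apache2 child and divide the memory you can spare by it:

```shell
# Average RSS of the apache2 children currently running; divide your
# spare RAM (total minus mysqld, the OS, etc.) by this number to pick
# a safe process limit (MaxClients on Apache 2.2).
ps -C apache2 -o rss= | awk '
    {n++; s+=$1}
    END {
        if (n) printf "avg %d kB over %d procs\n", s/n, n
        else   print  "no apache2 running"
    }'
```

On a box with roughly 1 GB of RAM that also has to keep mysqld alive, even a modest 25 MB per child caps you at a few dozen workers - nowhere near the ~500 processes seen in the logs.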
Solution 2:
One of my vServers had exactly the same problem.
It was hard to determine the exact reason for the out-of-memory crash, but the timing pointed to cron.weekly.
After comparing the contents of cron.weekly across different servers, I found that the problem server had one script the working servers did not have:
apt-xapian-index
It seems that these "maintenance tools for a Xapian index of Debian packages" have caused a lot of problems on smaller servers and computers where they are in use. After some googling, I decided to remove the script from cron.weekly, and now the problem seems to be gone.
I suggest removing that and any other heavyweight scripts from cron.weekly to see if that helps with your problem :)
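You can first check whether the suspect job is even present and active on your server (path from the answer above; assumes the Debian/Ubuntu cron layout):

```shell
# cron's run-parts only executes files that are executable, so the -x
# test tells you whether this job would actually run on Sunday morning.
if [ -x /etc/cron.weekly/apt-xapian-index ]; then
    echo "apt-xapian-index runs weekly"
else
    echo "apt-xapian-index not active"
fi
```

If it is active, either remove the package (`apt-get remove apt-xapian-index`) or strip the execute bit from the cron script to keep the package but skip the weekly index rebuild.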
Solution 3:
The problem was that the Apache configuration contained MinSpareServers 500, which kept the server load well above 10 the whole time on a single-core test server.
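For comparison, a prefork section with sane values for a small single-core test box might look like the following (illustrative numbers, not tuned for any particular workload; on Apache 2.2 the limit directive is MaxClients, renamed MaxRequestWorkers in 2.4):

```apache
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           20
    MaxRequestsPerChild 500
</IfModule>
```

The key point is that MinSpareServers should stay far below the number of processes that fit in RAM, not anywhere near 500.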