Elasticsearch indexes get deleted frequently [closed]

I'm running Elasticsearch on Google Cloud for a personal project and use it as the search index for my application. For the last 3 days, indexes have been getting deleted mysteriously. I have no clue why; I looked through all my code for any delete-index calls and also checked the logs, but I still can't figure it out. Any thoughts? How can I debug this?

[2020-07-24T00:00:27,451][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [users_index_2/veGpdqbNQA2ZcnrrlGIA_Q] deleting index
[2020-07-24T00:00:27,766][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [blobs_index_2/SiikUAE7Rb6gS3_UeIwElQ] deleting index
[2020-07-24T00:00:28,179][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [gk01juo8o3-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:28,776][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [28ds9nyf8x-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:29,328][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [hw2ktibxpl-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:29,929][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [va0pzk1hfi-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:30,461][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [ruwhw3jcx0-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:30,973][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [wx4gylb2jv-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:31,481][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [hbbmszdteo-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:31,993][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [1gi0x5277l-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:32,494][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [sotglodbi9-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:33,012][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [khvzsxctwr-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:33,550][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [hgrhythm3g-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:34,174][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [ejyucop7ag-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:34,715][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [n1bgkmqp8r-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:35,241][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [vsw49c4kpp-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:35,747][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [qrb5x89icr-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:36,261][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [pv8n84itx6-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:36,856][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [wnnwmylxvs-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:37,392][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [g5tw6w2tqb-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:37,889][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [u7tobv31o2-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:38,474][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [ufvizrnmez-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:38,946][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [0i9wszne7l-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T01:30:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] triggering scheduled [ML] maintenance tasks
[2020-07-24T01:30:00,002][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Deleting expired data
[2020-07-24T01:30:00,010][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Completed deletion of expired ML data
[2020-07-24T01:30:00,011][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] Successfully completed [ML] maintenance tasks
[2020-07-24T01:30:00,039][INFO ][o.e.x.s.SnapshotRetentionTask] [node-1] starting SLM retention snapshot cleanup task
[2020-07-24T01:37:43,817][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []

Solution 1:

It looks like you are getting hit by a meow attack.

Hundreds of unsecured databases exposed on the public web are the target of an automated 'meow' attack that destroys data without any explanation.

The activity started recently by hitting Elasticsearch and MongoDB instances without leaving any explanation, or even a ransom note. Attacks then expanded to other database types and to file systems open on the web.

From this tweet, you can see that you are experiencing the same behavior seen in these attacks:

From the MongoDB logs you can see it drops the databases first, then creates new ones named $randomstring-meow.

Please ensure that you are not using a default username and password for your DB and that your configuration does not expose it to the public internet. If you need to give access to your DB, use an API with key-based auth and only the bare minimum privileges allowed.
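As a quick sanity check (a minimal sketch; the host and credentials below are placeholders for your own values), you can verify whether the node answers unauthenticated requests from outside your network:

```
# If this returns cluster info without any credentials, the node is wide open
# to the internet (replace YOUR_EXTERNAL_IP with your VM's public address).
curl -s http://YOUR_EXTERNAL_IP:9200/

# Once authentication is enabled, the same call should return a 401, and reads
# should require credentials (or an API key):
curl -s -u elastic:YOUR_PASSWORD "http://localhost:9200/_cat/indices?v"
```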

Edit #1: You can observe the attacked databases here on shodan.io.

Edit #2: Some more advice for protecting against this (and other) attacks (from Hacker News user contrarianmop):

Also, as a rule of thumb, never expose anything but ports 80 and 443 if hosting a web app.

If you must expose services other than HTTP/S, be sure not to leak their versions, secure them properly, and keep them up to date. The user running such services should be a non-privileged user, the daemon should be chrooted, and the OS should have appropriate process and filesystem permissions in place.
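Since the question mentions Google Cloud, a rough sketch of that rule of thumb using GCP firewall rules could look like the following (the rule names and source range are placeholders, not the exact commands for your project):

```
# Allow only HTTP/S in from the internet.
gcloud compute firewall-rules create allow-web \
  --direction=INGRESS --allow=tcp:80,tcp:443 --source-ranges=0.0.0.0/0

# Keep Elasticsearch's ports (9200 HTTP, 9300 transport) reachable only from
# inside the VPC, never from the public internet.
gcloud compute firewall-rules create allow-es-internal \
  --direction=INGRESS --allow=tcp:9200,tcp:9300 --source-ranges=10.128.0.0/9
```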

Edit #3: An interesting theory as to why the attacker used the term "meow" is that cats like to drop (or knock) items off tables.

Solution 2:

As others here have answered, your cluster has been hit by a meow attack.

Since 6.8, security has been available for free in the default distribution of Elasticsearch, so the ability to protect against meow costs nothing. Have a look at this blog post to see how to prevent an Elasticsearch server breach.
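A minimal sketch of turning that on, assuming a standard deb/rpm install of 6.8+ (paths and the service name may differ on your setup):

```
# Enable the free security features, then restart the node.
echo "xpack.security.enabled: true" | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch

# Generate passwords for the built-in users (elastic, kibana, etc.).
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
```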

Update: Elastic also released a new blog post covering this specific Meow attack.