Elasticsearch indexes get deleted frequently

Elasticsearch

Elasticsearch Problem Overview


I'm running an Elasticsearch instance for a personal project on Google Cloud and use it as the search index for my application. For the last 3 days, indexes have been getting deleted mysteriously. I have no clue why; I looked through all my code for any delete-index calls and also looked at the logs, but still can't figure it out. Any thoughts? How can I debug this?

[2020-07-24T00:00:27,451][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [users_index_2/veGpdqbNQA2ZcnrrlGIA_Q] deleting index
[2020-07-24T00:00:27,766][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [blobs_index_2/SiikUAE7Rb6gS3_UeIwElQ] deleting index
[2020-07-24T00:00:28,179][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [gk01juo8o3-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:28,776][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [28ds9nyf8x-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:29,328][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [hw2ktibxpl-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:29,929][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [va0pzk1hfi-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:30,461][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [ruwhw3jcx0-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:30,973][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [wx4gylb2jv-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:31,481][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [hbbmszdteo-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:31,993][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [1gi0x5277l-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:32,494][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [sotglodbi9-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:33,012][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [khvzsxctwr-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:33,550][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [hgrhythm3g-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:34,174][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [ejyucop7ag-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:34,715][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [n1bgkmqp8r-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:35,241][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [vsw49c4kpp-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:35,747][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [qrb5x89icr-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:36,261][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [pv8n84itx6-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:36,856][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [wnnwmylxvs-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:37,392][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [g5tw6w2tqb-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:37,889][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [u7tobv31o2-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:38,474][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [ufvizrnmez-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T00:00:38,946][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [0i9wszne7l-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2020-07-24T01:30:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] triggering scheduled [ML] maintenance tasks
[2020-07-24T01:30:00,002][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Deleting expired data
[2020-07-24T01:30:00,010][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Completed deletion of expired ML data
[2020-07-24T01:30:00,011][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] Successfully completed [ML] maintenance tasks
[2020-07-24T01:30:00,039][INFO ][o.e.x.s.SnapshotRetentionTask] [node-1] starting SLM retention snapshot cleanup task
[2020-07-24T01:37:43,817][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []

Elasticsearch Solutions


Solution 1 - Elasticsearch

It looks like you are getting hit by a "meow" attack.

> Hundreds of unsecured databases exposed on the public web are the target of an automated 'meow' attack that destroys data without any explanation.
>
> The activity started recently by hitting Elasticsearch and MongoDB instances without leaving any explanation, or even a ransom note. Attacks then expanded to other database types and to file systems open on the web.

From this tweet, you can see that you are experiencing the same behavior seen in these attacks:

> From the logs in MongoDB you can see it drops databases first, then creates new ones with $randomstring-meow

Please ensure that you are not using a default username and password for your DB, and that your configuration is set up to avoid public-facing interactions. If you need to give access to your DB, use an API with key-based auth, and grant only the bare minimum of capabilities.
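A quick way to confirm whether the cluster is publicly reachable is to hit it anonymously from a machine outside your network. This is a minimal sketch, assuming a hypothetical public IP and the default port 9200; if the request succeeds without credentials, anyone on the internet can read (and delete) your indices.

```python
import requests

# Hypothetical public address of the Elasticsearch instance -- replace with your own.
ES_URL = "http://203.0.113.10:9200"

try:
    # Deliberately send no credentials: we are testing anonymous access.
    resp = requests.get(ES_URL, timeout=5)
except requests.exceptions.RequestException:
    print("Not reachable from the public internet (good).")
else:
    if resp.status_code == 200:
        # An open cluster happily returns its name and version to anyone who asks.
        print("EXPOSED: cluster answered without auth:", resp.json().get("cluster_name"))
    elif resp.status_code == 401:
        print("Reachable, but authentication is required (better, though still consider a firewall).")
    else:
        print("Reachable, unexpected status:", resp.status_code)
```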

Edit #1: You can observe the attacked databases here on shodan.io.

Edit #2: Some more advice for protecting from this (and other) attacks (from HackerNews user contrarianmop):

> Also as a rule of thumb never ever expose anything but port 80 and 443 if hosting a webapp.
>
> If you must expose services other than http/s then be sure to not leak its version, have it secured properly and always up to date. The user running such services should also be a non privileged user, the daemon chrooted, and the OS should have appropriate process and filesystem permissions in place.

Edit #3: An interesting theory as to why the attacker used the term "meow" is that cats like to drop (or knock) items off tables.

Solution 2 - Elasticsearch

As others here have answered, your cluster has been hit by the meow attack.

Since 6.8, security is available for free within the default distribution of Elasticsearch, so the ability to protect against meow is free. Have a look at this blog post to see how to prevent an Elasticsearch server breach.
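As a rough illustration of what the free security features buy you: once you enable security and set a password for the built-in elastic user (as the blog post describes), anonymous requests are rejected and clients must send credentials. A minimal sketch, assuming a cluster on localhost:9200 with a self-signed certificate and placeholder credentials:

```python
import requests

ES_URL = "https://localhost:9200"          # assumed endpoint; adjust host/scheme to your setup
USER, PASSWORD = "elastic", "change-me"    # placeholder credentials, not real ones

# Without credentials a secured cluster answers 401 Unauthorized.
anon = requests.get(ES_URL, verify=False, timeout=5)
print("anonymous request:", anon.status_code)

# The same request with HTTP basic auth succeeds.
authed = requests.get(ES_URL, auth=(USER, PASSWORD), verify=False, timeout=5)
print("authenticated request:", authed.status_code, authed.json().get("cluster_name"))

# Note: verify=False is only here because a fresh cluster often ships a self-signed
# certificate; point `verify` at your CA bundle in any real deployment.
```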

Update: Elastic also released a new blog post covering this specific Meow attack.

Solution 3 - Elasticsearch

You've been meowed:

https://www.bleepingcomputer.com/news/security/new-meow-attack-has-deleted-almost-4-000-unsecured-databases/

Double-check all configurations (firewall, Elasticsearch, etc.) to ensure the instance is not publicly accessible and that access to Elasticsearch is secured (it clearly wasn't beforehand).

Solution 4 - Elasticsearch

Note: I just got your question featured on Hacker News; it's about to get a lot of visits and advice.

Your Elasticsearch database is being deleted by meow, a new operation that scans the internet for open databases and deletes them. See New "meow" attack has deleted almost 4,000 unsecured databases

There are two problems here:

  1. The database is fully exposed to the internet.

  2. The database is not protected by any form of authentication or access control.

What you need to do:

  1. Disconnect the database from the internet. If it's running on Google Cloud, the instance shouldn't even have a public address; databases should sit in an internal network (VPC).

  2. Restrict access to the database.

That last bit is unfortunately easier said than done. ElasticSearch doesn't support any form of access control in the free edition, assuming that's what you have.

What you can do to restrict access is to firewall the instance tightly. This is easy enough to do with the firewall capabilities in Google/AWS/Azure. ElasticSearch typically listens on port 9200 for clients and 9300 or 9350 for replication. The only clients that typically need access are Kibana to view logs and Logstash/Fluentd to ingest logs, so that's only a couple of IPs to allow traffic from.
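One way to verify the firewall rules actually took effect is to probe those ports from a host that should not have access. A small sketch using only the standard library, with a hypothetical public IP; once the firewall is in place, both checks should report the port as closed or filtered.

```python
import socket

HOST = "203.0.113.10"                       # hypothetical public IP of the instance
PORTS = {9200: "HTTP/REST", 9300: "transport/replication"}

for port, role in PORTS.items():
    try:
        # Attempt a plain TCP connection; 3 seconds is plenty for a reachability check.
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{port} ({role}): OPEN -- still reachable, tighten the firewall")
    except OSError:
        print(f"{port} ({role}): closed or filtered (good)")
```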

If you are working in an enterprise, all production databases must be protected by authentication to satisfy a variety of regulations, so you have to pay up for an enterprise license and configure password or certificate authentication.

Solution 5 - Elasticsearch

Update: As mentioned in various answers and forums, the indices were deleted by the meow attack. Please follow Elasticsearch's official blog on how to secure your ES cluster from these attacks for free.

Old answer:

Check whether you have accidentally configured an index lifecycle management (ILM) policy, and also whether your application is creating date-based indices.
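To rule that out, you can ask the cluster directly which ILM policies exist and whether any of them has a delete phase. A rough sketch against the standard `_ilm/policy` endpoint, assuming an unsecured cluster on localhost:9200 (pass `auth=(user, password)` to the request if you have enabled security):

```python
import requests

ES_URL = "http://localhost:9200"   # assumed endpoint; adjust to your cluster

# List every ILM policy defined on the cluster.
policies = requests.get(f"{ES_URL}/_ilm/policy", timeout=5).json()

for name, body in policies.items():
    phases = body.get("policy", {}).get("phases", {})
    if "delete" in phases:
        # A policy with a delete phase removes indices once they pass min_age.
        print(f"policy {name!r} deletes indices after {phases['delete'].get('min_age', '0ms')}")
```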

I just had a look at your logs, and they clearly indicate that your ES cluster is deleting the indices. See the lines below from the log, which show this:

> [2020-07-24T00:00:27,451][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [users_index_2/veGpdqbNQA2ZcnrrlGIA_Q] deleting index
> [2020-07-24T00:00:27,766][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [blobs_index_2/SiikUAE7Rb6gS3_UeIwElQ] deleting index

Solution 6 - Elasticsearch

If your ES instance was running the free edition on a version prior to 6.8, it likely got hit by a "meow" attack, as the free edition didn't support any kind of access control before 6.8.

If it wasn't running on the free edition, or was locked behind a VPC of some sort, or wasn't exposed via 80 or 443, and it still dropped, then there are bigger issues.


Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | Pramod Shashidhara | View Question on Stackoverflow |
| Solution 1 - Elasticsearch | Shane Fontaine | View Answer on Stackoverflow |
| Solution 2 - Elasticsearch | dadoonet | View Answer on Stackoverflow |
| Solution 3 - Elasticsearch | dijksterhuis | View Answer on Stackoverflow |
| Solution 4 - Elasticsearch | user5994461 | View Answer on Stackoverflow |
| Solution 5 - Elasticsearch | user11935734 | View Answer on Stackoverflow |
| Solution 6 - Elasticsearch | Josh Brody | View Answer on Stackoverflow |