Hadoop Attacks Destroy Data

Hadoop, CouchDB Users Latest Attack Targets

The attacks on databases just keep coming.

First came the MongoDB attacks. Then, as Evident.io’s John Martinez wrote last week in Elasticsearch Now In the Crosshairs – MongoDB Ransom Attackers Have New Targets, the Elasticsearch search and analytics engine came under assault. Most recently, poorly configured Hadoop and CouchDB databases have been hit by similar attacks.

This time, at least in the Hadoop attacks, the attackers aren’t attempting to extract a ransom from users; they are simply infiltrating the targets and deleting whatever data they can. If that’s not a wake-up call for maintaining a good security posture, I don’t know what would possibly do the job.

In a blog post, the Fidelis Threat Research Team pegged the number of exposed Hadoop installations at 8,000–10,000 HDFS installations worldwide. “A core issue is similar to MongoDB, namely the default configuration can allow ‘access without authentication.’ This means an attacker with basic proficiency in HDFS can start deleting files,” they wrote.
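The no-authentication default the researchers describe can be switched off in Hadoop’s configuration. Below is a minimal sketch of the relevant core-site.xml properties, assuming a cluster with Kerberos available; the property names are Hadoop’s own, and the surrounding deployment details (keytabs, KDC setup) are omitted:

```xml
<!-- core-site.xml: replace Hadoop's default "simple" (no-auth) mode -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>  <!-- default is "simple", i.e. no authentication -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>      <!-- enforce service-level authorization checks -->
</property>
```

Pairing this with firewall rules that keep NameNode ports off the public internet closes the door the attackers are walking through.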

It’s interesting that the Hadoop attackers destroyed data outright, unlike the earlier attacks, each of which involved a ransom note demanding payment. The CouchDB attacks, by contrast, followed that ransom pattern exactly.

When these attacks hit, they scale rapidly. For instance, according to accounts, the number of compromised MongoDB databases spiked from roughly 12,000 to more than 27,000 in a single day. And if you don’t want to get a message like the one that MongoDB users received, you need to continuously keep track of your configuration settings:

“Your database has been pwned because it is publically accessible at port 27017 with no authentication (wtf were you thinking?). Your data has been dumped (with data types preserved), and is easily restoreable [sic].

“To get your data back, email the supplied email after sending 0.15BTC to the supplied Bitcoin wallet, do this quickly as after 72 hours your data will be erased (if an email is not sent by then). We will get back to you within 2 days. All of your data will be restored to you upon payment.”
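The ransom note itself names the two misconfigurations: a database listening on a public interface (port 27017) with no authentication. A minimal mongod.conf sketch addressing both, using MongoDB’s YAML configuration format (adjust bindIp to your actual internal address):

```yaml
# mongod.conf -- avoid the exact misconfiguration the ransom note mocks
net:
  port: 27017
  bindIp: 127.0.0.1        # listen only on localhost, never 0.0.0.0
security:
  authorization: enabled   # require clients to authenticate as a database user
```

With authorization enabled, you must also create at least one administrative user, or every connection will be locked out of everything but the localhost exception.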

Access policies often play a big role in attacks of this nature. Regarding the attacks on AWS Elasticsearch users, Martinez noted the following in his post about securing resource-based policies:

AWS recommends that you don’t use an open access policy on your Elasticsearch domain, except for when testing with non-production data. We would go as far as to say that testing with an open access policy shouldn’t ever be practiced period. Our experience shows that development and pre-production environments are ripe for exploitation due to the lower security hygiene and less/lack of monitoring placed on them. What’s even worse is we sometimes think it’s easy to test in pre-production with real customer data (please DO NOT do that! or if you must, always make sure you anonymize).
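As a concrete contrast to the open access policy Martinez warns against, here is a sketch of a locked-down resource-based policy for an AWS Elasticsearch domain that grants access only to a specific IAM role. The account ID, role name, region, and domain name are all placeholders you would replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/es-app-role" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }
  ]
}
```

An open policy would instead set "Principal" to "*" with no conditions; that is the configuration that put these domains in the crosshairs.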

If you have been fortunate enough not to have been victimized by any of these attacks, that’s great news. But now is a good time to check the security settings of your servers, workloads and cloud systems, because attacks like this on cloud-based systems are quickly becoming the new normal.
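One cheap way to start that check is to confirm, from an outside host, that none of the default database ports involved in these attacks answer publicly. A minimal sketch in Python (port numbers are the well-known defaults; the function and dictionary names are mine, not from any of the posts quoted above):

```python
import socket

# Default ports for the databases targeted in this wave of attacks.
DB_PORTS = {
    "MongoDB": 27017,
    "Elasticsearch": 9200,
    "CouchDB": 5984,
    "HDFS NameNode": 8020,
}

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def audit(host):
    """Check each default database port on host; True means reachable."""
    return {name: port_is_open(host, port) for name, port in DB_PORTS.items()}
```

Run audit() against your servers’ public addresses from a machine outside your network; any True for a database you didn’t intend to expose is exactly the kind of opening these attackers scan for.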

About George Hulme

George V. Hulme is an internationally recognized information security and business technology writer. For more than 20 years Hulme has written about business, technology, and IT security topics. For five years, Hulme served as senior editor at InformationWeek magazine, where he covered the IT security and homeland security beats. His work has appeared in CSOOnline, ComputerWorld, Network Computing, Government Computer News, Network World, San Francisco Examiner, TechWeb, VARBusiness, and dozens of other technology publications.

