Press "Enter" to skip to content

The Internet Is Scary

If you’re reading this, there’s a chance you’ve been to my blog before. I only ever posted one article: a write-up on a DIY whole-home power monitor I made. The reason it isn’t here anymore is that I made a mistake.

You see, I host this site in the cloud. I didn’t want it running inside my home network, since I didn’t want to run into potential bandwidth issues with the OBVIOUSLY huge amount of traffic the site would see. Also security reasons… yeah. Anyway, the site runs inside a Docker container, which relies on a second container running MariaDB, and access to the site goes through a third container running nginx.
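
The moving parts look roughly like this. It’s a sketch, not my exact commands; image tags, names, and ports are illustrative, but the two details that matter later are the published database port and the empty root password.

```bash
# Rough sketch of the setup (names, image tags, and ports are illustrative).
docker network create blog-net

# MariaDB for the WordPress database. Publishing 3306 and allowing an empty
# root password are the two details that come back to haunt me below.
docker run -d --name blog-db --network blog-net \
  -e MYSQL_DATABASE=wordpress \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
  -p 3306:3306 \
  mariadb:10

# WordPress itself, pointed at the database container by name
# (referencing by name is what stops working later in this story).
docker run -d --name blog-wp --network blog-net \
  -e WORDPRESS_DB_HOST=blog-db \
  wordpress

# nginx in front (proxy config omitted) -- the only container that actually
# needs to be reachable from the outside world.
docker run -d --name blog-proxy --network blog-net \
  -p 80:80 -p 443:443 \
  nginx
```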

When setting up the virtual machine to host all of this, I decided to use Ubuntu’s UFW (Uncomplicated Firewall) to manage which ports could and couldn’t communicate with the outside world. The website only needs ports 80 and 443. However, I quickly noticed that even with a default DENY rule, I could still access all of the ports exposed by the containers. After some research I learned that, by default, UFW policies don’t affect Docker containers, because Docker writes its own iptables rules for published ports and those bypass the chains UFW manages. There are a few ways to deal with this, and the one I settled on was to just set the “iptables” option in Docker’s /etc/docker/daemon.json to false. This has the side effect of disabling some of the automatic networking that the containers can do, notably the part that lets them reference each other by hostname. I just used the Docker-assigned IP addresses instead of hostnames and moved on. Everything was now working the way I wanted it to.
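
In practice, that combination boils down to something like the following. Treat it as an illustration; /etc/docker/daemon.json is Docker’s standard config location, and if you already have one you’d merge the setting in rather than overwrite the file.

```bash
# Only web traffic gets in through UFW...
sudo ufw default deny incoming
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# ...and tell Docker to stop managing iptables itself, since its own rules
# are what let published container ports sail right past UFW.
echo '{ "iptables": false }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```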

Fast forward to last week, over a year later I think. I connected to the VM to do some long-overdue maintenance and, while I was at it, decided to set up Portainer to make managing the containers a little more convenient. I figured it would be nice to set up the wordpress+mariadb container duo as a Stack, which is Portainer’s docker-compose implementation. It worked well enough to start the containers, but for some reason the wordpress instance couldn’t see the database. It turned out the containers’ IP addresses had changed when I moved them. Since I didn’t want that to happen again, I tried using the hostnames (I had forgotten about the iptables setting), but that wasn’t working either. I finally found the iptables setting, but I still didn’t remember that I had disabled it for a VERY important reason. I thought something along the lines of “Why is that disabled? Weird.”
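
For anyone who hasn’t used Stacks, the compose file for this pair would look roughly like the sketch below (service names, tags, and the published port are my illustration, and the nginx proxy lived outside the stack). The point is that the wordpress service refers to the database by its service name instead of a hard-coded IP.

```bash
# Written out as a file for reference; in Portainer you paste the same YAML
# into the stack editor.
cat > blog-stack.yml <<'EOF'
version: "3"
services:
  db:
    image: mariadb:10
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"   # root with no password -- more on that below
    ports:
      - "3306:3306"                       # still published from the original setup
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db               # service name instead of a Docker-assigned IP
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data:
EOF
```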

Re-enabling the iptables setting obviously got the containers communicating with each other, so I patted myself on the back for a job well done and moved on to other things. This was a mistake. Docker was once again writing its own firewall rules, bypassing UFW entirely, so what I had actually done was open up every port published by my containers to the entire internet.
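
In hindsight, a quick check right after flipping that setting back on would have shown the problem immediately. Something along these lines (the 0.0.0.0 binding on 3306 in the output is the tell):

```bash
# What is Docker actually publishing?
docker ps --format 'table {{.Names}}\t{{.Ports}}'
#   blog-db   0.0.0.0:3306->3306/tcp   <-- reachable from anywhere

# Docker puts its ACCEPT rules in its own chain, which UFW never touches.
sudo iptables -L DOCKER -n -v

# Or simply port-scan the VM from outside the network.
nmap -Pn <public-ip-of-the-vm>
```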

The next day, when I attempted to go to this site, I was greeted with another database connection error. “Great, that setting hadn’t actually fixed it,” I thought. This time it was a bit different, though: it said it was able to connect to the host with the provided credentials, but couldn’t find the database. Uh-oh…

I connected to the VM host and navigated to the MariaDB data folder. Inside I found something that no IT admin ever wants to find… a “recover your files” readme.

Looking through the logs, it seems that in less than 2 hours from the time I “fixed” the communication problem with my containers the night before, thereby opening up the ports to the internet, someone (or someTHING, as I suspect) had connected to the database, taken the data, and wiped it, leaving a ransom note behind. To add insult to injury, on top of my having exposed the standard MariaDB port (3306) to the internet, the MariaDB instance itself still had the default credentials set: ROOT, with no password… *facepalm*.
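
For anyone running a similar setup, the fixes are cheap: give root an actual password and stop publishing 3306 altogether, since WordPress reaches the database over Docker’s internal network and nothing on the internet ever needs to. A sketch, using the illustrative names from above:

```bash
# Give root a real password on the running instance...
docker exec -it blog-db \
  mysql -uroot -e "ALTER USER 'root'@'%' IDENTIFIED BY 'a-long-random-password';"

# ...and in the stack, set the password and drop the published port entirely:
#   db:
#     environment:
#       MYSQL_ROOT_PASSWORD: a-long-random-password
#     # no "ports:" entry -- the wordpress service still reaches db:3306 internally
#   wordpress:
#     environment:
#       WORDPRESS_DB_PASSWORD: a-long-random-password
```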

I’m still not sure why I didn’t just use the cloud hosting provider’s included firewall to manage exposed ports instead of what I ended up doing, since it would have been easier and more secure to begin with. Oh well, hindsight and all that. Speaking of… I also wasn’t making backups of the site. *double facepalm*

I learned a lesson I shouldn’t have needed to learn, given how long I’ve been doing IT work. ALWAYS check to make sure firewall policies are behaving the way you expect. ALWAYS make backups of important data. ALWAYS change default credentials.
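
On the backup front, even a crude nightly cron job dumping the database would have turned this from a ransom note into a shrug. A minimal sketch, again using the illustrative names from above and assuming MYSQL_ROOT_PASSWORD is now set in the container’s environment:

```bash
#!/bin/sh
# Nightly WordPress database dump, run from cron on the VM host.
BACKUP_DIR=/var/backups/blog
mkdir -p "$BACKUP_DIR"

# The password variable is expanded inside the container, where it is set.
docker exec blog-db sh -c \
  'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" wordpress' \
  | gzip > "$BACKUP_DIR/wordpress-$(date +%F).sql.gz"

# Keep the last two weeks of dumps.
find "$BACKUP_DIR" -name 'wordpress-*.sql.gz' -mtime +14 -delete
```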

I was lucky in this instance since I didn’t really lose very much, and you can bet that I’ll be more careful in the future. Hopefully any of you reading this will take the time to review the security of your internet-exposed services and make sure they are appropriately locked down and up to date. I know I am.

I guess a bright side to all this is learning that they must have thought it was a pretty kick-ass article, since they were asking for 0.2 BTC (~$4,400 at the time of writing) to recover it. Though, in reality, I suspect this was done by a bot of some sort, and 0.2 BTC is just the standard asking price they put on a site like this. I would be very surprised if there was any human interaction involved, given how quickly they found the exposed port and vulnerable credentials (if you can call ROOT with no password a “credential”). Given my experience with enterprise firewalls and the logs they generate, I know that there are many, many, MANY automated scans happening all the time looking for vulnerabilities, and perhaps I should be surprised that it took them an entire 2 hours to find it.
