TL;DR: This post isn’t very in-depth, because I don’t want to get on any hype train. But it took me ages to get the point of containers, and how using them is different from running software directly on the host or in VMs. IMO, they’re just another tool to enforce the environment/separation you want. In an age where security is more important than ever but software seems to be getting more complicated, these tools can save you a lot of time and maybe even damage.
This guide on How to Protect Your Infrastructure Against the Basic Attacker is excellent, and even includes many, many howtos. But there’s more than one way to skin a cat.
For those of you (non-sysadmins?) that read that post and said “sounds complicated”, fear not! Many of these points can be easily put into practice with containers. For example, the whole section on “Application/OS Hardening” is basically privilege separation - containers do this well. They also make the “Firewalls and Networking” rules easy to enforce: only the ports you explicitly publish are reachable, and if you get a port wrong, nothing works.
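As a rough sketch of what that looks like in practice (assuming Docker and a hypothetical `mywebapp` image - the flags are real, the image name is made up): a single `docker run` line covers both the privilege separation and the firewall point.

```shell
# Drop privileges inside the container and publish exactly one port.
docker run -d \
  --name mywebapp \
  --user 1000:1000 \
  --read-only \
  -p 127.0.0.1:8080:80 \
  mywebapp:latest
```

`--user` runs the process as an unprivileged user, `--read-only` makes the root filesystem immutable, and `-p 127.0.0.1:8080:80` means the app is only reachable via the host’s loopback interface - everything else is closed by default.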
I think most Docker articles I’ve ever read massively gloss over why and what to use containers for. Articles about BSD Jails are slightly better at explaining this. Containers are not VMs. Ideally you’ll have one process/application per container. The best applications to containerize are ones that use TCP/IP to connect to other applications, because getting data in and out of a container can be a bit tricky. Thanks to the rise of nginx as a reverse proxy, containerization of web applications is simpler than ever.
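To illustrate the reverse-proxy pattern, here’s a minimal nginx sketch (assuming a containerized app listening on the host’s loopback at port 8080 - the hostname and port are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward requests to the containerized app. Only TCP/IP crosses
        # the container boundary, so nginx needs no access to the
        # container's filesystem.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

This is exactly why TCP/IP-speaking applications containerize so cleanly: the proxy and the app only ever exchange network traffic.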
Of course, all these rules are useless if not applied. For this, you’d use IT automation instead of doing it manually - after all, you want to re-apply these rules on any server consistently. I recommend Ansible, but there are others like Salt, Chef, or Puppet. Ansible is especially useful, because you don’t need to install anything on the client. This is invaluable for bootstrapping containers (which I’m sure can be done with Chef, but it’s effort).
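As a hedged sketch of what that automation might look like with Ansible (the host group and image name are hypothetical; the `community.docker.docker_container` module is real), the same container setup can be applied to any number of servers consistently:

```yaml
# playbook.yml - apply the same container setup to every web server.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure the application container is running
      community.docker.docker_container:
        name: mywebapp
        image: mywebapp:latest
        state: started
        restart_policy: always
        published_ports:
          - "127.0.0.1:8080:80"
```

Running `ansible-playbook playbook.yml` re-applies this idempotently - servers already in the desired state are left alone, which is the whole point over doing it by hand.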