Containers have revolutionized development and are now a foundation for DevOps initiatives. However, containers carry complex security risks which may not be obvious. Attackers can exploit these risks when organizations fail to mitigate them.
In this article, we outline how containers contributed to agile development, the unique security risks containers bring into the picture – and what organizations can do to secure containerized workloads, going beyond DevOps to achieve DevSecOps.
Why did containers catch on so fast?
Containers are, in many ways, the evolution of virtualization. The goal was to speed up the development process, creating a more agile route from development through to testing and implementation – a route far more lightweight than spinning up full-blown virtual machines.
The core issue here is compatibility between applications. An application often requires specific versions of libraries, and those requirements can conflict with the needs of other applications on the same system. Containers fixed this problem, and they happened to fit well with development processes and the management infrastructure that drives them.
Containers take virtualization to the next stage. Virtualization abstracts the hardware layer, whereas containers abstract the operating system layer, essentially virtualizing the role of the OS. Containerization is a way to package applications in “containers”, which hold all the libraries an application needs to work. It isolates applications from one another: each application behaves as if it has the OS to itself.
Functionally, containers are quite simple – a container is defined by a plain text file that describes which components should be included in an instance. Because containers are lightweight and simple, it is easy to deploy automation (orchestration) tools throughout the entire development cycle.
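In Docker, for example, that text file is a Dockerfile. A minimal sketch is below; the base image, dependency, and file names are illustrative assumptions, not taken from any particular project:

```dockerfile
# Illustrative Dockerfile: every component of the instance is declared in text.
FROM python:3.12-slim          # base image: OS libraries plus a Python runtime
RUN pip install flask==3.0.0   # an application dependency, pinned to one version
COPY app.py /app/app.py        # the application itself (hypothetical file)
CMD ["python", "/app/app.py"]  # what runs when the container starts
```

Because the whole environment is captured in a few declarative lines like these, tooling can rebuild an identical instance anywhere in the pipeline.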
DevOps for the win… but security matters too
Containers can significantly improve development efficiency and unlock DevOps. That’s likely one of the major reasons why containers have caught on so broadly, with Gartner estimating that, by 2023, 70% of organizations will be running containerized workloads.
The process of developing, testing, and deploying apps used to be filled with obstacles, with constant back and forth between developers and the teams looking after infrastructure. Containers let developers build, test, and deliver code together with a complete specification of its working environment.
On the operational side, teams merely execute this specification to create a matching environment that is ready to use. Developers used to fall back on “It works on my computer…”, but that expression never fixed the problem. Today, developers don’t need it, because there is no mismatched environment to troubleshoot.
So, yes, DevOps means rapid development. But there’s a missing component: security. That’s why DevSecOps, an evolution of DevOps, is gaining popularity: the DevOps approach on its own does not adequately address security.
Containers pose several security threats
Containers make development easier but add complexity to the security picture. If you pack an entire operating environment into a container and distribute it widely, you increase your attack surface. Any vulnerable library packaged with the container spreads its vulnerabilities across countless workloads.
There are several risks. One is a “supply chain attack”, where a malevolent actor mounts an attack not by targeting your application directly, but by tampering with one of the packages or components that ships with it. Development teams must assess not only the application being developed but also every library that the container configuration pulls in as a dependency.
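One common mitigation is to pin base images and dependencies to exact, reviewed versions, so a tampered upstream release cannot slip in silently. A hedged Dockerfile sketch – the digest shown is a placeholder, not a real value, and the pinned package is only an example:

```dockerfile
# Pin the base image to an immutable digest rather than a mutable tag,
# so a compromised upstream "latest" image cannot be silently substituted.
FROM ubuntu@sha256:<verified-digest-goes-here>
# Pin application dependencies to exact versions your team has reviewed.
RUN pip install requests==2.32.3
```

Pinning doesn’t remove the need to review what you pin, but it turns a silent substitution into a visible diff.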
Container security also covers the tools used to enable containers, such as Docker and orchestration tools like Kubernetes. These tools must be protected and monitored. For example, you shouldn’t allow containers to run as root, and you should ensure that your container registries are secure.
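In Kubernetes, for instance, a pod spec can refuse to run a container as root. A minimal sketch, with illustrative pod and image names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app               # illustrative name
spec:
  containers:
  - name: app
    image: example.com/app:1.0     # illustrative image reference
    securityContext:
      runAsNonRoot: true               # refuse to start if the image runs as root
      allowPrivilegeEscalation: false  # block setuid-style privilege gains
      readOnlyRootFilesystem: true     # container cannot modify its own filesystem
```

Settings like these don’t replace monitoring, but they remove whole classes of misconfiguration by default.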
Kernel security at the core of container security
Some of the container-related security risks are less visible than others. Every container requires access to the kernel; containers are, after all, a form of process isolation. It is easy to forget that all containers rely on the same kernel, and that this isolation does not fully separate the applications inside the containers from one another.
The kernel that apps see inside a container is the same kernel the host depends on to function. This creates a few issues. If the kernel on the host that supports the container is vulnerable to an exploit, that vulnerability may be exploited by an attack launched from an app inside a container.
The fact that all containers share the kernel means that any flaws in the kernel need to be fixed quickly or the vulnerability could spread rapidly.
But, again, it boils down to patching
Keeping your host’s kernel current is an essential step to ensure safe and secure container operation. And it’s not just the kernel that needs patching: patches must also be applied to the libraries pulled in by each container. We all know that patching consistently is difficult. That’s probably why one study found that 75% of containers analyzed contained a vulnerability classified as critical or high risk.
These vulnerabilities could lead to attacks such as container breakouts, where an attacker uses a vulnerable library inside a container to execute code outside it. By breaching one container, the attacker can eventually reach the intended target, whether that’s the host system or an application in another container.
In the world of containers, maintaining secure libraries can prove to be quite a headache. Someone needs to keep track of new vulnerabilities and determine what’s been fixed. The process is laborious, and it requires specialist skills – skills your organization will need to acquire if it doesn’t have them already.
Given the value of regular, consistent patching, those reasons shouldn’t be enough to cause the hit-and-miss patching routines that we see. But, particularly for the OS kernel, the disruption of required reboots and the associated need to schedule downtime windows can significantly delay patching. Live kernel patching helps mitigate this problem, but it’s not yet deployed by all organizations.
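The kernel half of that routine can at least be checked automatically. A minimal sketch, assuming a POSIX shell with GNU `sort`; the minimum version used here (5.15.0) is an illustrative placeholder, not a recommendation:

```shell
#!/bin/sh
# Sketch: flag hosts whose kernel is older than a known-patched minimum.
# The threshold below (5.15.0) is an illustrative assumption.
required="5.15.0"
# uname -r reports the running kernel, e.g. "5.15.0-91-generic";
# keep only the leading numeric version for comparison.
current="$(uname -r | cut -d- -f1)"
# sort -V orders version strings numerically; the larger version comes last.
newest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | tail -n1)"
if [ "$newest" = "$current" ]; then
    echo "kernel $current is at or above $required"
else
    echo "kernel $current is below $required: patch (or live-patch) needed"
fi
```

Run from a configuration-management tool or cron, a check like this turns "we should patch" into an inventory of exactly which hosts are behind.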
Always include security goals within your container operations
It is common for new technologies to create problems in information security. New tools commonly lead to new and novel exploits. That’s true for containers too, and while it doesn’t undermine the overall value of using containers in your workloads, it does mean that you need to keep an eye on the risks they pose.
Educating your developers and sysadmins about the common flaws in container security and the best practices that mitigate these flaws is a start. Patching is another important aspect. As always, putting in place the right steps to mitigate cybersecurity flaws will help protect your organization – and allow your team to benefit from that cutting-edge tech without suffering sleepless nights.