There’s no doubt that Kubernetes adoption has increased a lot since its first release. But, as Ian Coldwater said in his talk about abusing the Kubernetes defaults: Kubernetes is insecure by design, and the cloud only makes it worse. Not everyone has the same security needs, and some developers and engineers might want more granular control over specific configurations. Kubernetes offers the ability to enforce security practices as you need, and the available security options have improved a lot over the years.
Today’s post covers a few suggestions on what you can do to make your Kubernetes workloads more secure. I won’t go deep into many of the topics—some of them are pretty straightforward, while others need more theory. But I’ll make sure you at least get the idea of why and when you need to implement these practices. And I’ll provide links for further reading in case any of these are the right fit for you.
Enough words. Let’s get into the details.
Disable public access
Avoid exposing any Kubernetes node to the internet. Aim to work only with private nodes if you can. If you run Kubernetes in the cloud and use its managed service offering, you can disable public access to the API’s control plane. Don’t think about it, just disable it. An attacker with access to the API can extract sensitive information from the cluster. You can use a bastion host, a VPN tunnel, or a direct connection to access the nodes and other infrastructure resources. And in the cloud, consider blocking pod access to the instance metadata service with network policies—more on this later.
If you need to expose a service to the internet, use a load balancer or an API gateway and enable only the ports you need. Always look to implement the least-privileged principle, and close everything by default.
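As a minimal sketch, here’s what a Service that exposes only a single HTTPS port through a load balancer might look like. The `web` app name, labels, and ports are all illustrative:

```yaml
# Illustrative only: exposes a single HTTPS port for pods labeled app: web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web          # only pods with this label receive traffic
  ports:
    - name: https
      port: 443       # the only port opened on the load balancer
      targetPort: 8443
```

Every port you don’t declare here stays closed, which is the least-privilege default you want.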
Implement role-based access control
Stop using the “default” namespace and plan according to your workload permission needs. Make sure that role-based access control (RBAC) is enabled in the cluster. RBAC is simply an authorization method on top of the Kubernetes API. When you enable RBAC, everything is denied by default, but you’ll be able to define granular permissions for the users that have access to the API. Start by creating roles and binding users to those roles. A role contains only allowed permissions, like the ability to list pods, and its scope applies to a single namespace. You can also create cluster roles whose permissions apply to all namespaces.
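As a sketch, here’s a Role that only allows reading pods in a hypothetical `staging` namespace, bound to a hypothetical user `jane` (both names are illustrative):

```yaml
# Illustrative Role: read-only access to pods in the "staging" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
  - apiGroups: [""]               # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Binds the role to a hypothetical user "jane" in that same namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Swap `Role`/`RoleBinding` for `ClusterRole`/`ClusterRoleBinding` when the permissions should span all namespaces.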
I suggest you read the official docs for RBAC in Kubernetes to learn more about its capabilities and how to implement it in your cluster.
Encrypt secrets at rest
Kubernetes architecture uses etcd as the database to store Kubernetes objects. All the information about the cluster, the workloads, and the cloud’s metadata is persisted in etcd. If an attacker gains control of etcd, they can do whatever they want—such as revealing secrets for database passwords or accessing sensitive information. Since Kubernetes 1.13, you can enable encryption at rest. With it, etcd backups are encrypted too, and attackers can’t decrypt the secrets without the encryption key. A recommended practice is to use a key management service (KMS) provider like HashiCorp’s Vault or AWS KMS.
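Here’s a sketch of an `EncryptionConfiguration` that you’d pass to the API server via the `--encryption-provider-config` flag. The AES-CBC key is a placeholder; in production you’d typically point at a `kms` provider instead of keeping a key on disk:

```yaml
# Sketch: encrypt secrets at rest with a local AES-CBC key (placeholder only;
# prefer a kms provider such as Vault or a cloud KMS in production).
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder value
      - identity: {}   # fallback so existing unencrypted data stays readable
```

Provider order matters: new writes use the first provider, while `identity` lets the API server still read objects written before encryption was enabled.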
Configure admission controllers
After a request to the Kubernetes API has been authorized, you can use an admission controller as an extra layer of validation. An admission controller may change the request object or deny the request. As Kubernetes usage grows in your company, you’ll need to enforce specific security policies in the cluster automatically—for example, that containers always run as unprivileged users, or that images are pulled only from authorized repositories and have been scanned beforehand. You can find other policies on the official Kubernetes docs site.
Implement networking policies
Similar to admission controllers, you can also configure access policies at the networking layer for pods. Networking policies are like firewall rules to pods. You can limit access to pods through label selectors, similar to how you might configure a service by defining label selectors for which pods to include in the service. When you set a network policy, you configure the labels and values a pod needs to have to communicate with a service. Another notable scenario is the one I mentioned before about the attacker accessing instance metadata in the cloud. You can define a network policy to deny egress traffic to pods, limiting the access to the instance metadata API.
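The metadata scenario can be sketched as a NetworkPolicy like the one below: pods labeled `app: web` (an illustrative label) may send egress traffic anywhere except the instance metadata endpoint at 169.254.169.254. Note that your CNI plugin must support NetworkPolicy for this to take effect:

```yaml
# Sketch: allow all egress except the cloud instance metadata endpoint.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-egress
spec:
  podSelector:
    matchLabels:
      app: web              # illustrative label selector
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # block the metadata API
```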
Configure a security context for containers
Even if you’ve implemented all of the practices I mentioned before, an attacker can still do some damage through a container. Because of the way Kubernetes and Docker are architected, someone who breaks out of a container could gain access to the underlying infrastructure. For that reason, make sure you run containers with the privileged flag turned off.
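A minimal sketch of a pod that drops privileges might look like this (the image name is illustrative):

```yaml
# Sketch: a pod that runs unprivileged, as a non-root user, with a
# read-only root filesystem and all Linux capabilities dropped.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      securityContext:
        privileged: false
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

Each of these settings closes a different escape route, so apply as many as your workload tolerates rather than just the privileged flag alone.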
There are other tools and technologies you can use to increase security in the cluster by adding another layer of protection, like AppArmor, Seccomp, or gVisor. These technologies help by sandboxing containers so they run securely alongside other tenants in the system. Although these are still emerging practices, they’re worth keeping in mind.
Segregate sensitive workloads
Another option is to use Kubernetes features like namespaces, taints, and tolerations to segregate sensitive workloads. You can apply more restrictive policies and practices to those workloads where you can’t afford the luxury of a data breach or service downtime. For instance, you can tag a cluster of worker nodes (node pool) and restrict who can schedule pods to those nodes with RBAC roles.
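As a sketch, assuming the sensitive node pool was tainted with something like `kubectl taint nodes <node> workload=sensitive:NoSchedule` and labeled `pool: sensitive` (both illustrative), only pods carrying a matching toleration can land there:

```yaml
# Sketch: a pod that tolerates the "sensitive" taint and is pinned to
# the sensitive node pool. Names and labels are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: payments
spec:
  tolerations:
    - key: "workload"
      operator: "Equal"
      value: "sensitive"
      effect: "NoSchedule"
  nodeSelector:
    pool: sensitive          # illustrative node-pool label
  containers:
    - name: payments
      image: registry.example.com/payments:1.0   # illustrative image
```

The taint keeps ordinary workloads off those nodes, and RBAC restricts who can create pods with the toleration in the first place.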
Scan container images
Avoid using container images from public repositories like Docker Hub, or at least only use them if they’re from official vendors like Ubuntu or Microsoft. A better approach is to write your own Dockerfile, build the image, and publish it in your own private image repository where you have more control. Even when you build your own container images, make sure you include tools like Clair or MicroScanner to scan containers for potential vulnerabilities.
Enable audit logging
At some point, your systems may get infected. And when that happens (or if it happens), you’d better have logs to find out what the problem was and how the attacker was able to bypass all your security layers. In Kubernetes, you can create audit policies to decide at which level and for which resources you’d like to log each call to the Kubernetes API. Once you have logs enabled, you can work on persisting them in a centralized place. Depending on the tool you use, you can configure alerts, send notifications, or use webhooks to automate a patch. For instance, you might set an immediate action like terminating existing pods in the cluster that could have been affected.
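Here’s a sketch of an audit policy file, passed to the API server via the `--audit-policy-file` flag. Rules are evaluated in order and the first match wins, so more specific rules go first:

```yaml
# Sketch: log secret/configmap access without payloads, drop read-only
# noise, and capture full detail for everything else.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata                  # who touched secrets, but not the values
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  - level: None                      # skip read-only noise
    verbs: ["get", "list", "watch"]
  - level: RequestResponse           # full detail for all other requests
```

The `Metadata` level for secrets matters: logging at `RequestResponse` there would copy the secret values into your audit logs.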
If you’re running in the cloud, you can enable audit logging in the control plane. This is true at least for the three major cloud providers: AWS, Azure, and GCP.
Keep your Kubernetes version up to date
Last but not least, make sure you’re always running the latest version of Kubernetes. You can see the list of vulnerabilities Kubernetes has had on the CVE site, where each vulnerability gets a score that tells you how bad it is. Always plan to upgrade your Kubernetes version to the latest available. If you’re using a managed version from a cloud vendor, some of them handle the upgrade for you. If not, Google published a post with recommendations on how to upgrade a cluster with no downtime. It doesn’t matter if you’re not running on Google—the advice applies regardless of where you’re running Kubernetes.
That covers a good range of Kubernetes security best practices that everyone should consider. As you’ve noticed, I didn’t discuss many of the topics in much detail, and by the time you’re reading this post, Kubernetes might have published another feature to increase security—dynamic admission control, for example, went live only a few months ago. If you’d like to dive deeper into the current state of Kubernetes security options, I’d suggest you read the Kubernetes official documentation for more in-depth recommendations on securing a cluster. And in case you’re a podcast fan like me, there are two good episodes of Google’s Kubernetes Podcast where they talk about Kubernetes security and how to attack and defend Kubernetes.
This post was written by Christian Meléndez. Christian is a technologist that started as a software developer and has more recently become a cloud architect focused on implementing continuous delivery pipelines with applications in several flavors, including .NET, Node.js, and Java, often using Docker containers.