
AKS Security

Everyone is working hard on the new platform and then someone asks: “What about security?”

Published: 18 March 2021

You have a deadline: you promised your customers you would launch your new platform on a specific date, everyone is working hard, and you’re barely meeting your target. And then someone asks: “What about security?”

It is likely that you built some kind of solution that processes and potentially stores data. That data can be sensitive. But have you really thought about protecting that data and keeping malicious actors out? When you’re heavily investing in a new technology, security is something that not many people think of from the start, and it is often implemented later in the project.

By default, Kubernetes deployments are not that secure (let’s call it a “next, next, finish” configuration), and you will have to revisit your configuration multiple times and verify that your platform still works after implementing security countermeasures. Sounds like something you should do from the start? Definitely! Don’t worry, this happens quite often. In this article we will share some best practices that we recommend to customers and usually implement as early as possible in the project.

 

Authentication and authorization

By default, AKS comes with a kubeconfig file. This file contains the necessary information to connect to your clusters (including credentials). Having access to this file means having access to the Kubernetes cluster. Definitely not a file you want to have lying around! To minimize this risk, we need additional authentication and authorization for the AKS environment. We can do this by integrating Azure Active Directory with AKS and requiring Azure AD group membership and authentication to log in to the cluster.
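As a rough sketch of what that looks like with the Azure CLI (the resource group, cluster name and group object ID below are placeholders), enabling the managed Azure AD integration comes down to a flag at creation time:

```bash
# Create an AKS cluster with managed Azure AD integration enabled.
# Members of the given Azure AD group become cluster administrators.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --aad-admin-group-object-ids "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Users then authenticate with their own Azure AD account:
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes   # triggers an Azure AD sign-in prompt
```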

It’s a fairly straightforward process but comes with many benefits. Once set up, you can not only require Azure AD authentication but also configure group membership requirements for accessing specific namespaces. For example, if you have two teams working on two different solutions in different namespaces, you can allow each team access to only the parts they need. As we are used to with Kubernetes, this can be done using the well-known YAML files, but instead of creating deployments, services or ingress rules we’re now deploying Roles and RoleBindings. The Role configures what can be accessed, and the RoleBinding applies that access to users and groups.
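A minimal sketch of such a pair, assuming a namespace called team-a and substituting the object ID of your Azure AD group for the placeholder:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-dev
  namespace: team-a            # the namespace this role is scoped to
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-binding
  namespace: team-a
subjects:
  - kind: Group
    # Object ID of the Azure AD group (placeholder value)
    name: "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-dev
  apiGroup: rbac.authorization.k8s.io
```

Members of the Azure AD group can now work inside team-a but get nothing beyond what the Role grants.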

It sounds very simple, but it’s important that you configure this as the first thing after creating your cluster. If you start changing permissions afterwards, you can run into scenarios where deployments break (what if someone is running the deployment pipeline from their user account with admin-level privileges?).

Currently, Azure Role-Based Access Control for AKS, which allows you to assign Azure roles for accessing your cluster, is in preview. Definitely keep an eye out for it, because it will simplify the authorization process even more.

 

Use Managed Identity

If you’re already using Azure, something you might have come across is the concept of “managed identity”. When interacting with the Azure platform itself (maybe you are using an Application Gateway, Azure Container Registry or Azure Key Vault), AKS needs an identity to access those resources. You have two options there: a Service Principal or a Managed Identity.

Using a Service Principal means you have to register one in Azure AD, provide it with the necessary permissions, configure it in AKS and store its keys somewhere. This Service Principal has a life cycle you need to manage: credentials need to be refreshed over time, you need to redistribute them to anyone who is using them, and you definitely want to enable logging on the usage of the Service Principal. Yes, you could argue that nobody is allowed to use the Service Principal for anything else, but in reality there is a fair chance that someone has those credentials and might use them.

The alternative to a Service Principal is a Managed Identity. Managed Identities are basically a wrapper around Service Principals, with the life cycle management done for you. Setting one up doesn’t take much more effort than setting up a Service Principal, but you get added benefits such as automatic key rotation (yes, Managed Identities do this) and you no longer need to worry about updating the configuration of your cluster after credentials change.

Creating a managed identity for AKS is done during the AKS deployment (this is one of those reasons why you want to think of security at the start of your project 😊). It requires you to add a parameter to your deployment, and Azure Resource Manager does the rest.
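With the Azure CLI this comes down to one extra flag; a sketch (resource names are placeholders):

```bash
# Create an AKS cluster that uses a system-assigned managed identity
# instead of a service principal. Azure creates and life-cycle-manages
# the identity; there are no keys for you to store or rotate.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-managed-identity
```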

Keep in mind that these managed identities share the life cycle of your cluster: if you delete your cluster, the identities are deleted as well. Bringing your own identity is currently in preview for a limited set of features within AKS, but it will allow you to decouple the life cycle of the identity from your cluster.

 

Azure Policy

Azure Policy is not just for managing the Azure Resource Manager side of things. Did you know you can also enable Azure Policy to make sure your Pod specifications comply with your requirements? Currently you can only use the built-in policies that Microsoft Azure provides, but there is no reason not to enable this: it’s out-of-the-box additional security for your environment. To be fair, it does take up a little bit of your resources, and this increases with the number of pods in your cluster, but it’s really not too bad. Also worth mentioning: this is supported in hybrid scenarios with Azure Arc as well! You can see an overview of the supported policies here. These policies include preventing privileged containers from running in your cluster, requiring HTTPS on your ingress configuration, preventing the use of public IPs on services (use internal load balancers) and requiring the use of labels.
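Enabling it on an existing cluster is a one-liner; a sketch with the Azure CLI (resource names below are placeholders):

```bash
# Enable the Azure Policy add-on on an existing AKS cluster.
# This deploys the in-cluster components that evaluate pod
# specifications against the assigned built-in policies.
az aks enable-addons \
  --addons azure-policy \
  --resource-group myResourceGroup \
  --name myAKSCluster
```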

What you will be doing here is deploying policies that check for and enforce many of the best practices described in the previous articles and the articles to come. It takes little effort to configure and takes away a lot of the manual work of double-checking compliance.

 

Network security

Implementing network security and network policies is something I find a lot of people don’t do. And to be honest, during a project I’m also not the biggest fan. Why? Limiting access to a pod is troublesome when you are troubleshooting and explaining to the customer how it all works. But that doesn’t mean you shouldn’t enable them; we just have to set everything up correctly, invest the right amount of time and get used to them. Because really, should every Pod have internet access, or should every Pod be able to talk to every other Pod? The answer is usually “Nope”. That’s why we need to implement network security and prevent unnecessary traffic from taking place. By creating a network policy definition you can achieve that. Don’t worry, you don’t need to configure this for each individual pod (who wants to do that?!). Again, labels to the rescue! You can allow or deny traffic based on a label.
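A minimal sketch of a label-based policy (the namespace, labels and port are made up for illustration; note that the cluster needs to be deployed with a network policy plugin such as Azure or Calico for policies to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend            # this policy applies to pods labelled app=backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only pods labelled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080
```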

But there’s more. We can also turn the namespace into a network security boundary: by using a namespaceSelector in our network policy, we can limit traffic to within the namespace. When you combine this with Role-Based Access Control, your namespace suddenly does become a security boundary instead of just a logical way to group your deployments.
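A sketch of that idea, assuming you label the namespace yourself (the team: demo label is an assumption for this example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: demo
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: demo       # only traffic from namespaces with this label
```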

 

Azure Defender for Kubernetes

We also need threat detection. And for many services on Azure, threat detection can be achieved by using Azure Defender.

Azure Defender for Kubernetes provides run-time protection and comes at two different levels:

  • Host Level
  • AKS Cluster Level


Host Level

On a host level, Azure Defender checks for suspicious events. Think of connections to known suspicious IP addresses, privileged container creation and SSH running inside containers. Running Azure Defender on a host level requires the installation of the Log Analytics agent.
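If you don’t have the agent yet, enabling the monitoring add-on installs it on your nodes. A sketch with the Azure CLI (the resource names and workspace ID are placeholders):

```bash
# Install the Log Analytics agent on the cluster nodes by enabling
# the monitoring add-on, pointed at an existing workspace.
az aks enable-addons \
  --addons monitoring \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
```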


AKS Cluster Level

This level of monitoring is agentless; protection is based on analysing the Kubernetes event logs. Suspicious behavior on a cluster level will be reported, including the creation of highly privileged roles, dashboards published to the outside world, and so on.

We recommend always configuring Azure Defender at both the host and cluster level. Yes, it does require the installation of an agent, but let’s be honest: how much insight do you really have into the security of your nodes by default?

This is something that is often forgotten, and the installation of an agent might hold you back. But having the Log Analytics agents configured isn’t only a benefit when it comes to Azure Defender. Logging is a thing, and you need it 😊.


Private clusters

Let’s say you don’t want to expose anything publicly. Or well, you do, but in a more controlled manner, and you’re looking for a hub-spoke configuration to completely control the flow of traffic. You can, with private clusters. Yes, they do come with some limitations, but if those don’t affect the features you require, private clusters are an option. Please keep in mind that you can’t convert a regular cluster to a private cluster; this is definitely something you want to determine before you start deploying your cluster(s).

Private clusters make sure that all traffic between the API server and the nodes stays within your private network. You will use Azure Private Link to set up communication between the different services. Due to these limitations, private clusters are not used as much as regular clusters, but if you are looking at completely isolating network traffic, routing traffic through a firewall subnet/VNet (hub-spoke) and using more complex VPN setups, then you should definitely look at private clusters.
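As with the other options, this is decided at creation time; a sketch (placeholder names again):

```bash
# Create a private AKS cluster: the API server is exposed through a
# private endpoint in your virtual network instead of a public address.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-private-cluster
```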

Note that a combination of network policies, pod security, VNet peering and network security groups will also bring you a long way when it comes to isolating your environment. In general, we see that this is enough and the private cluster scenario is not always required.

 

Summary

There is much more to be learned when it comes to security on Kubernetes. This article is intended to give you an overview of what’s possible and which best practices you should take into account when deploying your cluster. Security is definitely something you want to plan for and implement as early in the process as possible. Each practice can be a study on its own, but I hope I gave you a good overview of the features already available that you need to look at. Even if you implement them with the out-of-the-box configuration, it’s a start and will already improve your security.

 

This article is part of a series 

Read all about Ingress, Services, Pods and Namespaces in this follow-up article.

Want to read the previous articles? Click here:
1. The evolution of AKS
2. Hybrid deployments with Kubernetes
3. Microservices on AKS
4. Update scenarios for AKS
5. Linux vs. Windows containers


Sign up here for our Intercept Insights and we’ll keep you updated with the latest articles.


Visit our AKS workshop

Learn even more about AKS through our interactive AKS workshop. In 1.5 hours you will learn the benefits and best practices that make your environment more efficient. By working through common AKS challenges, you will be ready for AKS. Click here for dates and registration!