Besides its intelligent design and the fact that it solves the problems of distributed, scalable, fault-tolerant, and highly available systems, Kubernetes’ power comes from the adoption and support of a myriad of individuals and companies. You can harness that power, as long as you understand that it comes with responsibilities.
It’s up to you to choose what your Kubernetes cluster will look like and which components it’ll host. You can decide to build it from scratch, or you can use one of the hosted solutions like Google Kubernetes Engine (GKE). There is a third option, though. We can choose to use one of the installation tools. Most of them are highly opinionated and offer only a limited number of arguments we can use to tweak the outcome.
You might be thinking that creating a cluster from scratch using kubeadm cannot be that hard. You’d be right if running Kubernetes were all we needed. But it isn’t. We need to make the cluster fault-tolerant and highly available. It needs to stand the test of time. Constructing such a robust solution would require a combination of Kubernetes core and third-party components, AWS know-how, and quite a lot of custom scripts to tie everything together. We won’t go down that road. At least, not now.
We could, for example, create a new cluster that would be used only for testing purposes. While that is indeed a good option in some situations, in others it might be a waste of resources. Moreover, we’d face the same challenge in the testing cluster. There might be multiple new releases that need to be deployed and tested in parallel.
Another option could be to create a new cluster for each release that is to be tested. That would create the necessary separation and maintain the freedom we strive for. However, that approach is slow. Creating a cluster takes time. Even though ten minutes (if not more) might not sound like much, wasting it only on cluster creation is too much. And even if you disagree and think that ten minutes is acceptable, such an approach would be too expensive.
Every cluster carries a resource overhead that needs to be paid. While the overall size of a cluster affects that overhead, the number of clusters affects it even more. It’s more expensive to run many smaller clusters than one big one. On top of all that, there is the operational cost. While it is often not proportional to the number of clusters, it still grows with every cluster we add.
Having a separate cluster for all our testing needs is not a bad idea. We shouldn’t discard it, just as we shouldn’t discard the idea of creating (and destroying) a new cluster for each release. However, before we start creating new Kubernetes clusters, we’ll explore how we might accomplish the same goals with a single cluster and with the help of Namespaces.
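As a minimal sketch of that idea (the release names below are purely illustrative), two releases could be tested in parallel inside a single cluster by giving each its own Namespace:

```yaml
# Two Namespaces acting as isolated test environments
# inside a single cluster. The names are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: release-1-0-beta
---
apiVersion: v1
kind: Namespace
metadata:
  name: release-1-1-beta
```

We could create both with a single `kubectl apply -f` of that file, and remove an entire test environment later by deleting its Namespace, without touching anything else in the cluster.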
The moment we create a “real” cluster where the whole company will collaborate (in one form or another), we’ll need to define (and apply) an authentication and authorization strategy.
If your business is small and there are only a few people who will ever operate the cluster, giving everyone the same cluster-wide administrative permissions is a simple and legitimate solution. More often than not, though, that will not be the case.
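For example, a sketch of such a blanket policy, assuming a user group called `sysadmins` (the group name is hypothetical), could bind everyone in that group to the pre-defined cluster-admin ClusterRole:

```yaml
# Grants the (hypothetical) "sysadmins" group full administrative
# access to every resource in every Namespace of the cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sysadmins-cluster-admin
subjects:
- kind: Group
  name: sysadmins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```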
Your company probably has people with different levels of trust. Even if that’s not the case, different people will require different levels of access. Some will be allowed to do anything they want, while others will not have any access at all. Most will be able to do something in between. We might choose to give everyone a separate Namespace and forbid them from accessing others. Some might be able to operate a production Namespace while others might be interested only in the one assigned to development and testing.
The number of permutations we can apply is infinite. Still, one thing is certain. We will need to create an authentication and authorization mechanism. Most likely, we’ll need to create permissions that are sometimes applied cluster-wide and, in other cases, limited to Namespaces.
Those and many other policies can be created by employing Kubernetes authorization and authentication.
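As an illustration of a Namespace-limited policy (the user, Role, and Namespace names are made up for this sketch), the following gives a single user read-only access to Pods in a `testing` Namespace, while leaving the rest of the cluster off-limits:

```yaml
# A Role defines permissions that apply only inside the "testing" Namespace...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: testing
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# ...and a RoleBinding grants that Role to a specific user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-jane
  namespace: testing
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The same permissions could be applied cluster-wide by using a ClusterRole and a ClusterRoleBinding instead, which is precisely the distinction between cluster-wide and Namespace-limited access mentioned above.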