Going hybrid with Kubernetes

One of the motivations for using containers and Kubernetes I come across is “We want to be multi-cloud”. But how easy is it to achieve a hybrid or multi-cloud scenario?

Published: 12 November 2020

There are some technologies that can help you from a management perspective (more on that later) but to truly achieve a multi-cloud / hybrid strategy you need to understand the capabilities of the different platforms. You can run Kubernetes on-premises or hosted in different clouds. And let’s be honest, container technology does sound really promising if you’re looking to support multiple platforms. It just gets a little bit more complicated if you throw in orchestration solutions like Kubernetes.

Hosting on different clouds can have multiple motivations. Maybe some of your customers want the solution hosted on AWS, others might prefer Azure. Either way, it’s wise to understand what’s out there and thus what your options are. To make it even more complicated, Kubernetes itself and the public clouds are ever evolving, and new technologies and support for new features are added rapidly. What might be unsupported today may very well be supported next month.

Please note that we’re looking at the Kubernetes side of things. Building a hybrid / multi-cloud solution also requires you to look at the differences in terms of legal, compliance and security, because those will also impact the use case for you and your customers.

Different Technologies

First of all, we have Kubernetes. This is the platform itself. You can install it yourself or run it in a public cloud environment (hosted). There are lots of hosted Kubernetes solutions out there but the most popular ones you will come across are hosted in Azure, Google or Amazon Web Services. To make it easier (ahem) these clouds have their own Kubernetes deployment with integrations native to their platform.

On Amazon we have Amazon Elastic Kubernetes Service (EKS), on Google we have Google Kubernetes Engine (GKE), and on Azure we have Azure Kubernetes Service (AKS). All of them run a version of the original Kubernetes, but not all support the same functionality. The differences are usually in the level of platform integration: think of autoscaling, pricing and monitoring. On the other hand, public clouds sometimes have Kubernetes features specific to their platform that are not available on other clouds or on-premises. For example, Azure Policy integration with Kubernetes is not natively available on other clouds. More on that later. These may seem like simple differences, but when it comes to enterprise-scale clusters they can make a world of difference, especially if you want to have an identical configuration across multiple clouds.

Pipelines

Once you have decided on the platforms you want to use, you get to the more challenging part of the technology. Deploying solutions to your cluster can be done in many ways, but it’s definitely wise to standardize your deployment across environments.

A good rule of thumb: make sure all your environments support the features you need. If one platform supports more features than the others, either don’t use those extra features or accept that you’ll be managing a custom deployment for that platform.

Definitely standardize on one deployment technology though, and not just for containers but also for Kubernetes redeployments. Having your Kubernetes deployment scripted and automated means you are ready to scale across different regions and already have a large part of your disaster recovery scenario in place.

Standardizing your container / pod / solution deployment is fairly simple. Either use Helm or the native Kubernetes YAML files; they should work similarly, if not identically, across different platforms (make sure your Kubernetes version matches though). The actual cluster deployments, on the other hand... that is a different story.
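
As a minimal sketch of what that standardized workload rollout could look like (the kubeconfig context names, chart path and values files are made up purely for illustration):

    # Roll out the same Helm release to each cluster by switching kubeconfig contexts.
    # Context names, chart path and values files are placeholders.
    for ctx in aks-prod eks-prod gke-prod; do
      kubectl config use-context "$ctx"
      helm upgrade --install my-app ./charts/my-app \
        --namespace my-app --create-namespace \
        --values "values-$ctx.yaml"
    done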

On Azure you would go with Azure Resource Manager (ARM) templates or the Azure CLI. On AWS you might go with CloudFormation, and on Google Cloud you would use something different again, such as Cloud Deployment Manager.
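
To give an idea of the Azure side, a basic AKS deployment with the Azure CLI could look like the sketch below. The resource group, cluster name, region and node size are purely illustrative, so check the current az aks create options against your own requirements.

    # Create a resource group and a basic AKS cluster (illustrative values).
    az group create --name rg-hybrid-demo --location westeurope

    az aks create \
      --resource-group rg-hybrid-demo \
      --name aks-demo \
      --node-count 3 \
      --node-vm-size Standard_DS2_v2 \
      --generate-ssh-keys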

You basically have two options:

  • Implement using the native technology specific to that cloud;
  • Use a more generic solution like Terraform or Pulumi.

Both options have pros and cons. Using the native technology for a cloud means you have support for the latest features. Using a generic solution like Terraform or Pulumi means that the language you deploy each cloud with is identical (the deployment scripts themselves will still differ, as they are specific to each platform), but it usually means you can’t use the latest features straight away.
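
With a generic tool, the day-to-day workflow does stay identical across clouds, even though the configuration underneath differs per provider. A rough sketch of what that looks like with Terraform (the directory names are hypothetical):

    # Same Terraform workflow for every cloud; only the provider-specific
    # configuration inside each directory differs. Directory names are made up.
    cd clusters/azure    # or clusters/aws, clusters/gcp
    terraform init
    terraform plan -out=tfplan
    terraform apply tfplan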

Either way, you have to decide before you start deployment. Are all the features you need available in those third-party solutions? Then go with one of them. Are you looking to use cutting-edge features? Then invest in understanding the different native deployment technologies for those clouds.

There is no “right” answer here, all I can say is “research, research, research and plan”.

Azure Arc

If we look beyond deployment and focus on managing your cluster in a hybrid scenario, Microsoft released an amazing feature in 2019 and it’s still evolving rapidly. Azure Arc is built for hybrid scenarios, and not just for Kubernetes but for much more.

Even though Azure Arc is still in preview and features can be limited depending on what you need, the future is bright!

By enabling a Kubernetes cluster for Azure Arc, you make that cluster visible in the Azure Portal. This allows for a single control plane and an integration with native Microsoft Azure features. Pretty cool huh? To get into the technicalities of it, an agent will be deployed in the azure-arc namespace. This agent is responsible for the communication with Azure (a minimal onboarding sketch follows the list below). Currently all CNCF-certified Kubernetes clusters are supported, and the following have been successfully tested by the Azure Arc team:

  • RedHat OpenShift 4.3
  • Rancher RKE 1.0.8
  • Canonical Charmed Kubernetes 1.18
  • AKS Engine
  • AKS Engine on Azure Stack Hub
  • AKS on Azure Stack HCI
  • Cluster API Provider Azure

(source: Microsoft Docs)
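
For the curious, onboarding an existing cluster looks roughly like the sketch below. Azure Arc was still in preview at the time of writing, so treat the exact commands as an assumption and verify them against the docs; the resource group and cluster names are placeholders, and this assumes your current kubeconfig context points at the cluster you want to connect.

    # Add the preview CLI extension and connect an existing cluster to Azure Arc.
    az extension add --name connectedk8s

    az group create --name rg-arc-demo --location westeurope

    # Deploys the Arc agents into the azure-arc namespace and registers the
    # cluster as a connected cluster resource in Azure.
    az connectedk8s connect --name my-onprem-cluster --resource-group rg-arc-demo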

So... what features do you have available when you enable Azure Arc for your cluster? Well, currently you can see your Kubernetes resources through the Azure control plane for inventory, tagging, grouping, etc. We’re talking about one single overview for all your clusters.
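
Because a connected cluster is just another Azure resource, the usual inventory and tagging tooling applies. A small sketch (the resource group and cluster names are placeholders):

    # List the connected clusters in a resource group.
    az connectedk8s list --resource-group rg-arc-demo --output table

    # Tag a connected cluster like any other Azure resource.
    az resource tag --tags environment=production owner=platform-team \
      --resource-group rg-arc-demo \
      --name my-onprem-cluster \
      --resource-type "Microsoft.Kubernetes/connectedClusters"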

Additionally, you can deploy applications and configurations using GitOps-based management, and there is support for Azure Monitor and Azure Policy for Kubernetes.
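
As an example of the GitOps flow, attaching a configuration to an Arc-enabled cluster looked roughly like this with the preview CLI extension; the extension, flags, names and repository URL should all be treated as assumptions to verify against the current docs.

    # Attach a GitOps (Flux) configuration to an Arc-enabled cluster (preview).
    az extension add --name k8s-configuration

    az k8s-configuration create \
      --name cluster-config \
      --cluster-name my-onprem-cluster \
      --resource-group rg-arc-demo \
      --cluster-type connectedClusters \
      --scope cluster \
      --operator-instance-name flux \
      --operator-namespace cluster-config \
      --repository-url https://github.com/example/cluster-config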

Wrap Up

Hybrid deployments may sound like the holy grail at first, but they do take some planning before you get started. You have multiple platforms to manage and multiple technologies to understand. So how could you go about it? Let’s sum it up:

  • Research the (private) cloud platforms you need to use: do market research and determine your technical requirements;
  • Build your deployment pipeline and make sure you can deploy to multiple clouds. Again, choose the right technology for deployments (will it be that cloud’s native deployment technology or are you going to use a third-party solution?);
  • Use Azure Arc to create a single control plane across environments.

Of course, there is more to it depending on the complexity of your environment, but these items will always come back and greatly impact the usability and manageability of your solution. You definitely don’t want to end up with three different monitoring dashboards, three different deployment code bases and three different teams to support three different clouds.

This article is part of a series 

Read all about Microservices on AKS in this follow-up article.

Read back the previous article? Click here: The evolution of AKS

