Where to start?
An interesting starting point is a solution such as Azure Migrate.
Azure Migrate is a service that helps you assess and migrate your on-premises or cloud-based applications to Azure. It supports various migration scenarios, including rehosting, refactoring, rearchitecting, and rebuilding. One of the scenarios that Azure Migrate can assist you with is containerizing your existing applications that run on virtual machines.
Containerising an application means packaging it with its dependencies and configuration in a portable, isolated unit that can run on any platform supporting containers. Containers offer many benefits, such as faster deployment, higher scalability, lower resource consumption, and improved reliability. However, containerising an application may require changes in the code, the architecture, or the deployment process, which can be challenging for some developers.
Azure Migrate for containerisation provides a guided and automated way to containerise your existing applications without requiring code changes. It uses Azure Migrate App Containerization, a tool that analyses your application, identifies its dependencies, creates a Dockerfile and a Helm chart, builds a container image, and pushes it to a container registry. You can then use Azure Migrate to deploy your containerised application to Azure Kubernetes Service (AKS), a fully managed service that orchestrates and manages your container workloads.
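The artefacts the tool generates are ordinary Docker and Helm files that you can inspect and edit. As a rough illustration of the idea (not the tool's actual output; the image tag, paths, and application name are assumptions), a generated Dockerfile for an ASP.NET application might look something like this:

```dockerfile
# Hypothetical sketch of a generated Dockerfile; names and paths are
# assumptions for illustration, not App Containerization's exact output.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

Because it is a plain Dockerfile, nothing stops you from refining it later, for example by switching to a multi-stage build or a smaller base image.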
Azure Migrate for containerisation can be a stepping stone into containerisation, as it provides a first experience you can build upon. You can use it to quickly and easily migrate your existing applications to containers and run them on Azure. You can also use it to learn the best practices and tools for containerisation, such as Docker and Helm. You can then modify or extend the generated artefacts to customise your containerised applications according to your needs.
Why a stepping stone?
Azure Migrate is unlikely to give you the production-ready solution you are looking for, but it will kickstart your journey.
The decision was made, and you started. What’s next?
First and foremost, it is crucial to understand the characteristics of containers and how they translate to your current solution. These include:
- Scalability: One of the main characteristics of containers is that they are scalable, meaning that they can easily adjust to changes in demand and workload. Containers can be created, deployed, and destroyed in a matter of seconds, allowing for rapid scaling up or down. Containers also share the same underlying operating system and resources, which reduces the overhead and improves efficiency.
- Portability: Another benefit of containers is that they are portable, meaning that they can run on any platform that supports container runtime. Containers encapsulate all the dependencies and configurations of an application, making it easy to move them across different environments, such as development, testing, and production. Containers also enable consistent and reliable deployments, ensuring that the application behaves the same way regardless of where it runs.
- State: State or no state? Another characteristic to consider is how a container handles data. A stateless container keeps no persistent data inside the container itself; anything written to its writable layer is lost when the container stops or is deleted, so it relies on external services or volumes for any data it needs to keep. A stateful container, by contrast, owns data that must survive restarts, which in practice means attaching persistent volumes or external storage. Stateless containers are more suitable for scaling and portability, as any replica can handle any request without depending on local state.
- Orchestration: In bigger environments with many containers running, you need an orchestrator, meaning a tool that manages and coordinates multiple containers across multiple nodes. Orchestrators, such as Azure Kubernetes Service (AKS), provide features such as service discovery, load balancing, health monitoring, configuration management, and security for containers. They also automate the deployment, scaling, and recovery of containers, making it easier to handle complex and distributed applications.
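The four characteristics above surface directly in a Kubernetes deployment manifest. Below is a minimal sketch, assuming a hypothetical backend API image pushed to an Azure Container Registry; the names, image tag, and health endpoint are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api            # hypothetical name
spec:
  replicas: 3                  # scalability: scale out by raising this number
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
      - name: backend-api
        # portability: the same image runs in dev, test, and production
        image: myregistry.azurecr.io/backend-api:1.0
        ports:
        - containerPort: 80
        livenessProbe:         # orchestration: AKS restarts unhealthy replicas
          httpGet:
            path: /health
            port: 80
```

Note that `replicas: 3` only works if the application tolerates multiple concurrent instances, which is exactly the question the next paragraphs raise.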
Now, these are all amazing characteristics; some would call them features. However, your solution has to support them. Take scalability, for example: are you aiming for the best possible availability by running multiple replicas of your backend API?
That would be a good choice, but are you sure your application supports it? Will a piece of legacy code prevent you from running multiple instances of your API, for example by producing duplicate database entries?
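The duplicate-entry problem is easy to reason about with a small sketch. Assuming a hypothetical orders table keyed by an external order id, an idempotent insert lets two replicas safely process the same message: the second insert is ignored instead of creating a duplicate row.

```python
import sqlite3

# In-memory stand-in for the application's database (assumption: a
# simple "orders" table keyed by an external order id).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, status TEXT)")

def process_order(order_id: str) -> None:
    # Idempotent insert: if two API replicas handle the same order,
    # the conflicting second insert is a no-op rather than a duplicate.
    conn.execute(
        "INSERT INTO orders (order_id, status) VALUES (?, 'received') "
        "ON CONFLICT(order_id) DO NOTHING",
        (order_id,),
    )
    conn.commit()

# Simulate two replicas receiving the same order.
process_order("A-1001")
process_order("A-1001")

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1 row, not 2
```

A naive `INSERT` without the unique constraint and conflict clause would leave two rows here, which is the kind of legacy behaviour that blocks running multiple instances.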
These are things that need to be tested for and adjusted. The most important thing is accepting that when you refactor your application to run inside containers instead of on a traditional virtual machine, you will not implement all the best practices at once. Instead, you will adjust the infrastructure to work around limitations, and perhaps even run stateful containers and accept less scalability at the start.
You will still enjoy benefits of containers, such as fast deployments and the self-healing concepts of Kubernetes, just not all of them, and that is okay. It also gives you a very clear roadmap.
Wrapping up:
In this article, we have provided a brief introduction to containerisation, its benefits, and the adjustments needed to make the transition successful.