Checklist part 1: Choose your strategy before you migrate to Azure

Before you start your Azure journey, it’s important to take a step back and think about what you’re trying to achieve. The first step to successfully modernizing your solution is to determine your strategy, both short term and long term.

Initially this has little to do with your actual cloud solution. To determine your IT strategy, you generally take into consideration what your customers want, where they are, the financial picture and much more. To sum it up: where is your business going in the next few years, and how are you going to support that from an IT perspective? And somewhere in that multi-year, well-thought-out strategy there’s a plan for continuously improving your solution to add value, innovate and stay ahead of the competition.

Strategy on modernizing

When it comes to public clouds, there are many reasons you could (and should!) modernize your solution using the latest technology these platforms provide. For one, implementing the newest technology into your solution is much easier and takes less effort, and the financial picture is far more attractive compared to building everything from scratch.

Before you onboard your solution, it’s time to think this through. Firstly, it’s important to understand the different scenarios that determine your cloud strategy and architecture. Secondly, you need to determine how much you are willing to invest up front when making the move to the public cloud. For example: are you going to rehost (lift and shift) and stick with virtual machines for now, or are you going to rewrite pieces of the solution to be a better fit for Azure-native platforms such as Web Apps or Functions?

Different strategies come with different benefits, as long as you understand that modernizing your solution is an ongoing process that requires sustained financial commitment and the right technologies.

So if we look at those scenarios, we can best describe them as follows:

  • “On and off” scenario;
  • “Growing fast” scenario;
  • “Unpredictable bursting” scenario;
  • “Predictable bursting” scenario.

Then if we look at the steps for modernization, we can choose from the following migration strategies:

  • Rehost (lift and shift);
  • Refactor (repackage);
  • Rearchitect (optimize your code);
  • Rebuild (build cloud native).

Of course, you are not required to follow these steps in order; you can choose where to start and where you want to end up. Perhaps you want to skip the lift-and-shift scenario and start with refactoring your application, for example by repackaging it to run in containers.

Each step is described in depth at https://azure.microsoft.com/en-us/migration/get-started/#migrate

Let’s take a look at the four scenarios that will help determine the best modernization strategy for your solution. As mentioned before, it’s important to understand your business, your customers and the financial picture of your current situation. After all, we want to compare apples with apples, so get them apples! Gather telemetry (we will do an extensive write-up on how to do this later in this series), make sure your documentation is in order and, last but not least, identify your bottlenecks (migrating to the cloud must add value, right?).

On and off scenario

The on and off scenario is typically the real money saver if you’re coming from a lift-and-shift migration and are running virtual machines (Infrastructure as a Service). With Microsoft Azure you only pay for what you use or provision. For compute resources such as virtual machines you pay by the hour, and the hours start counting as soon as the virtual machine resources are allocated. Depending on the month, we’re talking approximately 720 to 744 hours (February excluded).

Let’s say you have an environment where usage peaks during the day and drops outside of office hours (or during the night). That means you need to provide your solution with the best possible resources during those peak hours, and far less during off hours. If you deallocate those resources during the off hours, you can greatly reduce the cost of keeping your solution up and running.

For instance, a virtual machine costing $0.088 per hour running for the full month would set you back approximately $65 per month. If you had a pool of these machines running (let’s say six) to serve your customers, this would end up at approximately $390 per month. If you create insight into your usage and understand your customer base (and their behavior), you can greatly lower this number. Now let’s work out the following scenario based on (fictional) usage:

  • Peak times: 09:00–17:00
  • Average usage: 06:00–09:00 and 17:00–21:00
  • Low usage: 21:00–06:00

If we were to implement the on and off scenario, this would look like the following:

| Usage window | Hours per day | Hours per month (31 days) | Number of VMs required | Total price ($0.088 per hour per VM) |
|---|---|---|---|---|
| Peak hours | 8 | 248 | 6 | $130.94 |
| Average usage | 7 | 217 | 4 | $76.38 |
| Low usage | 9 | 279 | 2 | $49.10 |
| **Total** | | | | **$256.43** |
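
If you’d like to check the math, a few lines of Python reproduce the table (the hourly rate and usage windows are the fictional values from above):

```python
# Reproduce the cost table above: 31-day month at $0.088 per hour per VM.
HOURLY_RATE = 0.088  # USD per hour per VM
DAYS = 31

windows = [
    ("Peak hours",    8, 6),  # (label, hours per day, VMs required)
    ("Average usage", 7, 4),
    ("Low usage",     9, 2),
]

total = 0.0
for label, hours_per_day, vms in windows:
    monthly_hours = hours_per_day * DAYS
    cost = monthly_hours * vms * HOURLY_RATE
    total += cost
    print(f"{label}: {monthly_hours} hours x {vms} VMs = ${cost:.2f}")

always_on = 24 * DAYS * 6 * HOURLY_RATE  # the 6-VM pool running full time
print(f"Total: ${total:.2f} (vs. ${always_on:.2f} always on)")
```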

That’s a pretty big difference, and if you really spend time researching the usage of your platform, you could scale this back even further or take the path of automation. This also works great for batch processing (if you know when your batches run).
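
If you take the automation path, a minimal sketch with the Azure SDK for Python could look like the following. The subscription, resource group and VM names are placeholders, and in practice you would trigger this from a scheduler (Azure Automation, or a timer-triggered Function) at the window boundaries:

```python
# Minimal sketch: deallocate part of a VM pool off-hours and start it for peak hours.
# Subscription ID, resource group and VM names are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-solution-rg"               # hypothetical resource group
VM_POOL = [f"worker-vm-{i}" for i in range(6)]  # the 6-VM pool from the example

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def scale_pool_to(target: int) -> None:
    """Start the first `target` VMs and deallocate the rest."""
    for i, vm_name in enumerate(VM_POOL):
        if i < target:
            client.virtual_machines.begin_start(RESOURCE_GROUP, vm_name)
        else:
            # Deallocate (not just stop): only then do the compute hours stop counting.
            client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm_name)

scale_pool_to(6)  # peak hours: 09:00-17:00
# scale_pool_to(4) for average usage, scale_pool_to(2) for low usage
```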

Growing fast scenario

Growing or scaling your solution can be a complicated process, and many companies find it quite a challenge to keep up with their growth. So what we need is on-demand scaling. Let’s separate scaling from growth. If we’re talking about scaling, we’re looking at allocating the right amount of resources at the right time. Growth, on the other hand, has little to do with actual scaling (yes, you do need the right amount of resources); we’re talking about deploying new versions of your software, or identical versions to different regions (adding customers globally). This scenario applies to pretty much every company that wants to improve its time-to-market and deploy software close to its customers (for governance or latency purposes) without going through the manual installation and customer onboarding process you are probably doing right now.

If this is what you’re looking for, then automation and standardization are key. We’re not settling for 95% of the deployment being automated (those last few manual steps are usually the ones that consume the most time); we’re going for 100%. This requires everything in your deployment process to be standardized, from deployment templates to license files. Even though this sounds simple, the opposite is true. Standardization is probably the hardest thing to achieve in IT, but it is worth it. We’ve seen customers go from a 2-3 week deployment time to a customer onboarding time of just under 5 minutes.
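
What does 100% automated look like in practice? As a rough sketch (not a definitive pipeline), the Azure SDK for Python can kick off a standardized ARM template deployment per customer; the template file, resource group and parameter names here are hypothetical:

```python
# Minimal sketch: deploy a standardized ARM template as part of customer onboarding.
# Template file, resource group and parameter values are hypothetical.
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

SUBSCRIPTION_ID = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

with open("customer-environment.json") as f:  # the standardized template
    template = json.load(f)

deployment = Deployment(
    properties=DeploymentProperties(
        mode="Incremental",
        template=template,
        parameters={"customerName": {"value": "contoso"}},  # per-customer input
    )
)

# begin_create_or_update returns a poller; .result() waits for completion.
client.deployments.begin_create_or_update(
    "customer-contoso-rg", "onboard-contoso", deployment
).result()
```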

As mentioned before, if this is what you need: invest in standardization.

Predictable and unpredictable bursting

These two scenarios go hand in hand and probably have the biggest impact on which platforms you choose to implement when modernizing your application. It all comes down to understanding your user and application behavior. If it is likely you are going to experience an unexpected peak in demand (unpredictable bursting), then you need a platform that can handle this for you, as fast as possible. This is where a big part of the Microsoft Azure serverless proposition comes into play. Serverless services scale virtually without limit, and automatically, without you having to wait for resources to be provisioned (take a look at https://azure.microsoft.com/en-us/overview/serverless-computing/). When does this happen, you say? Well, for instance, if your platform is used to support first responders in scenarios you can’t predict (disasters or EMTs, to name a few).
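
To give you a feel for it, here is a minimal HTTP-triggered Azure Function using the Python programming model; note that the code contains no capacity logic whatsoever, because the platform allocates instances on demand (the route and payload are made up for illustration):

```python
# Minimal HTTP-triggered Azure Function (Python v2 programming model).
# No capacity management here: the platform scales instances automatically,
# which is what makes unpredictable bursts survivable.
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="incident")  # hypothetical route for a first-responder API
def report_incident(req: func.HttpRequest) -> func.HttpResponse:
    body = req.get_json()
    # ... persist the incident, notify responders, etc. ...
    return func.HttpResponse(f"Incident {body.get('id')} received", status_code=202)
```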

Predictable bursting, on the other hand, is where you know what’s going to happen (peak demand) but you’re not really sure when (if you do know when, then the on and off scenario is for you!). Predictable bursting can be done with virtually any service on Azure. Combined with the right telemetry (for example using Application Insights), you can define a minimum and maximum amount of resources. The bandwidth in between can be used to scale based on the usage of your application, and when peak demand hits, Azure will automatically scale your instances to the required amount of resources. By defining a minimum and maximum amount of resources and setting a threshold on when to scale up and down (or in and out), you keep your finances in check.
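
As a sketch of what such a rule could look like (the resource IDs, CPU thresholds and instance counts below are illustrative; in practice you would derive them from your telemetry), here is an autoscale setting for a virtual machine scale set created with the Azure SDK for Python:

```python
# Minimal sketch: autoscale a VM scale set between 2 and 6 instances on CPU load.
# Subscription, resource group and scale-set names are placeholders; the
# thresholds are illustrative and should come from your own telemetry.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleProfile, AutoscaleSettingResource, MetricTrigger,
    ScaleAction, ScaleCapacity, ScaleRule,
)

SUBSCRIPTION_ID = "<subscription-id>"
TARGET = ("/subscriptions/<subscription-id>/resourceGroups/my-solution-rg"
          "/providers/Microsoft.Compute/virtualMachineScaleSets/worker-pool")

def cpu_rule(operator: str, threshold: float, direction: str) -> ScaleRule:
    """Scale out/in by one instance when average CPU crosses the threshold."""
    return ScaleRule(
        metric_trigger=MetricTrigger(
            metric_name="Percentage CPU",
            metric_resource_uri=TARGET,
            time_grain=timedelta(minutes=1),
            statistic="Average",
            time_window=timedelta(minutes=5),
            time_aggregation="Average",
            operator=operator,
            threshold=threshold,
        ),
        scale_action=ScaleAction(
            direction=direction, type="ChangeCount",
            value="1", cooldown=timedelta(minutes=5),
        ),
    )

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
client.autoscale_settings.create_or_update(
    "my-solution-rg", "worker-pool-autoscale",
    AutoscaleSettingResource(
        location="westeurope",
        target_resource_uri=TARGET,
        enabled=True,
        profiles=[AutoscaleProfile(
            name="default",
            capacity=ScaleCapacity(minimum="2", maximum="6", default="2"),
            rules=[cpu_rule("GreaterThan", 70.0, "Increase"),
                   cpu_rule("LessThan", 30.0, "Decrease")],
        )],
    ),
)
```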

Wrap-up

Choosing the right scenario will determine your best strategy. Are you going with a lift and shift, and is that right for you short-term? Or are you looking at more complex scenarios that add more value but require a bigger initial investment to get started? Based on where you are now and where you want to go, you can pick the right place to start. Whether you are lifting and shifting or rebuilding your application, you need to identify and investigate the different capabilities that Azure provides in order to determine your architecture. Not doing so will result in a short-term solution where you are probably adding customer value, but taking the next steps will require a complete overhaul of your platform. Thinking it through, both short and long term, will help you build a future-proof solution and stay ahead of the competition.