syrewicze_intro_00 This article was excerpted from Learn Hyper-V in a Month of Lunches by Andy Syrewicze and Richard Siddaway.



To learn more about what a hypervisor is, we're going to discuss it from the standpoint of a fictional company, XYZ Corp. Talking about hypervisors in this manner lets us take a difficult conceptual topic (like Hyper-V) and apply it to an everyday business issue, which makes it easier to see where this technology sits within your organization. Once we understand the why, we'll follow up with how this technology works from the ground up, and continue to build pieces on top of that foundation.

What is a Hypervisor?

Hypervisors have been around for some time, but they didn't see widespread adoption until several years ago, when IT professionals began to recognize the inherent benefits of virtualized workloads. VMware's ESXi product has been around, in some way, shape, or form, for the last 10 years, and some mainframe systems have been utilizing this type of technology since the 1980s! Let's look at this from a business perspective, using the example of our fictional company, XYZ Corp., which is approaching a refresh cycle. Framing it this way makes it easier to see why the industry has transitioned to virtualized workloads.

Setting the Stage

XYZ Corp. is a company that's getting ready to make some technology expenditures as part of a three-year refresh cycle. Its existing infrastructure has been in place for some time, and no virtualization technologies are present. Cost is a major concern, as is stability. The IT department has been given a limited budget for upgrades, but several of the physical machines are out of support and warranty, and a significant investment would be needed to replace each physical box with another, more current, model.


Figure 1 A list of physical servers owned and operated by XYZ Corp.

The Physical Computing Model

To date, XYZ Corp. has used the older computing model of one role or function per physical server. This methodology has several flaws. Before we go any further, let's look at the architecture. A list of servers and workloads owned and operated by XYZ Corp. is shown in figure 1.

As you can see, XYZ Corp. has five physical servers, and each server generally follows the rule of one role or feature per physical box. Sadly, this method isn't conducive to efficient use of processing power. Historically, most systems would go through life running at only 15% utilization (on average), leaving the remaining 85% of the system with nothing to do.

There was never a blanket decree that this was the way things had to be; it was an unspoken rule that many system admins arrived at. There's a lot of code running in the background on any given system, and issues are bound to come up when this code over here doesn't play well with that code over there. That can cause system outages, and downtime equals money lost. The unspoken rule of one role per machine helped prevent this from occurring.

It was an acceptable increase in cost to buy additional physical servers to fill the roles needed on the network, because at the time there wasn't any mainstream technology that allowed system admins to make better use of the hardware available. Nor was software available to enforce separation of workloads at the software level, which would prevent situations where pieces of code conflicted and created problems. Virtualization technology has solved this computing dilemma in the form of a hypervisor.

Making the Change to Virtualized Computing

With the advent of virtualization technology, XYZ Corp. could purchase one or two new physical servers that are more powerful than what's historically been needed, and place the physical workloads shown in figure 1 into isolated "virtual" containers on top of a hypervisor running on the new equipment. Yes, a more powerful server costs more than a similar, less powerful system, but not needing to purchase as many servers, or to maintain warranty and support on them, usually allows you to come in below budget.

Let's review the resources currently in use by the physical workloads, shown in table 1 below.

Table 1 Total Physical Computing Resources Configured for XYZ Corp.

Workload             CPU Cores (Physical)   Memory (GB)   Storage (GB)
Domain Controller    2                      2             80
Domain Controller    2                      2             80
Database Server      4                      8             300
Mail Server          4                      8             300
File/Print Server    2                      4             500
Total                14                     24            1260


As you can see, it appears that the company needs a good chunk of resources, but remember: these are physical resources in the existing physical infrastructure, and as such, we can expect them to be under-utilized. XYZ Corp. should do some performance monitoring on each of the various systems to determine more exactly how *much* each system is under-utilized; once the demand and some hard data are known, they can start sizing the new physical servers that the hypervisors will run on.
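That performance check can be as simple as averaging utilization samples over a representative period. Here's a minimal sketch of the idea, using hypothetical Performance Monitor samples (the numbers and the four-core server are made up for illustration):

```python
# Hypothetical CPU utilization samples (percent), e.g. exported from
# Performance Monitor for one server over a business week
samples = [12, 18, 9, 25, 14, 11, 16]

# Average utilization tells us how under-utilized the box is
avg_util = sum(samples) / len(samples)

# Peak demand (plus wiggle room) is what the virtualized workload
# actually needs, not the raw physical core count
physical_cores = 4
peak_cores_used = max(samples) / 100 * physical_cores

print(f"average utilization: {avg_util:.1f}%")
print(f"peak core demand: {peak_cores_used:.1f} of {physical_cores} cores")
```

With these sample numbers the server averages 15% utilization, matching the historical figure mentioned earlier, and its peak demand is roughly one core out of four.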

As an example, let’s say that after looking at the resource demand for each physical server listed above, XYZ Corp. determined that the *actual* resource demand for each workload (with a little wiggle room) is as shown in table 2.

Table 2 Projected Resource Demand per Workload once Virtualized for XYZ Corp.

Workload             CPU Cores (Physical)   Memory (GB)   Storage (GB)
Domain Controller    1                      2             50
Domain Controller    1                      1             50
Database Server      2                      6             200
Mail Server          2                      6             150
File/Print Server    1                      2             250
Total                7                      17            700


In this case, 7 fewer CPU cores are needed, 7 fewer GB of RAM are required, and 560 GB of storage space goes unused.
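These savings can be verified with a quick calculation over the figures from tables 1 and 2 (the workload names are shortened here for brevity):

```python
# Resources per workload: (CPU cores, RAM in GB, storage in GB)
physical = {
    "DC1": (2, 2, 80), "DC2": (2, 2, 80),
    "Database": (4, 8, 300), "Mail": (4, 8, 300),
    "File/Print": (2, 4, 500),
}
virtualized = {
    "DC1": (1, 2, 50), "DC2": (1, 1, 50),
    "Database": (2, 6, 200), "Mail": (2, 6, 150),
    "File/Print": (1, 2, 250),
}

def totals(workloads):
    # Sum each resource column (cores, RAM, storage) across all workloads
    return tuple(sum(col) for col in zip(*workloads.values()))

phys_total = totals(physical)      # matches the Total row of table 1
virt_total = totals(virtualized)   # matches the Total row of table 2
savings = tuple(p - v for p, v in zip(phys_total, virt_total))
print(savings)  # (7, 7, 560): cores, GB of RAM, GB of storage saved
```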

Without getting too deep into the weeds: taking into account the resource requirements listed in table 2, and adding the resource overhead required to run the hypervisor itself (Hyper-V in this case), XYZ Corp. now knows how powerful a server (or servers) to procure. These new physical servers will individually be more powerful than the previous five, but will together do the work that all five did previously.

Additionally, it'd be wise for XYZ Corp. to size in an additional 20%–40% of resources to account for future growth, but this is by no means an industry-defined percentage. The needs of the organization will dictate it more than anything.
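Putting those two steps together, a rough sizing calculation looks like this. The demand figures come from table 2; the hypervisor-overhead numbers are illustrative placeholders, not official Hyper-V requirements, and 30% growth is just the midpoint of the range above:

```python
# Workload demand totals from table 2: cores, GB of RAM, GB of storage
demand_cores, demand_ram_gb, demand_storage_gb = 7, 17, 700

# Hypothetical overhead for running the hypervisor itself
# (placeholder values for illustration only)
host_cores, host_ram_gb, host_storage_gb = 2, 4, 40

def size_host(demand, overhead, growth_pct=30):
    # Add the hypervisor overhead, then a growth margin (20-40% per the
    # text); integer ceiling division rounds up to whole units
    total = (demand + overhead) * (100 + growth_pct)
    return -(-total // 100)

print(size_host(demand_cores, host_cores), "CPU cores")
print(size_host(demand_ram_gb, host_ram_gb), "GB of RAM")
print(size_host(demand_storage_gb, host_storage_gb), "GB of storage")
```

With these placeholder numbers, the target comes out to 12 cores, 28 GB of RAM, and 962 GB of storage, spread across one or two hosts.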

The Virtualized Computing Model

Once the new hardware is purchased and arrives, Hyper-V will be installed on it, and the workloads will either be migrated in their existing state into virtual machines, or migrated in a more traditional manner if the OS and/or installed software is being upgraded as well.

During this process, the hypervisor will allow XYZ Corp. to allocate each of their core computing resources (CPU, Memory, Storage, and Networking) to each virtualized workload. XYZ Corp. would allocate these resources using the projected resource demand they determined earlier in table 2.

Once the migration is completed, each workload runs entirely on one or two new physical servers, each in its own isolated virtual environment. Keep in mind that, in this new configuration, every entity on the XYZ Corp. network that could talk to the workloads before is still able to. Nothing has changed from a communications perspective: the workloads are still reachable on the network, and endpoints won't know the difference. Everything is business as usual for XYZ Corp.'s end users.

Once XYZ Corp. has fully virtualized its computing workload, the server layout would look like the layout in figure 2.


Figure 2 XYZ Corp. – Computing workload now virtualized

As you can see, the physical server footprint has been reduced to two physical servers, with the five previously physical workloads running as virtual machines on top of them. These same workloads continue to function as though they were still running on their own physical boxes. Again, this is the core function of a hypervisor.

It isn't shown in this example, but a question that could arise during a project like this is whether all of the company's workloads are compatible with virtualization. Some workloads shouldn't be virtualized, but they're becoming fewer and farther between: software vendors see the mass adoption of virtualization and work to make sure their products function on virtualized platforms, if they don't already. If you're unsure, it's best to check with each individual software vendor on a case-by-case basis.

Try out Hyper-V for yourself!

Find out more about Hyper-V by downloading the FREE first chapter of Learn Hyper-V in a Month of Lunches.