Over the last few years, hyper-converged systems have become all the rage, and for good reason. These systems can provide a number of benefits that go a long way toward making an administrator’s life easier. Even so, hyper-convergence isn’t necessarily the perfect solution. As is the case for any other IT solution, there are some disadvantages to adopting hyper-converged infrastructure. This article runs down some of the more significant pros and cons associated with the adoption of hyper-convergence.

Simplified IT

One thing that’s always amazed me about IT is how time-consuming it is to deploy a new server. I’m not talking about the act of installing an OS and an application; that part is relatively easy. I’m talking about all the planning, budgeting and so on.

One of the big selling points for hyper-converged systems is that they’re designed to greatly simplify the deployment process. All of the hardware (compute, network and storage) is sold as a cohesive bundle (along with a hypervisor and management software). The hardware is performance-matched by the vendor to prevent any single hardware component from becoming a major performance bottleneck. The hardware components are also certified to work together, which removes any concerns about compatibility.

Hyper-converged systems are also designed to be easy to deploy. I wouldn’t go so far as to describe the deployment process as plug-and-play, but there is typically some sort of automated setup process that helps to deploy the hypervisor, management tools and any other software that might have been included with the system. Each vendor has its own way of doing things, but the initial deployment and configuration processes are usually simpler than the typical experience with a do-it-yourself system.

While I’m on the subject of simplification, some hyper-converged system vendors also attempt to simplify ongoing maintenance. Almost every software vendor provides periodic patches, and these patches have the potential to break things.

Some hyper-converged system vendors attempt to mitigate this problem by periodically providing updates that include any required patches, and that have been tested and certified to work with the hyper-converged systems. You still obviously have to handle your own VM-level patching, but it’s great when a vendor takes the work out of lower-level patching.

Eliminate Vendor Finger-Pointing

Some people I’ve talked to in recent years consider this one to be relatively insignificant, but I actually think it’s one of the greatest benefits of migrating to a hyper-converged environment.

Although things have gotten a little bit better over time (at least in my experience), there was a point several years ago when vendor finger-pointing almost seemed to be the norm. What would start out as a call to an application vendor over a seemingly simple problem would result in the vendor blaming the OS configuration. A call to the OS vendor would result in them blaming the application vendor or the hardware vendor. Of course, the hardware vendor would dutifully blame the OS vendor. As many know from firsthand experience, all this vendor blame-shifting wastes a lot of time and often results in IT having to figure out its own solution to whatever problem it’s experiencing.

Hyper-converged systems eliminate a lot of this. Although the systems include hardware, a hypervisor and management software, the system is sold as a single product. This means that administrators only have to deal with one vendor, so if something goes wrong, there’s one point of contact for technical support. All the hardware and software is certified to work together, so vendor finger-pointing should be kept to a minimum.

Keep in mind, however, that migrating to a hyper-converged system probably isn’t going to completely eliminate blame-shifting, because hyper-converged systems aren’t usually bundled with line-of-business applications or guest OSes. Even so, application vendors typically make OS requirements very clear, and OS vendors certify their wares to function with specific hypervisors, so the finger-pointing should hopefully be kept to a minimum.

Predictable Scalability

Another big benefit in migrating to hyper-converged systems is that they deliver predictable performance, scalability and costs. The reason for this has to do with the use of standardized hardware.

When I talk to people about hyper-convergence and standardized hardware, I like to use the analogy of video game consoles. Whenever I purchase a new game for my Xbox One, I don’t have to wonder whether the Xbox meets the game’s minimum hardware requirements. The gaming console uses standardized hardware and, therefore, is guaranteed to work with the game. Furthermore, if I want to take the game to a friend’s house, I know that the gameplay will be exactly the same on their Xbox as it is on mine, because both devices use the same internal hardware.

This same basic concept applies to hyper-converged systems. They consist of standardized modules, which can be nodes or appliances, depending on which vendor’s system you’re using. The nice thing about this approach is that it allows you to purchase the needed capacity and then add additional capacity later on (if you end up needing it) simply by installing some additional modules. This approach has been described by some as “pay as you grow” because it allows organizations to purchase capacity on an as-needed basis.

Now, for the “predictability” part. If each module consists of standardized hardware, it becomes very easy to predict both the cost of an upgrade and the additional performance or capacity that will be achieved by the upgrade. This is especially true for virtual desktop infrastructure (VDI) deployments.

I’m simplifying things for the sake of this example, but in a VDI environment, an administrator might determine that each node in their hyper-converged system is able to host a specific number of virtual desktops. Now, suppose that the organization decides to hire some new employees and needs to host 200 new virtual desktops. Because the nodes within the hyper-converged system are standardized, the administrator knows exactly how many virtual desktops each node will be able to accommodate, as well as the level of performance that can be expected and the cost per virtual desktop. The use of standardized hardware makes for completely predictable scalability (at least in this case).
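The capacity math described above is simple enough to sketch in a few lines of code. The figures below (50 desktops per node, $25,000 per node) are hypothetical placeholders, not numbers from any real vendor; the point is only that with standardized nodes, the upgrade cost and capacity follow mechanically from one measured per-node figure.

```python
import math

def plan_vdi_expansion(new_desktops, desktops_per_node, cost_per_node):
    """Estimate nodes, total cost, and cost per desktop for a VDI expansion.

    desktops_per_node is the observed capacity of one standardized node;
    because every node is identical, the estimate scales linearly.
    """
    nodes = math.ceil(new_desktops / desktops_per_node)  # round up: partial nodes aren't sold
    total_cost = nodes * cost_per_node
    cost_per_desktop = total_cost / new_desktops
    return nodes, total_cost, cost_per_desktop

# Hypothetical example: 200 new desktops, 50 desktops per node, $25,000 per node
nodes, total, per_desktop = plan_vdi_expansion(200, 50, 25_000)
print(nodes, total, per_desktop)  # 4 100000 500.0
```

Note that rounding up to whole nodes means the effective cost per desktop improves as the headcount approaches a node boundary, which is the usual trade-off in any “pay as you grow” model.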

Hyper-Convergence: 3 Reasons to Move