Organizations are being told to digitally transform, to become more agile, and to respond to the business faster in order to survive in a highly competitive market. One way organizations are digitally transforming is by modernizing their infrastructure, shifting from a traditional 3-tier architecture to a solution that integrates compute, storage, networking, and virtualization. Such a solution must deliver a more cloud-like experience on-premises, making the eventual transition to the cloud easier, or better yet, enabling organizations to confidently move cloud-native applications from the public cloud back to an on-premises private cloud. Both converged infrastructure (CI) and hyperconverged infrastructure (HCI) fit the bill, but what is driving organizations to pick one over the other?
In our latest ESG research covering both CI and HCI, we asked midmarket (100-999 employees) and enterprise-class (1,000+ employees) organizations why they chose one over the other, and the results were interesting, to say the least.
Why CI over HCI?
For organizations currently leveraging CI, it’s about maintaining what has worked for IT in the past without disrupting established processes and workflows. CI offers organizations a way to continue to use the traditional 3-tier approach they know and love, but in a more consolidated way within a single cabinet. And of course, the potential to reduce resource-specific management silos while continuing to meet the scalability, reliability, and performance requirements of traditional mission-critical applications is very appealing. Additionally, because resources are not directly tied to virtualization, hardware can be allocated or repurposed as independent resources outside of a virtualized environment.
Why HCI over CI?
Adopters of HCI appreciate the simplicity of deploying and managing a tightly integrated infrastructure anchored in software-defined constructs. Early adopters specifically were keen on leveraging initial deployments to handle what are deemed tier-2 workloads (think VDI or email). In places where space is limited, IT staff is limited, and application demands are less critical to the business, such as remote office/branch office locations, HCI works well and will continue to do so. Finally, the entry-level cost of buying a two- or three-node system is far more appealing to organizations looking to start small and grow over time.
Now, it’s important to note that there is more to the story. For example, while HCI is thought of as more appropriate for supporting tier-2 workloads, that does not mean CI is unable to handle them. In fact, CI handles tier-2 workloads just as well, if not better in some cases, but if those are the only workloads you need to support, CI may be overkill. In other words, you wouldn’t buy an exotic sports car that goes 0-60 in 3 seconds if the only place you drove was side streets with lots of stop signs. Conversely, HCI may be a more cost-effective platform for tier-2 workloads, but that doesn’t mean it can’t handle the strict SLAs of mission-critical applications. As HCI continues to mature and prove it can meet the requirements of mission-critical workloads, more organizations are running tier-2 and tier-1 workloads, like mission-critical databases, side by side on the same platform.
The most important takeaway in all of this is that a winner will not be declared any time soon, if ever. CI and HCI will continue to coexist for many years to come.