The Virtual Workload Silo-ization Syndrome (The Webscale Imperative)

Many data center environments today are plagued by what I’ll call virtualized “silo-tary confinement”: virtualized application workloads segregated into discrete pools of compute, storage, and network resources. For example, a virtual desktop environment may be configured with its own server, SAN/NAS storage, and networking hardware, while an Exchange infrastructure utilizes a totally separate pool of virtualized resources. This tendency to silo virtualized workloads drives up costs, increases management complexity, and hinders business agility, flying directly in the face of why organizations virtualized their server infrastructure in the first place.

One alternative IT consumption model that many organizations continue to embrace is converged and hyper-converged infrastructure. These rack-ready, virtualized infrastructure-in-a-box solutions make virtual machine (VM) deployments much simpler. They can also enable the multi-tenancy attributes necessary to eliminate the workload silo-ization issues described above. The challenge is that while some of these offerings can enhance resource utilization and improve efficiency, they too can become infrastructure silos over time: more efficient silos, perhaps, but fundamentally still silos.

One all-too-common scenario is for the converged platform to have plenty of excess headroom on compute and networking but to be totally maxed out on storage capacity (or some combination thereof). At that point you can either start deleting data or buy another converged platform; neither is an ideal choice, especially when expensive resources are going unused.

The fundamental problem is that it's very difficult, if not impossible, to accurately gauge your current application workload requirements for CPU, storage, and network resources, never mind predict what those requirements will be over a 12-, 24-, or 36-month timeframe.

Enter scale-out architecture. Over the last several years we’ve seen broad market adoption of scale-out storage systems to address this very issue. The idea is simple yet elegant: rather than pack as much processing, storage, and networking connectivity as possible into a single, large monolithic frame, deploy those resources across an interconnected, nodal architecture that can scale linearly, in “just-in-time” fashion, to meet storage requirements in near real time.

While scale-out storage is a great way to improve storage efficiency, the concept can be taken further by folding compute and networking resources into the same scale-out nodal design to achieve what the market has dubbed “web-scalability.” This would give IT planners a way to sustainably deploy IT resources across all the virtualized workloads in the data center. Moreover, if the system were designed so that data automatically rebalanced across the cluster, letting any node in the virtualized fabric access data rapidly without “chattiness” between nodes, then users would get great application performance and IT planners would not be burdened with performance management, tuning, workload and resource rebalancing, and so on.
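To make the rebalancing idea concrete, here is a toy consistent-hashing sketch in Python. This is an illustrative technique commonly used in scale-out systems, not Nutanix’s actual data-placement algorithm: the point is that adding a node moves only a fraction of data placements, which is what lets a cluster grow without a wholesale reshuffle.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Toy consistent-hash ring: adding a node moves only ~1/N of the keys."""
    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, node) points on the ring
        for n in nodes:
            self.add_node(n)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Each node gets many virtual points for an even spread.
        for v in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{v}"), node))
        self.ring.sort()

    def node_for(self, key):
        # A key belongs to the first ring point at or after its hash.
        idx = bisect_right(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

keys = [f"vm-disk-{i}" for i in range(10000)]
ring = HashRing(["node-1", "node-2", "node-3"])
before = {k: ring.node_for(k) for k in keys}
ring.add_node("node-4")          # scale out by one node
after = {k: ring.node_for(k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved / len(keys):.0%} of placements changed")  # roughly 1/4, not 100%
```

With four nodes, only about a quarter of the placements shift to the newcomer; the rest stay put, so the cluster absorbs the new capacity without mass data movement.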

At the risk of incurring the wrath of some folks in the virtual blogosphere, I am, of course, referring to Nutanix’s approach to hyper-converged infrastructure. Nutanix has designed its offering so that multiple enterprise workloads can operate simultaneously on the platform while virtual administrators retain all of the hypervisor features (VM migrations, HA, load balancing, etc.) that they’re accustomed to.

This means that I/O intensive applications like Oracle, SAP, Exchange, and VDI can operate concurrently on the same Nutanix cluster without I/O contention. Furthermore, IT planners don’t have to plan for disruptive upgrades as the system can gracefully scale out by adding nodes on the fly whenever there is a need for additional virtualized resources.

Nutanix has also integrated a virtualized resource management framework, called Prism, that provides in-depth analytics on performance trends and resource utilization so that administrators can more accurately predict when upgrades will be needed. Prism can also integrate with third-party tools via a REST API, so organizations can utilize Nutanix across hybrid cloud infrastructure.
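As a rough illustration of what building on that REST API might look like, the sketch below parses a utilization payload and flags storage pools nearing capacity. The endpoint path in the comment and the JSON field names here are assumptions for illustration only; consult Nutanix’s Prism REST API documentation for the real schema and authentication details.

```python
import json

def capacity_alerts(payload: str, threshold: float = 0.8):
    """Return names of storage pools whose usage exceeds `threshold`.

    The 'storage_pools' / 'usage_bytes' / 'capacity_bytes' fields are
    hypothetical, not the documented Prism response schema.
    """
    stats = json.loads(payload)
    return [p["name"] for p in stats["storage_pools"]
            if p["usage_bytes"] / p["capacity_bytes"] > threshold]

# In practice you would fetch the payload with an authenticated HTTPS GET
# against the cluster's Prism gateway (path shown is illustrative), e.g.:
#   requests.get(f"https://{prism_host}:9440/PrismGateway/services/rest/...",
#                auth=(user, password)).text
sample = json.dumps({"storage_pools": [
    {"name": "pool-a", "usage_bytes": 900, "capacity_bytes": 1000},
    {"name": "pool-b", "usage_bytes": 300, "capacity_bytes": 1000},
]})
print(capacity_alerts(sample))  # ['pool-a']
```

A monitoring tool polling an endpoint like this could raise upgrade alerts well before a cluster hits the storage wall described earlier.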

The web-scale capabilities that the titans of the tech industry (Google, Facebook, Amazon, etc.) have used for years to scale out their environments will likely become an operational imperative for businesses looking to de-silo their data centers and complete the “last mile” of their virtualization journey.

Many organizations remain reluctant to move their most critical business applications off of bare-metal infrastructure for fear that inserting them into a shared environment could compromise performance. Nutanix’s offering could be the solution that helps push those remaining workloads over the virtual goal line while giving businesses a sustainable model for achieving web scalability in their data centers.

