As I discussed in Motive, Means, and Opportunity for Evaluating HCI Performance, performance is a key HCI buying criterion.
ESG’s technical validation practice uses a variety of tools to measure theoretical performance, focusing on understanding how the HCI system handles different workloads and where potential bottlenecks can arise. Using simple tools, we can quickly exercise a single resource to evaluate maximum capabilities — for example, the storage subsystem’s transactional performance, data throughput, and response time.
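To illustrate the idea of exercising a single resource, here is a minimal Python sketch of a storage micro-benchmark. This is a hypothetical example for illustration only, not one of ESG's actual tools: it times fixed-size reads against a scratch file and reports throughput, I/O rate, and mean response time.

```python
# Hypothetical single-resource storage micro-benchmark (illustrative only).
# Times small sequential reads against a scratch file and reports
# throughput (MB/s), IOPS, and mean per-read latency (microseconds).
import os
import time
import tempfile

BLOCK = 4096    # 4 KiB per read
FILE_MB = 16    # scratch file size in MiB

def run_microbench():
    # Create a scratch file filled with random data.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(FILE_MB * 1024 * 1024))
        path = f.name
    latencies = []
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            t0 = time.perf_counter()
            chunk = f.read(BLOCK)
            latencies.append(time.perf_counter() - t0)
            if not chunk:
                break
    elapsed = time.perf_counter() - start
    os.unlink(path)
    ios = len(latencies)
    return {
        "throughput_mb_s": FILE_MB / elapsed,
        "iops": ios / elapsed,
        "mean_latency_us": 1e6 * sum(latencies) / ios,
    }

if __name__ == "__main__":
    for metric, value in run_microbench().items():
        print(f"{metric}: {value:.1f}")
```

In practice, purpose-built tools exercise the storage subsystem far more rigorously (random access patterns, queue depths, direct I/O that bypasses the filesystem cache), but the principle is the same: isolate one resource and measure its maximum transactional rate, throughput, and response time.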
Measuring a single resource provides an incomplete picture of HCI performance since these environments share resources. Thus, we use theoretical performance results to set expectations for real-world performance measurements, where we focus on understanding how the HCI system performs running real applications and workloads, such as OLTP databases. The challenge with real-world measurements is the time and effort required to properly configure, validate, and measure the performance with real datasets.
We have measured both theoretical and real-world performance of converged and hyperconverged solutions, including Pivot3, Nutanix, Dell EMC XC, Cisco HyperFlex, Cohesity, Microsoft HCI, EMC ScaleIO, VMware Virtual SAN, IBM VersaStack, Riverbed SteelFusion, Cisco Springpath, and EMC Vblock.
The recent release of TPCx-HCI from the Transaction Processing Performance Council (TPC) provides us with a new tool designed specifically for measuring real-world HCI performance. In addition to the workload specification, TPCx-HCI includes the application and test harness, greatly reducing the time and effort necessary to collect real-world performance measurements.
Follow us and visit our interactive research portal to find the results of our latest HCI performance evaluations, as well as our validations of cloud, networking, storage, cybersecurity, data management, big data, and data protection solutions.