According to my colleagues Mike Leone, Edwin Yuen, and Terri McClure, organizations are now confident enough in HCI to deploy it as their primary infrastructure, housing their tier-1 applications. Thus, buying criteria have evolved from answering "Can this offering support my applications?" to "How well can it support them?"
Of course, how well an HCI solution supports your applications is related to performance, and the three important performance criteria are:
- Speed: How fast can my applications perform on a shared infrastructure?
- Scalability: What happens as my applications and supporting infrastructure grow?
- Stability: How is my application impacted if a failure occurs?
How do we realistically evaluate these distinct aspects of performance?
One method we use is to configure lightweight workload generators such as IOmeter, FIO, Diskspd, or VDbench to simulate the behavior of business applications. While we can simulate a variety of HCI workloads, including databases, file servers, and email, these tools are designed to evaluate the raw performance of the storage subsystem with minimal stress on CPU and memory. An additional challenge with these tools is building the testing environment and automating the execution and collection of benchmarks running on multiple virtual machines in parallel. HCIBench, a VDbench automation wrapper from VMware, aims to solve that problem.
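To make this concrete, here is a sketch of an FIO job file that approximates an OLTP-style I/O pattern: small-block random I/O with a read-heavy mix. The specific values (block size, queue depth, mix ratio, runtime) are illustrative assumptions, not a standard profile; tune them to match the application you are modeling.

```ini
; Hypothetical fio job approximating an OLTP-like I/O pattern:
; 4 KiB random I/O, 70% reads / 30% writes, moderate queue depth.
; Adjust size, runtime, and target paths for your environment.
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=32
runtime=300
time_based=1
group_reporting=1

[oltp-sim]
rw=randrw
rwmixread=70
numjobs=4
size=10g
```

Run with `fio oltp-sim.job` on each test VM. Note that this exercises only the storage path; as discussed above, CPU and memory remain largely idle, which is exactly the limitation of this class of tool.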
We also use higher-level workload generators such as HammerDB, Swingbench, and SLOB. These tools drive actual database engines with software that simulates typical database workloads (e.g., OLTP, data mining). These workload generators do a better job of emulating real-world HCI workloads by exercising the entire stack, including CPU and memory. Other high-level workload generators that we use include Login VSI, which is designed to benchmark virtual desktop infrastructure (VDI), and Jetstress, which emulates a Microsoft Exchange server environment. While high-level workload generators can be used for stress testing the entire HCI stack, they're not purpose-built for HCI testing.
With the recent release of TPCx-HCI from the Transaction Processing Performance Council (TPC), we now have a benchmarking tool that's specifically designed to measure HCI performance. TPCx-HCI simultaneously stresses the critical components of HCI, providing a much-needed faithful rendition of real-world HCI workloads. TPCx-HCI runs high-level database workloads on each individual virtual machine, varying the load per VM while maintaining a constant aggregate load across the entire cluster, mimicking real-world unbalanced scenarios and measuring HCI cluster speed. As is the case for all TPC benchmarks, TPCx-HCI results will include price/performance data and will be industry audited before they are published.
Historically, the TPC has published benchmark specifications but not the tools themselves. The TPC has departed from that tradition by creating a freely available test harness that automates the setup, execution, and collection of TPCx-HCI benchmark results. This simplifies and standardizes the process of submitting and publishing industry-audited results and should shorten the time to the first publicly available benchmark results.
With TPCx-HCI in our toolbox, we can evaluate and compare HCI speed, scalability, and stability, along with other HCI-specific concerns such as noisy neighbors, HCI operating system overhead, and the impact of storage cluster rebuilds. Follow us to see how the latest HCI solutions fare in this new approach to measuring performance in real-world environments. Learn more about TPCx-HCI and keep an eye out for the first publicly available benchmark results here.