Recently I tweeted about the extremely impressive cost-performance I/O numbers (both latency and bandwidth) that DataCore has achieved in recent SPC-1 testing with its Parallel Server offering, but I want to drill a little deeper into what the results mean for our industry. Before I reiterate a few of the highlights and get to my comments, let me make it clear that I absolutely understand the limitations of testing conducted under the auspices of the SPC parameters. Of course, real-world data center numbers are not going to match laboratory tests; but equally, this testing does give an idea of the relative attributes across and between systems.
If you like, it's akin to the MPG rating on a new car: we know that neither the 56 MPG car nor the 21 MPG car is likely to achieve its rated number often (or at all), but we also know that the former is going to be far more fuel-efficient (most likely by 2-3x) than the latter. Standardized tests such as the SPC's are based on a level playing field, typical applications, and agreed, declared parameters. Moreover, the SPC tests are accepted and used by just about everyone in the business, so they are at least a fair yardstick for relative real-world comparisons, even if not a guarantee of absolute real-world delivery. OK, enough on that!