Recently I tweeted about the extremely impressive cost-performance I/O numbers (both latency and bandwidth) that DataCore has achieved in recent SPC-1 testing with its Parallel Server offering, but I want to drill a little deeper into what the results mean for our industry. Before I reiterate a few of the highlights and get to my comments, let me make it clear that I absolutely understand the limitations of testing such as that done under the auspices of the SPC parameters. Of course we all know that real-world data center numbers are not going to match laboratory tests; but equally, this testing does give an idea of the relative attributes across systems.
If you like, it’s akin to the MPG rating on a new car — we know that neither the 56 MPG car nor the 21 MPG car is really likely to achieve those numbers often (or at all), but we also know that the former is going to be far more fuel-efficient (most likely by 2-3x) than the latter. Standardized tests, such as the SPC ones, are based on a level playing field, typical apps, and agreed/declared parameters. Moreover, the SPC tests are agreed to and used by just about everyone in the business, so they are at least a fair yardstick for relative real-world comparisons, even if not a guarantee of absolute real-world delivery. OK, enough on that!
So what has DataCore achieved? Based on its “secret sauce” of parallel I/O processing, it has delivered the third highest number of SPC-1 IOPS ever (ahead of some impressive all-flash arrays and trailing only two mammoth-sized and mammoth-priced alternatives). Perhaps more significantly (certainly for the vast majority of users that don’t actually need millions of IOPS), DataCore is delivering those IOPS with stunning, second-to-none low latency and cost-per-IOPS. What do we learn from this?
- Clearly DataCore doesn’t understand the “rules” of the storage world! Smaller vendors — even ones with a rich history of innovation and a sizeable field-proven user base like DataCore — are not supposed to do things like this. It’s either the “big dog” or “edge-case-special-use-propeller-head-outlier” vendors that get the privilege of leading the pack in such abilities. DataCore is neither, but has nonetheless gone ahead and done it.
I expect the storage industry equivalent of an HOA-rep will be making a call to the DataCore HQ in Florida to point out how it has transgressed the unwritten rules of its covenant-controlled community!
- At a much higher level, and from a longer-term perspective, DataCore’s achievement reminds me of something I think many of us know in an academic sense but forget in the day-to-day flurry of IT life: we have never “arrived” in IT; instead, we are always en route. And the room for improvement is not only continuous but can be dramatic.
I have been around storage for many decades; yet even when I was new to the business the end of HDDs (they were 14” in diameter then!) was foretold as imminent, due to such things as the super-paramagnetic effect. Three decades later, with vertical recording, new materials, shingling, HAMR and so on, it would seem the HDD providers didn’t get that memo! Remember when there was a “rule” that each storage admin could manage only a set amount of capacity!? Today that sounds as crazy as having a person with a red flag walk in front of early trains and cars!
- Another reminder is that marginal improvement — or being just a bit better than something current — doesn’t really cut it in storage or IT, whatever we might like to think. Users do not really sit up and get truly excited about a 5 or 10% improvement in anything. Yeah, of course it’s nice, and they continually demand and enjoy such incremental improvement, but it is not the sort of percentage improvement that causes change, of minds or of actions.
You really need to be 5x or 10x “better” (whatever the measure) to drive awareness and change. DataCore is delivering I/O at an average latency of 0.10 ms (at 100% system load) when just about everyone else is at a millisecond or more; while in terms of cost-per-IOPS, DataCore, being based on “vanilla” hardware, beats its nearest rivals (all, not surprisingly, custom “big iron” systems) by anything from 7x to 17x, and those are just the nearest challengers!
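To make the cost-per-IOPS arithmetic concrete, here is a minimal sketch of how that comparison works. SPC-1 price-performance is simply the total tested system price divided by the SPC-1 IOPS achieved; the dollar and IOPS figures below are purely illustrative placeholders, not the actual SPC-1 submissions.

```python
# Illustrative cost-per-IOPS comparison. All figures are made-up placeholders,
# NOT the actual SPC-1 results for any vendor.

def cost_per_iops(total_price_usd: float, spc1_iops: float) -> float:
    """Dollars per SPC-1 IOPS for a tested configuration."""
    return total_price_usd / spc1_iops

# Hypothetical commodity-hardware system vs. a hypothetical "big iron" array.
commodity = cost_per_iops(total_price_usd=500_000, spc1_iops=5_000_000)
big_iron = cost_per_iops(total_price_usd=3_000_000, spc1_iops=2_000_000)

print(f"commodity: ${commodity:.3f}/IOPS")        # $0.100/IOPS
print(f"big iron:  ${big_iron:.3f}/IOPS")         # $1.500/IOPS
print(f"advantage: {big_iron / commodity:.0f}x")  # 15x
```

With these placeholder numbers the commodity-hardware system comes out roughly 15x cheaper per IOPS, which is in the same ballpark as the 7x-17x spread cited above.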
- DataCore is doing us a favor by reminding us not only of what’s possible, but also that what’s possible can be truly dramatic, and come from materials (today’s hardware) and places (yes, Fort Lauderdale) that might surprise us. Certainly the big shifts in storage and IT — cloud, SDS, and convergence — matter a lot, but the path forward that DataCore is lighting is one that could be usefully deployed in all those models, and clearly demonstrates that value can be achieved in many ways.
Cages are definitely rattled!