From rattling cages to moving goalposts — DataCore & the storage industry

Not so long ago, I wrote a blog about the stupendous industry benchmark numbers that DataCore had achieved. Well, they've just recently outdone themselves. More on that in a moment, but this blog has a wider intent than just noting those "stupendouser" (!) numbers.

What I've wanted to remind us all of for a while now is the tremendous ability we have — in IT generally and storage specifically — to surpass what most of us even think is realistically possible in the short term. Many people like to point out the almost comically small amount of computing power on the Apollo moon landing missions, or the fact that we now have exponentially more computing ability and connectivity just on our wrists (and with a choice of colors!).

When I first started my blog, I tagged it with the commentary that my decades in the data storage business had, if nothing else, proved to me that big enough is never big enough and fast enough is never fast enough! And neither progress nor the passage of time has altered that truth!

Whether the ever-improving abilities of storage are about supply driving demand or vice versa (the answer is probably a bit of both!), what I should by now have truly appreciated is that in data storage we should remove pretty much all the finite limits that our finite brains tend to put on both what's needed and what's possible. I was reminded of this when I saw DataCore's latest benchmark results; then I saw another R&D news story that helps put my whole point in perspective.

DataCore first:

The essence of what DataCore is able to do is not so much that it is delivering the world's fastest storage system¹, nor that it can get to levels of cost-performance that knock others into a cocked hat, but that it can do both with the same technology. Its software has always been efficient, but now it is boosting its ability with parallel processing (which, ironically but also reassuringly, was actually part of DataCore's original focus).

A couple of headline results make the impact very clear. DataCore now leads the SPC-1 rankings with over 5 million SPC-1 IOPS, which makes its numbers from just a few months ago seem so twentieth century! Only one other tested product gets over half that number (achieving some 3 million IOPS), but its response time is more than 3X that of DataCore and it costs more than four times as much!

The upshot is that among the best-performing SPC-1 products, the $ per SPC-1 IOPS of DataCore's competitors runs from roughly 3X to more than 50X DataCore's own! No one else comes close on either response time or system cost, let alone both together. The competitive systems that are closest on cost-per-IOPS, for instance, have response times eight times or more that of DataCore. It really is like comparing a modern tablet to an early mainframe in just about every regard.
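If you want to play with the arithmetic yourself, here's a quick Python sketch of the $/IOPS comparison. The figures below are illustrative stand-ins consistent with the ratios quoted above, not the audited SPC-1 submissions, so treat the output as a back-of-envelope exercise only.

```python
# Back-of-envelope price-performance comparison in the spirit of the
# SPC-1 discussion above. These numbers are illustrative placeholders,
# NOT the audited SPC-1 results for any vendor.

systems = {
    # name: (SPC-1 IOPS, total system price in USD, response time in ms)
    "DataCore":   (5_000_000,   500_000, 0.3),  # assumed figures
    "Competitor": (3_000_000, 2_000_000, 1.0),  # assumed figures
}

for name, (iops, price, latency_ms) in systems.items():
    dollars_per_iops = price / iops
    print(f"{name}: ${dollars_per_iops:.2f} per SPC-1 IOPS, "
          f"{latency_ms} ms response time")
```

Even with made-up inputs like these, the point of the metric is clear: dividing total tested price by delivered IOPS collapses speed and cost into one comparable number, which is exactly where the gap between vendors shows up.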

And now from the "what might be around the technological corner" news desk:

Scientists from Delft University of Technology have published details in the journal "Nature Nanotechnology" (I'm referencing what I saw in a BBC news article online on 7/18/16) that demonstrate a two-to-three order of magnitude improvement (compared to today's best solid-state storage) in rewritable storage density by storing information according to the positions of individual chlorine atoms.

To put the implications of this in context, this density would allow all books ever created by humans to be written on a single postage stamp... or the entire contents of the US Library of Congress to be stored in a 0.1mm cube! The current drawback? It's only stable at a temperature of 77 Kelvin (that's -196C). But, as Steve Erwin of the Naval Research Laboratory in Washington DC commented in the same journal: "a functioning high-density atomic-scale memory device... will, at the very least, stimulate our imaginations towards the next such milestone".
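For the sceptics, here's a rough Python sanity check of that 0.1mm-cube claim. The assumptions are mine, not the researchers': one bit per cubic nanometre of atomically dense storage, and around 20 TB for a digitised text collection on the scale of the Library of Congress.

```python
# Rough sanity check of the "Library of Congress in a 0.1 mm cube" claim.
# Assumptions (mine, not from the Nature Nanotechnology paper):
#   - one bit per cubic nanometre of volumetric storage density
#   - ~20 TB as an order-of-magnitude size for the digitised collection

CUBE_EDGE_NM = 0.1e-3 / 1e-9    # 0.1 mm expressed in nanometres (1e5 nm)
BITS_PER_NM3 = 1.0              # assumed volumetric density
LOC_TEXT_BYTES = 20e12          # assumed collection size (~20 TB)

cube_bits = CUBE_EDGE_NM ** 3 * BITS_PER_NM3
cube_bytes = cube_bits / 8
print(f"Cube capacity: {cube_bytes / 1e12:.0f} TB")      # ~125 TB
print(f"Fits the collection? {cube_bytes >= LOC_TEXT_BYTES}")
```

Under those assumptions the cube holds on the order of 125 TB, so the claim survives a back-of-envelope test with room to spare.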

My point? In our particular area of technology, the horizon of what's possible never gets closer. It will sound crazy today, but why are these Dutch researchers limiting themselves to one bit per atom? Surely we could do multi-bit or vertical recording or some type of HAMR per atom to drive the capacity up? The unlikelier it sounds, the more my heart wants to believe it will happen, whatever my head says to the logical contrary. It even makes me hesitant about pouring praise upon what DataCore has just achieved! After all, rattling competitive cages is one thing, but when you move the goalposts, who knows how the game will play out?

1. As measured by SPC benchmarks, which are both valuable and flawed (see the previously referenced blog post for more on this) but remain just about the only decent — and industry-accepted — comparative tool we have for storage.

