It's the New Year, time to take stock of things and talk about what we all expect for our areas of coverage in 2016 and beyond. You'll see plenty of prognostications about the big thematic elements and trends from all of us at ESG about now; indeed I just shot a video with my colleague Scott Sinclair on our views for the technologies and market dynamics to watch in the storage space — you should see that published in the next few days.
There's no doubt that massive changes are taking place across all of storage and data management. However, this huge focus on the "paradigm shift" layers also made me stop and ponder that not every user (indeed, very few) gets the opportunity to start with a completely green-field data center, a bleeding-edge philosophy, and an unlimited budget.
And not every user wants to leap 100% into the deep end of any given new approach, be that cloud, convergence, SDS, or flash (to pick a few current and obvious favorites!). There is always room for targeted, incremental improvement in specific areas, especially if both cost and risk can be kept low.
PrimaryIO is a good example of a start-up offering just that; it has a laser focus on driving [more] IO faster for virtualized server environments, without asking anyone to buy into new hardware or a wholesale new way of doing things. Recently I sat down with John Groff, the co-founder and COO of PrimaryIO, and asked him to explain in a nutshell what his company is doing and why. Quality over quantity (whether in terms of PrimaryIO's focus or the IO needs of most users) can sometimes be a very good thing! Of course, ironically, if users can get their "primary IO" (pun intended!) handled fast and optimally, which is the "quality" aspect, it will actually free up bandwidth and storage capacity (that is, quantity) elsewhere in the overall system.
Woman: The following is an ESG video blog.
Mark: PrimaryIO has a name that is pretty self-descriptive, as it provides the ability to drive IO harder, well, optimally might be a better term, in order to accelerate application performance.
I spoke briefly to one of the newer software entrants in the storage arena about the essence of what the company's abilities are, and why those abilities matter.
John: One of the themes that we continue to hear from our customers is that they would like to take advantage of virtualization for all the workloads that they're running in the datacenter, but they don't do it because they can't stand the performance impact that virtualization has on those workloads.
What we've done is develop technology that allows customers to take advantage of virtualization by improving the performance of these virtualized workloads, workloads like SQL Server, Oracle Database, MySQL, MongoDB. So, the key to solving this problem is using our software, our solution, to separate the primary IO stream from the secondary IO stream, and store the primary elements of these applications on high-performance storage located on the host.
And by doing that, we're able to highly utilize the resources that are available, and thus increase the performance of the overall system. The impact that the customers see from using our solution is that they're saving money, because they're virtualizing, they're consolidating these workloads, and being able to save money while doing so, taking advantage of all the virtualization opportunities that exist. And eventually, being able to make more money because they've lowered their costs to deliver those services.
Storage performance is invariably the anchor on system performance. So, by improving the performance of the primary IO, you thereby improve application performance. So, the payoff from all of this is delivered in two forms. One is to deliver higher system throughput, which is about making money. And the other is to better utilize your resources, which is about saving money.
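To make the idea above concrete, here is a minimal sketch of how separating "primary" (hot) IO from "secondary" (cold) IO and promoting hot blocks to fast host-local storage might work. This is an illustrative simplification based on access frequency, not PrimaryIO's actual implementation; all class and parameter names (`HotBlockCache`, `hot_threshold`, etc.) are hypothetical.

```python
# Illustrative sketch: classify blocks by access frequency and serve
# frequently-read ("primary") blocks from a fast host-local tier, while
# cold ("secondary") blocks stay on shared backend storage.
# Thresholds and names are assumptions for illustration only.
from collections import Counter


class HotBlockCache:
    def __init__(self, hot_threshold=3, capacity=2):
        self.access_counts = Counter()  # per-block access frequency
        self.fast_tier = {}             # stand-in for host-local flash
        self.hot_threshold = hot_threshold
        self.capacity = capacity        # fast-tier size limit, in blocks

    def read(self, block_id, backend):
        """Return (data, tier), promoting hot blocks to the fast tier."""
        self.access_counts[block_id] += 1
        if block_id in self.fast_tier:
            return self.fast_tier[block_id], "fast"
        data = backend[block_id]  # slow path: shared backend storage
        # Promote once the block crosses the hot threshold,
        # evicting the least-accessed cached block if the tier is full.
        if self.access_counts[block_id] >= self.hot_threshold:
            if len(self.fast_tier) >= self.capacity:
                coldest = min(self.fast_tier,
                              key=self.access_counts.__getitem__)
                del self.fast_tier[coldest]
            self.fast_tier[block_id] = data
        return data, "slow"
```

In this toy model, the first reads of a block come from the slow backend; once a block proves itself hot, subsequent reads hit the fast tier, which is the "better utilize your resources" payoff described above.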