X-IO Technologies: The Bearable Rightness of Being Unique

A few weeks ago I was at the X-IO Technologies HQ in Colorado Springs for some strategic discussions and to shoot a video with Bill Miller, the CEO. In the midst of a great conversation, one thing became clear: Bill's engineering roots are never far from the surface! Not that he cannot talk plain business value, but it was the power and special nature of the technology that first attracted him to join X-IO. So, spontaneously, I reset my camera and asked Bill to simply wax lyrical about what makes the technology special. "Unique" is a word that is both abused and over-used in our business—probably in many businesses—but X-IO has some legitimate reasons to use it.

Take a look at this video to learn about what—and how—this erstwhile start-up (in truth, something of a "vendor-teenager") delivers...both in direct technology and the key operational implications. Dave "Gus" Gustavsson—the CTO and SVP of Engineering—adds some details in the latter part of this video. By the way, this video is about 7 minutes in length, and I should make it clear that I am entirely responsible for the lack of exciting angles and graphics—for once, I figured I'd let the protagonists and their explanations do the communicating! After watching, please return to this written blog for a few closing thoughts from me.

While I already pointed out that "unique" is a word in danger of losing its true meaning, applying a lot of "on the one hand...on the other hand" thinking is a great way to look at X-IO:

  • The extreme reliability of its foundational ISE component was probably not—on the one hand—a perfect marketing gambit for Seagate at the time (that's where the ISE DNA was created), but on the other hand, it looks to be ideal for today's converged and virtualized world.
  • On the one hand, X-IO offerings can be delivered "plain" for the cloud or perhaps in an SDS implementation...but on the other hand, they can be delivered with functional flavors and sprinkles for a more standard on-premises implementation.
  • I mentioned X-IO's "teenage" nature; it's no longer a startup, but it also isn't fully established...its history is therefore simultaneously a challenge to explain and also a source of proof points.
  • In the storage world, being a bit different can be simultaneously a good and a bad thing. On the one hand, it means X-IO can meet or beat most of the newer vendors in a straight performance race, while meeting or beating most of the older vendors on price/performance...but on the other hand, it means its prospective customers have to reset their entrenched beliefs.

The bottom line is that X-IO's challenge is not its capabilities (after all, as Bill and Gus point out in the video, being able to affordably provide consistent, predictable, and high performance even in varied and demanding mixed workload situations is pretty attractive); the challenge is to drive awareness and consideration...and that's one area where X-IO is certainly not unique!

Video Transcript

Mark: Recently, I was at the X-IO Technologies head office to shoot a video interview with Bill Miller, the CEO. As I asked him about the company, its product differentiation and market fit, what became apparent to me was Bill's ability to actually talk technology in considerable detail. Since I had my video equipment, I asked him to simply do a whiteboard talk covering some key X-IO product attributes. This is that talk. Bill Miller unplugged, you might say. He's joined at one point by David Gustavsson, the CTO and SVP of engineering, for some added commentary. The less-than-Hollywood production values are all down to me. I just wanted to capture it while I could.

Bill: The other newer players in the storage marketplace that I see today use one of two approaches to implementing storage solutions. They both, by the way, use off the shelf servers as the core hardware elements of their technology. And they have to live with the limitations of those off the shelf servers, just adding software on top of them. And of the two approaches, one is clustered servers. The other one is a scale out, shared nothing approach, which people also call hyperscale or web scale architecture.

The cluster approach, using off the shelf servers, uses them in an active-standby mode so that each of these servers can address its own volumes. And in a case where one of the servers or one of the controllers fails, the other one can take over for it. But it has to then connect to the other paths. In order to allow it to take over, the recently written information has to be on both servers. And the way this architecture gets there, cluster and off the shelf servers, is to do that write forwarding, or write replication, from the active server to the standby server over Ethernet. While this is a simple approach not requiring the companies doing it to build their own hardware (they can buy off the shelf pieces like off the shelf servers and Ethernet Nexus), they do suffer some penalties in terms of their ability to load balance between these actively and use all the capacity that's there. And then there's a real problem with write latency on write forwarding.

The shared nothing approach, or hyperscale or web scale approach, actually does the same thing in terms of using Ethernet to forward writes from one server to another. So in this approach, it uses individual CPUs with flash or direct attached disks, and multiple instances of those. And it uses load balancing at the software level to be able to allocate activity to any of these CPUs, which is nice for scale out. But if you have a very write-intensive or random write-intensive application, like OLTP kinds of applications, all of the writes have to be forwarded to another server before they can be acknowledged and before the application can move on. So certainly, for things like transactional database applications, the write latency becomes a real problem with these.
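Both architectures Bill describes share the same write path: a write cannot be acknowledged to the application until it has been replicated to a peer over Ethernet. A minimal sketch of why that hurts latency (all timing figures below are illustrative assumptions, not measurements from any vendor):

```python
# Sketch of the write-forwarding penalty described above.
# All latency figures are hypothetical assumptions for illustration.

ETHERNET_HOP_US = 50.0   # assumed one-way Ethernet forwarding latency (microseconds)
LOCAL_WRITE_US = 10.0    # assumed time to land a write in local memory (microseconds)

def ack_latency_with_forwarding():
    """Write must reach the standby/peer node before it can be acknowledged."""
    # local write + forward to peer + peer's local write + ack back over Ethernet
    return LOCAL_WRITE_US + ETHERNET_HOP_US + LOCAL_WRITE_US + ETHERNET_HOP_US

def ack_latency_local_only():
    """Hypothetical lower bound if no network forwarding were required."""
    return LOCAL_WRITE_US

if __name__ == "__main__":
    print(f"with Ethernet forwarding: {ack_latency_with_forwarding():.0f} us")
    print(f"local only:               {ack_latency_local_only():.0f} us")
```

For random-write-heavy workloads like OLTP, every write pays the forwarding round trip, which is the latency problem both speakers return to.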

Now, X-IO's architecture is really different in a way that doesn't really cost any more, but it provides some real advantages in terms of performance under demanding workloads, especially write-intensive and random-write kinds of activity, which you certainly see in multi-tenant data centers and Cloud kinds of applications, where the workloads become a little bit unpredictable, especially when you're using SQL databases or NoSQL databases, in some instances with analytics overlays. In some of these more demanding use cases, X-IO's architecture provides much better performance and scalability at a lower cost.

David: One of the key attributes of a strong performing storage architecture is a very, very fast data path. So what we did was build a tightly coupled architecture where the CPUs are connected via a PCI Express bus. This gives us a low latency, high bandwidth bus. But it also gives us a very, very powerful software architecture, because we can memory map regions between the two controllers. So the software architecture works in a way that we have a cache manager that takes the data from one controller and immediately mirrors it over the PCI bus to the DDR memory on the other controller. As soon as we have an ack of that mirror, we can then ack back to the application, which gives us tremendously low latency on writes. This also allows us to decouple our I/O stream from the front end to the back end, in how we do write flushes and how the caching works in our architecture.
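The write path David describes (mirror the cached write into the peer controller's memory over PCIe, then acknowledge the host) can be sketched roughly as follows. The class and method names here are my own invention for illustration, not X-IO's actual software:

```python
# Rough sketch of a mirror-then-ack write path, as described above.
# PeerLink and CacheManager are hypothetical names, not X-IO code.

class PeerLink:
    """Stands in for the PCIe memory-mapped region on the partner controller."""
    def __init__(self):
        self.mirror = {}

    def mirror_write(self, lba, data):
        self.mirror[lba] = data   # in the real system: a PCIe memory-map copy
        return True               # the "ack of that mirror"

class CacheManager:
    def __init__(self, peer):
        self.local_cache = {}
        self.peer = peer

    def write(self, lba, data):
        self.local_cache[lba] = data
        acked = self.peer.mirror_write(lba, data)   # mirror BEFORE host ack
        if not acked:
            raise IOError("mirror failed; cannot acknowledge write")
        # Host sees the ack once both controllers hold the data; flushing to
        # backend media happens later, decoupled from this front-end path.
        return "ACK"

peer = PeerLink()
cm = CacheManager(peer)
assert cm.write(0x10, b"hello") == "ACK"
assert peer.mirror[0x10] == b"hello"   # data already safe on the peer at ack time
```

The key contrast with the Ethernet-forwarding architectures is that the mirror hop is a local bus transfer rather than a network round trip, and the backend flush is taken off the acknowledgment path entirely.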

The shared memory model allows us to do an active-active architecture where each controller can serve I/O for the same volume. So you can have a user volume presented from an X-IO ISE storage system, and I/O can go down to one controller or the other controller for the same volume. And by having the ability to do that, you now have a naturally load-balancing architecture where CPU and memory bandwidth is utilized across both controllers to serve the same volume. And since a RAID architecture spreads the data across all the backend devices in a spiraling striping fashion, it also allows us to bring the performance characteristics of all forty backend devices to bear on that one volume, which allows us to provide striped, active-active controllers with single-second takeover should you have a controller failover scenario.

The ISE hybrid architecture allows you to have an all-flash, a hybrid, and an all-HDD architecture in the same box, which is very powerful for consolidating your data center. In the hybrid configuration, we use 30 hard drives and 10 SSDs. And then we fuse those media together with a technology we refer to as CADP. In a CADP architecture, frequently accessed data gets moved onto an SSD block and served out of the SSD. From there on, it is served out of that tier as persistent storage, meaning the data set that sits in flash behaves like an all-flash array, which allows applications many times to behave like they were running on an all-flash architecture even though it's a hybrid system.
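The general idea of promoting hot data to flash and serving it from that tier afterward can be sketched with a frequency-based policy. To be clear, the threshold and promotion logic below are generic assumptions for illustration; X-IO's actual CADP algorithm is not described here:

```python
from collections import Counter

# Generic hot-block promotion sketch (NOT X-IO's actual CADP policy):
# blocks that are accessed often get promoted to the SSD tier and are
# served from flash persistently from then on.

PROMOTE_THRESHOLD = 3   # assumed: promote after this many accesses

class HybridTier:
    def __init__(self):
        self.access_counts = Counter()
        self.on_ssd = set()

    def read(self, block):
        self.access_counts[block] += 1
        if self.access_counts[block] >= PROMOTE_THRESHOLD:
            self.on_ssd.add(block)          # promote the hot block to flash
        return "SSD" if block in self.on_ssd else "HDD"

tier = HybridTier()
tier.read(7)            # cold: served from HDD
tier.read(7)            # still cold
served_from = tier.read(7)   # third access crosses the threshold
assert served_from == "SSD"
assert tier.read(7) == "SSD"   # served persistently from flash thereafter
```

The point of the "persistent" wording in the transcript is that once a working set lands in flash, it stays there, so an application whose hot data fits in the SSD tier sees all-flash behavior from a hybrid box.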

What we do in an iglu architecture is separate the data services, allowing higher level virtualization like snapshots, thin provisioning, asynchronous replication and so on to be separated from device redundancy. Cache management, read-modify-write, RAID 5 write hole protection: all the fundamentals are better served in a lower modular unit. So by allowing an ISE to take care of the lower level fundamentals, you really separate the failure domains and failure scenarios, such that should you have a drive that needs a rebuild, it is now contained in its own resource domain and managed by the resources dedicated to manage it, such that when you have a RAID rebuild, you don't impact the data services capability of the system.

Bill: So we like to talk about X-IO Technologies being adaptive: it adapts to these very demanding workloads, our ISE is able to adapt to a software defined or virtualized data center, or, by adding our iglu SAN stack on top of ISE, to serve traditional SAN storage needs, and it's adaptive from all-disk to hybrid to all-flash. Our customers really rave about how simple it is to deploy and how little it takes in terms of an investment of people and time to manage. And balanced: our architecture is really able to balance cost and performance and reliability better than anything else out there that we know of on the market today.


Topics: Storage