ESG's Scott Sinclair discusses the Big Memory movement with Charles Fan of MemVerge.
Scott: In the digital era, if we're going to reach our full potential, we need to rethink the way we design the technology that serves as the foundation for modern business. Joining me today to discuss a possible new inflection point in IT architecture and design is Charles Fan, co-founder and CEO of MemVerge. I've been looking forward to this conversation, and really excited to talk with you, ever since I first heard the idea of big memory.
So, thank you so much for joining us. For those watching who may not be familiar, what is big memory, and what can you tell us about it?
Charles: Sure. Thank you for having me here, Scott. We are super excited about the big memory movement. Data is growing very fast, in particular real-time data, and it is growing in both capacity and velocity: big and fast at the same time. When you have data like that, storage is too slow and memory is too small.
So what we need is a new kind of memory, big memory, that has the capacity of storage but operates at memory speed.
Scott: We've been talking about data for decades, and a lot of that started with the idea of big data, with its volume, velocity, and variety. I think memory is a really fascinating way to think about solving one of the key challenges of IT infrastructure design. Tell us about MemVerge. How do you see yourself fitting into this big memory movement?
What challenges are you trying to solve?
Charles: What happened over the last year and a half is that Intel and Micron collaborated on a new technology, 3D XPoint, which gives you something close to memory speed at higher capacity, plus nonvolatility, so it can survive power cycles. That's what makes big memory possible. But it also has some weaknesses, or shortcomings.
Typical application workloads run somewhere between 10% and 40% slower on this type of memory than on DRAM. And second, to use it as a persistence medium, you have to rewrite your application against a new API. Those are the two obstacles to easy adoption of this technology.
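To make that second obstacle concrete, here is a minimal sketch, in plain POSIX C rather than any vendor's actual persistence API, of what programming directly to persistent memory as a persistence medium can look like: the application maps a file on an assumed DAX mount (the path below is hypothetical) and must explicitly flush its writes so they survive a power cycle. A DRAM-compatible layer is meant to spare applications exactly this kind of rewrite.

```c
/* A minimal sketch (not MemVerge or Intel code) of "programming to a
 * persistence API": instead of plain malloc(), the application maps a file
 * on a DAX-mounted persistent-memory filesystem and explicitly flushes
 * writes. The mount path below is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (64UL * 1024 * 1024)   /* 64 MiB persistent region */

int main(void)
{
    /* Hypothetical DAX mount point; a real deployment would configure this. */
    int fd = open("/mnt/pmem0/app_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, REGION_SIZE) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* The application now updates "memory", but it must remember to flush
     * before the data can be considered durable. */
    strcpy(region, "state that must survive a power cycle");
    if (msync(region, REGION_SIZE, MS_SYNC) != 0)
        perror("msync");

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}
```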
What MemVerge did is develop software we call Memory Machine. Memory Machine is a virtualization layer that virtualizes DRAM and PMEM and presents a software-defined memory service to applications, so that, number one, applications do not have to be rewritten, because we are DRAM-compatible.
You just use us like DRAM, except it's bigger and cheaper. Second, we optimize performance so that the combination of DRAM and PMEM operates at DRAM speed. And third, perhaps most interestingly, we developed data services on top of this software-defined memory. Our first data service, which we call ZeroIO Snapshot, allows all of the data associated with an application to be saved and snapshotted right inside memory, so it happens instantly, without incurring IO or moving anything to storage. And we can do this repeatedly, which enables a number of interesting features: time travel for applications, crash recovery after your big memory application crashes, and cloning.
You can create additional instances of the same application using our in-memory snapshot capability. That's what MemVerge Memory Machine does.
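As a rough illustration of the general idea behind in-memory snapshots and cloning, the sketch below leans on the operating system's copy-on-write behavior via fork(). This is not MemVerge's implementation, only the underlying principle: a consistent view of an application's memory is captured instantly, with no IO, and pages are copied lazily only when the live copy changes.

```c
/* Conceptual sketch of copy-on-write in-memory snapshots (NOT MemVerge's
 * implementation). fork() gives the child an instant, consistent view of
 * the parent's memory with no IO; pages are copied only on write. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t n = 1UL << 20;                 /* 1 MiB of "application state" */
    char *state = malloc(n);
    if (!state)
        return 1;
    memset(state, 'A', n);

    pid_t pid = fork();                   /* instant copy-on-write snapshot */
    if (pid == 0) {
        /* Child: sees the state exactly as it was at fork time, even though
         * the parent keeps mutating it. A snapshot or clone service could
         * work from this frozen view. */
        printf("snapshot view: %c\n", state[0]);   /* prints 'A' */
        _exit(0);
    }

    memset(state, 'B', n);                /* parent keeps changing its state */
    waitpid(pid, NULL, 0);
    printf("live view: %c\n", state[0]);           /* prints 'B' */

    free(state);
    return 0;
}
```

A production snapshot service would also have to handle open files, devices, and other shared state, but the zero-copy capture of memory pages is the reason such snapshots can be effectively instantaneous.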
Scott: I've always been enamored with the idea of persistent memory. One of the key challenges is that people just hate rewriting apps, even if the infrastructure can radically change what those apps can do. So the ability to avoid that rewrite makes tremendous sense to me. And when I think about all the data services you're layering on top of memory, that opens up a whole new window of things to think about.
So if we pull out our crystal balls and look into the future, what do IT and business look like in a big memory world? Help define that for us.
Charles: We've been living in a world where you have memory and you have storage. Whether you're dealing with an iPhone, a PC, or a server, that's always the case, and data is constantly shuttled between the two. Often this data movement, these IOs, is the bottleneck for an application. We see a future where these IOs are no longer necessary.
When everything is in memory, you don't need to move it anywhere else. When memory is big enough, you can have hundreds of terabytes at your fingertips, and memory can be made highly available. It's going to be a brave new world. Application performance is going to jump through the roof, and many things that weren't possible before will become possible.
I think the $30 billion performance storage market as we know it may not exist anymore in 10 years, and a new big memory software market is going to emerge in its place.
Scott: Well, I'm sure there are many people, probably including some very close to you, who are looking forward to that new world. And with that, I want to thank you, Charles. This has been great and incredibly insightful. And to our viewers, thank you for watching.