Application modernization efforts are pushing traditional infrastructure to its limits in an era where businesses across nearly every industry depend on data and applications not only to continue operations, but also to create new opportunities. High performance improves customer experience, accelerates business operations, and can deliver superior business results. Technology is no longer simply a passenger for business operations; it is now the engine, and often the driver as well.
To support this new role, architectures must evolve. The traditional process of moving data to and from the processor has stayed relatively consistent for decades: data is moved from a relatively slow but persistent tier of storage into a much faster, non-persistent tier of memory; the processor goes to work on the data; and the output is written back to memory and then moved back to the slow, persistent storage tier. For years, we as an industry have looked at that sequence and asked, “Why can’t we just keep data in the fast tier? Why move it at all?” The realities of cost constraints, application limitations, and the lack of persistence have held us back. Now the rise of persistent memory solutions and the Big Memory movement is poised to change all that.
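The contrast between the two models can be sketched in a few lines of Python. This is only an illustration, not MemVerge’s technology: the first function performs the traditional load/compute/store round trip through volatile memory, while the second uses `mmap` as a stand-in for a persistent tier that the processor can work on in place, with no explicit copy back to storage.

```python
import mmap
import os
import tempfile

def traditional_cycle(path: str) -> bytes:
    """Classic pattern: copy from slow persistent storage into fast
    volatile memory, compute, then copy the result back to storage."""
    with open(path, "rb") as f:
        data = f.read()              # storage -> memory (slow copy in)
    result = data.upper()            # processor works on the volatile copy
    with open(path, "wb") as f:
        f.write(result)              # memory -> storage (slow copy out)
    return result

def in_place_cycle(path: str) -> bytes:
    """Persistent-memory-style pattern: map the persistent tier into the
    address space and modify it in place -- no explicit round trip."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mm:
            mm[:] = mm[:].upper()    # compute directly on the mapped bytes
            mm.flush()               # changes are durable without a copy-back
            return bytes(mm)

# Demo with a temporary file standing in for the persistent tier.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"big memory")
print(in_place_cycle(path))          # b'BIG MEMORY'
os.remove(path)
```

A memory-mapped file is only an analogy, of course; true persistent memory removes the storage device from the path entirely. But the shape of the code, operating on durable data in place rather than shuttling copies, is the shift the Big Memory movement points toward.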
In my interview with Charles Fan, CEO of MemVerge, we discuss Big Memory, MemVerge’s vision to make persistent memory accessible, and what the future of IT might look like in a world where persistent memory is widely available. I hope you enjoy our conversation.