Sometimes it helps to look at new technologies that will be adopted far in the future and ponder their higher-level implications. NVM (non-volatile memory) is one of them, and was the topic of the SNIA NVM Summit in San Jose, California. Before you zone out, though, note that if this technology gets widely adopted, it will be as disruptive as SSDs have been for storage, and may change data centers in ways that server consolidation (enabled by virtualization) did in the last decade.
NVM systems are still at the demo or early-product stage, as shown in the photo to the right of an AgigA demo station, but there's a tremendous amount of investment and interest. NVM stands for non-volatile memory, but you can think of it as memory (fast, like RAM) that can replace storage (big, like hard disks). That's why some people refer to one use case of NVM as storage-class memory.
Implications for applications
The fastest conventional storage systems handle hundreds of thousands of IOPS. NVM-based systems can be an order of magnitude faster, at over a million IOPS. This means that current apps will run much faster, and a whole new class of apps will be enabled. It's important, however, not to be complacent and merely enjoy the faster speeds at lower costs. Edward Sharp, a speaker at the summit, said that in the late 1990s a 1-terabyte NetApp system cost about one million dollars. If that storage capacity had still been considered enough in 2016, we would be happily paying very little for equivalent storage infrastructure, but we would never have innovated to exploit the new apps enabled by ever faster and larger storage.
So it's important for software applications to evolve and drive innovation enabled by new underlying technologies like NVM. We're not just talking about operating system software such as Linux or vSphere; we need to look at higher-level applications (such as big data) to drive it. Unlike the storage changes previously enabled by SSD, which simply dropped in where disk drives (with some memory cache) used to be, NVM will require rearchitecting software to fully exploit it, so the changes may take longer to become widespread, but the resulting changes can be quite significant. It also means that hardware designers must be more aware of the software (such as SQL databases) that places demands on their hardware components. There's a great need for these two worlds to collaborate.
Implications for infrastructure
Having one piece of the infrastructure suddenly become an order of magnitude faster changes the balance among the different parts of the technology stack. Servers spend a lot of time waiting for data to be fetched from disk, or shuffling data around, trying to make up for the fact that hard disk storage is slow. Once this limitation is removed, fewer servers may be needed for equivalent tasks; one attendee speculated that 10 servers could do the job of 50 servers today. Server virtualization created a similar reduction in server counts in data centers, so NVM could likewise change the footprint of future data centers.
It also affects networking: technologies such as RDMA (remote direct memory access), pioneered by companies such as Mellanox, promise to use networks to access memory in remote servers. We can expect programs to access memory not only in their own server, but in other servers in the rack. So the boundaries of what a computer is will change. It will also change the notion of what hyperconverged infrastructure is, and will make networking participate fully in how the system interacts, rather than simply being an interconnect between systems. That's because future servers will have their compute reach into remote storage (NVM or persistent memory), so high-performance networks will be a critical component in making workloads perform properly.
The joke during one session was that most people don't care about this topic; it's too esoteric. And the people at this conference are an even smaller fraction of the small population who do care about NVM. This illustrates the fact that technology for its own sake is not enough. Having said that, the attendees all knew that this summit has grown significantly over the years (the ballroom shown in the photo below was packed full), and large companies (HP Enterprise, HGST, Intel, Microsoft, Red Hat, Samsung, Toshiba, VMware, etc.) are investing resources in this. It's nice to see the "S" in Silicon Valley (the real area, not the TV comedy show!) continue to make disruptive changes, and to see that world-changing innovations are not just from ride-sharing companies or self-driving cars. Now that we've caught our breath from the changes that SSD brought, let's plan to check back in the year 2020 to see how widely NVM technology is being used (that's the year one speaker predicted it will move from niche to mainstream). We may see changes bigger than what SSD was able to achieve.