In almost every IT infrastructure, speed of access to data is one of the most important factors. The development of the SSD helped satisfy the appetite for faster data access, but even that wasn't enough for the most demanding uses, such as financial transaction applications and big data analytics. Enter the nonvolatile memory express (NVMe) protocol.
Unlike SAS- or SATA-based SSDs, an NVMe SSD connects directly over the PCI Express (PCIe) bus, whether as an add-in card or through connectors such as M.2 or U.2. Running over this more direct connection, the NVMe protocol delivers lower latency, higher IOPS and even reduced power consumption.
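To make that connection concrete, here is a minimal Python sketch that lists the NVMe controllers a Linux kernel has enumerated by reading sysfs. It assumes the attribute layout of a reasonably recent Linux kernel; a locally attached drive typically reports a transport of pcie along with its bus address.

```python
#!/usr/bin/env python3
"""Rough sketch: list NVMe controllers via Linux sysfs.

Assumes a Linux system where the kernel NVMe driver exposes controllers
under /sys/class/nvme/; attribute names may vary by kernel version.
"""
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")

def read_attr(path: Path) -> str:
    """Return a sysfs attribute's contents, or '?' if it is missing."""
    try:
        return path.read_text().strip()
    except OSError:
        return "?"

def list_nvme_controllers() -> None:
    if not SYSFS_NVME.is_dir():
        print("No NVMe controllers found (or not a Linux system).")
        return
    for ctrl in sorted(SYSFS_NVME.iterdir()):
        model = read_attr(ctrl / "model")
        transport = read_attr(ctrl / "transport")  # 'pcie' for local drives; 'tcp', 'rdma' or 'fc' for fabrics
        address = read_attr(ctrl / "address")      # PCIe bus address, or fabric address for remote controllers
        print(f"{ctrl.name}: model={model} transport={transport} address={address}")

if __name__ == "__main__":
    list_nvme_controllers()
```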
An NVMe SSD is a great boon if you are running a high-performance application on a single PC or playing high-end video games. But enterprises run such applications over a distributed infrastructure that usually includes a NAS or SAN implementation. The next logical step for NVMe was to extend the protocol beyond the PC and across network fabrics such as Fibre Channel and even Ethernet.
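As a rough illustration of what that extension looks like on the host side, the sketch below drives the Linux kernel's /dev/nvme-fabrics interface to attach an NVMe/TCP target. The target address and subsystem NQN are placeholders, and in practice administrators use the nvme-cli tool (nvme connect), which performs the same write under the hood, rather than raw writes like this.

```python
#!/usr/bin/env python3
"""Rough sketch: connect to an NVMe over Fabrics target over TCP by writing
an option string to the Linux kernel's /dev/nvme-fabrics device.

The address and NQN below are placeholders; requires root privileges and the
nvme-tcp kernel module.
"""
import os

# Placeholder connection parameters -- substitute your target's values.
OPTIONS = ",".join([
    "transport=tcp",                           # could also be rdma or fc
    "traddr=192.0.2.10",                       # target IP (example address)
    "trsvcid=4420",                            # conventional NVMe/TCP port
    "nqn=nqn.2014-08.org.example:subsystem1",  # target subsystem NQN (made up)
])

def connect() -> None:
    # Writing the option string asks the kernel to create a new controller;
    # reading back the reply returns something like "instance=1,cntlid=1".
    fd = os.open("/dev/nvme-fabrics", os.O_RDWR)
    try:
        os.write(fd, OPTIONS.encode())
        reply = os.read(fd, 4096).decode().strip()
        print("Kernel reply:", reply)
    finally:
        os.close(fd)

if __name__ == "__main__":
    connect()
```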
NVMe over Fabrics uses remote direct memory access (RDMA), which reduces data-handling overhead compared with more traditional distributed storage connections. While a remote connection can never be quite as fast as a directly attached device, the goal of NVM Express Inc. in defining NVMe over Fabrics was to add no more than 10 microseconds of latency to the storage system compared with a directly connected NVMe SSD.
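One way to put that latency budget in perspective is to time small direct reads against a local namespace and a fabric-attached one. The Python sketch below does this with placeholder device paths; because it issues one request at a time and carries interpreter overhead, treat it as a rough probe rather than a substitute for a benchmarking tool such as fio.

```python
#!/usr/bin/env python3
"""Rough sketch: average 4 KiB direct-read latency of an NVMe namespace,
e.g. to compare a local drive with a fabrics-attached one.

Device paths are placeholders; running this requires read access to the
block devices (typically root).
"""
import mmap
import os
import statistics
import time

BLOCK = 4096        # read size, also the alignment unit O_DIRECT requires
NUM_READS = 1000

def avg_read_latency_us(path: str) -> float:
    """Average latency, in microseconds, of NUM_READS single 4 KiB reads."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
    buf = mmap.mmap(-1, BLOCK)                     # page-aligned buffer for O_DIRECT
    try:
        samples = []
        for i in range(NUM_READS):
            offset = (i * BLOCK) % (1024 * 1024 * 1024)  # walk the first 1 GiB
            start = time.perf_counter()
            os.preadv(fd, [buf], offset)
            samples.append((time.perf_counter() - start) * 1e6)
        return statistics.mean(samples)
    finally:
        buf.close()
        os.close(fd)

if __name__ == "__main__":
    # Placeholder paths: a local namespace vs. a fabric-attached one.
    for dev in ("/dev/nvme0n1", "/dev/nvme1n1"):
        print(f"{dev}: {avg_read_latency_us(dev):.1f} us per 4 KiB read")
```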
The extension of NVMe across a distributed network isn't the final word for the protocol. NVMe, as the name implies, works with any form of nonvolatile memory, whether it is based on NAND flash, as in today's NVMe SSDs, or some other form of persistent memory.
As NAND flash gives way to persistent memory based on phase-change technology or spin-torque magnetoresistive RAM, the NVMe protocol is poised to help connect these new memories to CPUs in future systems.