To Be Persistent or Not to Be Persistent ... in Memory

Persistent memory fits into a system’s data storage hierarchy above permanent storage, such as disk and flash drives, but below DRAM. Image adapted from “How Persistent Memory Will Change Computing” by Jeff Layton, Admin Network & Security.

Non-volatile memory (NVM) is data storage that retains its contents when the system loses power. The most common example is flash memory, which has been available for some time, but it sits outside the memory subsystem on a storage bus and delivers far lower performance than DRAM. Persistent memory, by contrast, sits in the system’s DIMM slots yet does not lose its data when power is cut.
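To make the distinction concrete, here is a minimal sketch in C of how an application might treat persistent memory as ordinary memory through a memory-mapped file on a DAX-capable file system. The mount point /mnt/pmem and file name are assumptions for illustration; production code would typically use a library such as PMDK’s libpmem rather than raw msync().

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMEM_LEN 4096  /* map one page for the example */

int main(void)
{
    /* /mnt/pmem is an assumed DAX mount backed by persistent memory */
    int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
    if (fd < 0 || ftruncate(fd, PMEM_LEN) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* With DAX, the mapping goes straight to the persistent media:
       ordinary loads and stores bypass the page cache entirely. */
    char *pmem = mmap(NULL, PMEM_LEN, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(pmem, "survives a power loss");

    /* Flush CPU caches to the persistence domain; PMDK's
       pmem_persist() does this more efficiently than msync(). */
    msync(pmem, PMEM_LEN, MS_SYNC);

    munmap(pmem, PMEM_LEN);
    close(fd);
    return 0;
}
```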

Persistent memory fits into a system’s data storage hierarchy above permanent storage, such as disk and flash drives, but below DRAM, as Jeff Layton puts it in “How Persistent Memory Will Change Computing” (Admin Network & Security). Because it is non-volatile, persistent memory bridges the gap between storage, which sits on a bus or outside the system, and DRAM, which sits inside it.

NVMe

SAS (based on the SCSI command set) and SATA (based on the ATA command set) are legacy protocols developed for mechanical media, and they lack the characteristics needed to exploit the performance of flash. So the industry created a new standard, non-volatile memory express (NVMe), built around the performance advantages of flash media.

NVMe is a standard based on peripheral component interconnect express (PCIe) and is built for the physical slot architecture, according to the Computer Weekly article “Storage briefing: NVMe vs SATA and SAS” by Antony Adshead. As they launched PCIe server flash products, suppliers each developed proprietary protocols to manage traffic; NVMe is a largely successful effort to replace those disparate proprietary protocols with a true standard. In short, NVMe provides much greater bandwidth than SAS and SATA, along with vastly improved queuing, so NVMe-equipped storage should not suffer the performance degradation that SAS and SATA can experience when overloaded with I/O requests.
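The queuing difference is dramatic: NVMe allows up to 64K queues with up to 64K commands each, where SATA’s AHCI offers a single queue of 32 commands. To show what deep queuing looks like from the host side, here is a minimal sketch assuming Linux with libaio and an NVMe namespace at /dev/nvme0n1 (both assumptions, not from the article); it keeps 32 reads in flight with a single submission call rather than issuing them one at a time.

```c
/* Build: gcc -o qd32 qd32.c -laio */
#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QUEUE_DEPTH 32        /* reads kept in flight at once */
#define BLOCK_SIZE  4096

int main(void)
{
    /* /dev/nvme0n1 is an assumed NVMe namespace; O_DIRECT bypasses
       the page cache so the requests actually reach the device. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(QUEUE_DEPTH, &ctx) != 0) {
        fprintf(stderr, "io_setup failed\n");
        return 1;
    }

    struct iocb cbs[QUEUE_DEPTH], *cbp[QUEUE_DEPTH];
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        void *buf;
        /* O_DIRECT requires block-aligned buffers */
        if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) return 1;
        io_prep_pread(&cbs[i], fd, buf, BLOCK_SIZE,
                      (long long)i * BLOCK_SIZE);
        cbp[i] = &cbs[i];
    }

    /* One call submits all 32 requests; the kernel can spread them
       across the device's hardware submission queues. */
    if (io_submit(ctx, QUEUE_DEPTH, cbp) != QUEUE_DEPTH) {
        fprintf(stderr, "io_submit failed\n");
        return 1;
    }

    struct io_event events[QUEUE_DEPTH];
    int done = io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);
    printf("%d reads completed\n", done);

    io_destroy(ctx);
    close(fd);
    return 0;
}
```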

New Memory, New Code?

As HPC memory evolves, it promises to provide great performance gains as new technologies expand into the market—but the price to pay for those improvements is an increase in complexity for programmers who are developing supercomputing code, according to The Next Platform article “New Memory Challenges Legacy Approaches to HPC Code.”

In the article, Ron Brightwell, R&D manager at Sandia National Laboratories, explains how this added memory complexity compounds existing programming complexity and architectural heterogeneity, making HPC a challenging use case for developers.

Brightwell co-authored “Principles of Memory-Centric Programming for High Performance Computing” with Yonghong Yan from the University of South Carolina and Xian-He Sun from the Illinois Institute of Technology. It is based on an SC17 conference workshop on memory-centric programming for HPC.

From the paper: “Memory-centric programming refers to the notion and techniques of exposing the hardware memory system and its hierarchy, which include NUMA (non-uniform memory access) regions, shared and private caches, scratch pad, 3-D stack memory, and non-volatile memory, to the programmer for extreme performance programming via portable abstraction and APIs for explicit memory allocation, data movement and consistency enforcement between memories.”
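The “explicit memory allocation” the authors describe can be made concrete with a minimal sketch, assuming Linux and libnuma (an API chosen here for illustration, not one prescribed by the paper). It places a buffer in a specific NUMA region rather than letting the operating system decide.

```c
/* Build: gcc -o numa_alloc numa_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t len = 1 << 20;  /* 1 MiB buffer */

    /* Explicitly place the allocation on NUMA node 0 instead of
       relying on the default first-touch policy. */
    double *buf = numa_alloc_onnode(len, 0);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    /* Touch the memory so its pages become resident in node 0's
       local DRAM before compute threads use them. */
    for (size_t i = 0; i < len / sizeof(double); i++)
        buf[i] = 0.0;

    numa_free(buf, len);
    return 0;
}
```

The same idea extends to the other tiers the paper lists: the programmer, not the runtime, decides which data lands in which memory.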

Brightwell says programmers need to be given a memory model so they can understand how to map their applications onto these different memory hierarchies.
