Hey @aaron,
Thank you for your insight. I can give a bit more detail here.
PMem (NVDIMM) in App Direct mode (storage) can either be wrapped in a regular block device with a filesystem on top of it, or exposed as an NVDIMM and then formatted/mounted as a "PMem aware" FS.
1) The benefit of exposing it as a regular block device (not a PMem aware FS) is compatibility: the OS doesn't need to know it's working with PMem. The downside is a performance hit, since you still go through the OS storage stack and the page cache. While faster than many SSDs, your latency is an order of magnitude higher than it could be with a "PMem aware" FS or with applications using it directly as an NVDIMM device.
2) If you expose it as an NVDIMM, your PMem configuration tools, such as ndctl, will recognize it as such and extra management options become available. Then, on the VM itself, the device is recognized as an NVDIMM and allows for the "dax" mount option, which makes the FS "PMem aware" and uses DAX instead of the traditional storage path / page cache. That gets you PMem-level I/O latency, in many cases 10x lower than with a regular block device, which dramatically improves IOPS for random 4k read/write access. It also allows you to use libpmem directly in your apps (see the sketch after this list).
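To make the difference concrete, here is a minimal sketch of both options on a Linux box with ndctl installed (region names, filesystems and mount points are just examples, adjust to your setup):
# option 1): sector-mode namespace, used as a regular block device, goes through the page cache
ndctl create-namespace --mode=sector --region=region0   # creates /dev/pmem0s
mkfs.ext4 /dev/pmem0s && mount /dev/pmem0s /mnt/pmem-legacy
# option 2): fsdax namespace plus a DAX mount, bypassing the page cache
ndctl create-namespace --mode=fsdax --region=region0    # creates /dev/pmem0
mkfs.xfs -m reflink=0 /dev/pmem0                        # on older kernels reflink must be off for DAX on XFS
mount -o dax /dev/pmem0 /mnt/pmem                       # "PMem aware" mount; apps can mmap files or use libpmem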
Now, #1 is implemented in VMware as the "Persistent Memory" storage type (vPMemDisk), so you can use it as the main root disk for your VM. It only performs slightly better than fast NVMe SSDs, depending on configuration. I suspect what you suggested in your post may work for this scenario.
#2 is implemented in VMware as vPMem, which is what that Intel article describes: passing through a virtual NVDIMM.
I actually made it work yesterday with Proxmox; it's not fancy, but it works. I used the "args" option in the qemu-server config and it appears to work just fine, but "memory hotplug" needs to be disabled:
args: -machine nvdimm=on -m slots=2,maxmem=1T -object memory-backend-file,id=mem1,share,mem-path=/pmemfs0/pmem0,size=100G,align=2M -device nvdimm,memdev=mem1,id=nv1,label-size=2M
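For completeness, this is roughly how I apply and check it (VM ID 100 is just a placeholder; /pmemfs0 is a DAX-mounted PMem filesystem on the host, as above):
# on the Proxmox host, either edit /etc/pve/qemu-server/<vmid>.conf directly or use qm set:
qm set 100 --args "-machine nvdimm=on -m slots=2,maxmem=1T -object memory-backend-file,id=mem1,share,mem-path=/pmemfs0/pmem0,size=100G,align=2M -device nvdimm,memdev=mem1,id=nv1,label-size=2M"
# inside the guest, the virtual NVDIMM shows up like a real one:
ndctl list -R -N                       # should show the region (and namespace, once created)
ndctl create-namespace --mode=fsdax    # -> /dev/pmem0, then mkfs and mount -o dax as in the sketch above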
So, if you are comfortable with manual edits and with having no sane way of migrating NVDIMM-backed PMem VMs, you can certainly make it work with Proxmox. VMware allows migration of vPMemDisk via Storage vMotion, and vPMem can be migrated as usual between hosts with PMem installed.
Hope it helps