[SOLVED] How do you set up a VM with NVMe interface?

piero.proietti

To do some testing of a Linux installer of mine, I would like to configure a VM with an NVMe disk and check that the installer works on it.

I think I did this before - about a year ago - probably by editing the VM configuration files, but I don't remember how and wasn't able to find any information.

Can anyone provide me with an example?
 
There is currently no way to do this directly in the PVE GUI, but as you guessed, it can be done via the args parameter. Something along these lines should work:

Code:
-drive file=/path/to/nvme1.img,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1

More information can be found on the QEMU Wiki.
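If you prefer not to edit /etc/pve/qemu-server/<vmid>.conf by hand, the same string can also be set from the host shell with qm (a minimal sketch; the VMID 100 and the image path are placeholders, not values from this thread):

Code:
qm set 100 --args "-drive file=/path/to/nvme1.img,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1"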
 
I tried this, commenting out the line:
Code:
#scsi0: local:103/vm-103-disk-0.qcow2,iothread=1,size=32G
and adding the line:
Code:
-drive file=/var/lib/vz/images/103/vm-103-disk-0.qcow2,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1

This is the configuration file on /etc/pve/qemu-server/103.conf

Code:
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/egg-of-debian-bullseye-colibri_amd64_2023-06-10_1157.iso,media=cdrom,size=1012M
memory: 4096
meta: creation-qemu=7.2.0,ctime=1686595998
name: NVMe
net0: virtio=56:E7:8A:FA:B5D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
#scsi0: local:103/vm-103-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=4cbb8d83-65cc-47da-a9ba-328120731ed3
sockets: 2
vga: qxl
vmgenid: 0d190ab4-059f-458d-9bcc-59ac4f4667c2
-drive file=/var/lib/vz/images/103/vm-103-disk-0.qcow2,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1

Not working... So I changed that last line to use the args: prefix instead:

Code:
args: -drive file=/var/lib/vz/images/103/vm-103-disk-0.qcow2,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1

OK, now it is working, great!!! Thanks
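For anyone following along: a quick way to check that the args line really ends up on the generated QEMU command line is qm showcmd (a sketch, using VMID 103 from the config above):

Code:
qm showcmd 103 | tr ' ' '\n' | grep -i nvme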
 
Finally, I added bootindex=1 so the VM could boot from the NVMe drive, and started it.

This is the final working configuration:

Code:
args: -drive file=/var/lib/vz/images/103/vm-103-disk-0.qcow2,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1,bootindex=1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
efidisk0: local-lvm:vm-103-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: zfs-dir:iso/ubuntu-22.04.2-desktop-amd64.iso,media=cdrom,size=4812096K
memory: 4096
meta: creation-qemu=7.2.0,ctime=1686637883
name: NVMe
net0: virtio=06:86:85:C7:6E:73,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=39f20000-1099-45b7-ab4f-7a20425ef6d2
sockets: 2
vga: qxl
vmgenid: 516de728-6346-4a37-859a-1ff4a2287af1
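Once the guest is running, it is easy to verify from inside a Linux guest that the disk really is presented as NVMe (a sketch, not from the original post; exact output depends on the distribution):

Code:
# the emulated controller should appear as a PCI NVMe device
lspci | grep -i 'non-volatile'
# TRAN should report "nvme" for the emulated disk
lsblk -d -o NAME,TRAN,SIZE,MODEL
ls -l /dev/nvme*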
 
Hello,

I'd like to move a VM from VMware Workstation to Proxmox; its disk is configured as an NVMe drive:
[attachment: nvme.png]
I moved everything over and wanted to attach the NVMe disk with the help of the args flag:
Code:
args: -drive file=/dev/zvol/local_10011/vm-158-disk-0,format=raw,if=none,id=NVME1 -device nvme,drive=NVME1,serial=nvme-1,bootindex=1
agent: 1
boot: order=ide2
cores: 8
cpu: x86-64-v3
ide2: nfs-999-images:iso/virtio-win-0.1.248.iso,media=cdrom,size=715188K
machine: pc-i440fx-9.0
memory: 8192
meta: creation-qemu=9.0.2,ctime=1727934816
name: vm-rs-i3
net0: virtio=BC:24:11:BB:4B:33,bridge=vmbr0,tag=8
numa: 0
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=a589e80f-f9dc-41f7-83a4-946468d96c7c
sockets: 1
unused0: local-10011:vm-158-disk-0
vmgenid: f4d08cfa-b2c6-4454-a199-ee1223b525d1

The VM starts without any errors ("TASK OK"). Unfortunately it doesn't find a boot device and restarts over and over again. Did I miss a setting?
 
Hi,

Where can I find the syntax for those -drive and -device commands?

I found the definition of args in this file:

https://pve.proxmox.com/pve-docs/qm.conf.5.html

Code:
 args: <string>

    Arbitrary arguments passed to kvm, for example:
    args: -no-reboot -smbios type=0,vendor=FOO

EDIT:

OK, I think I found it:

https://www.qemu.org/docs/master/system/invocation.html

Here are the passages relevant to this discussion:


Code:
-device virtio-9p-type,fsdev=id,mount_tag=mount_tag

    Options for virtio-9p-… driver are:

    type

        Specifies the variant to be used. Supported values are “pci”, “ccw” or “device”, depending on the machine type.
    fsdev=id

        Specifies the id value specified along with -fsdev option.
    mount_tag=mount_tag

        Specifies the tag name to be used by the guest to mount this export point.

Code:
-drive option[,option[,option[,...]]]

    Define a new drive. This includes creating a block driver node (the backend) as well as a guest device, and is mostly a shortcut for defining the corresponding -blockdev and -device options.

    -drive accepts all options that are accepted by -blockdev. In addition, it knows the following options:

    file=file

        This option defines which disk image (see the Disk Images chapter in the System Emulation Users Guide) to use with this drive. If the filename contains comma, you must double it (for instance, “file=my,,file” to use file “my,file”).

        Special files such as iSCSI devices can be specified using protocol specific URLs. See the section for “Device URL Syntax” for more information.
    if=interface

        This option defines on which type of interface the drive is connected. Available types are: ide, scsi, sd, mtd, floppy, pflash, virtio, none.
    bus=bus,unit=unit

        These options define where the drive is connected by specifying the bus number and the unit id.
    index=index

        This option defines where the drive is connected by using an index in the list of available connectors of a given interface type.
    media=media

        This option defines the type of the media: disk or cdrom.
    snapshot=snapshot

        snapshot is “on” or “off” and controls snapshot mode for the given drive (see -snapshot).
    cache=cache

        cache is “none”, “writeback”, “unsafe”, “directsync” or “writethrough” and controls how the host cache is used to access block data. This is a shortcut that sets the cache.direct and cache.no-flush options (as in -blockdev), and additionally cache.writeback, which provides a default for the write-cache option of block guest devices (as in -device). The modes correspond to the following settings:

       (Table removed see link)

        The default mode is cache=writeback.
    aio=aio

        aio is “threads”, “native”, or “io_uring” and selects between pthread based disk I/O, native Linux AIO, or Linux io_uring API.
    format=format

        Specify which disk format will be used rather than detecting the format. Can be used to specify format=raw to avoid interpreting an untrusted format header.
    werror=action,rerror=action

        Specify which action to take on write and read errors. Valid actions are: “ignore” (ignore the error and try to continue), “stop” (pause QEMU), “report” (report the error to the guest), “enospc” (pause QEMU only if the host disk is full; report the error to the guest otherwise). The default setting is werror=enospc and rerror=report.
    copy-on-read=copy-on-read

        copy-on-read is “on” or “off” and enables whether to copy read backing file sectors into the image file.
    bps=b,bps_rd=r,bps_wr=w

        Specify bandwidth throttling limits in bytes per second, either for all request types or for reads or writes only. Small values can lead to timeouts or hangs inside the guest. A safe minimum for disks is 2 MB/s.
    bps_max=bm,bps_rd_max=rm,bps_wr_max=wm

        Specify bursts in bytes per second, either for all request types or for reads or writes only. Bursts allow the guest I/O to spike above the limit temporarily.
    iops=i,iops_rd=r,iops_wr=w

        Specify request rate limits in requests per second, either for all request types or for reads or writes only.
    iops_max=bm,iops_rd_max=rm,iops_wr_max=wm

        Specify bursts in requests per second, either for all request types or for reads or writes only. Bursts allow the guest I/O to spike above the limit temporarily.
    iops_size=is

        Let every is bytes of a request count as a new request for iops throttling purposes. Use this option to prevent guests from circumventing iops limits by sending fewer but larger requests.
    group=g

        Join a throttling quota group with given name g. All drives that are members of the same group are accounted for together. Use this option to prevent guests from circumventing throttling limits by using many small disks instead of a single larger disk.

    By default, the cache.writeback=on mode is used. It will report data writes as completed as soon as the data is present in the host page cache. This is safe as long as your guest OS makes sure to correctly flush disk caches where needed. If your guest OS does not handle volatile disk write caches correctly and your host crashes or loses power, then the guest may experience data corruption.

    For such guests, you should consider using cache.writeback=off. This means that the host page cache will be used to read and write data, but write notification will be sent to the guest only after QEMU has made sure to flush each write to the disk. Be aware that this has a major impact on performance.

    When using the -snapshot option, unsafe caching is always used.

    Copy-on-read avoids accessing the same backing file sectors repeatedly and is useful when the backing file is over a slow network. By default copy-on-read is off.

    Instead of -cdrom you can use:

    qemu-system-x86_64 -drive file=file,index=2,media=cdrom

    Instead of -hda, -hdb, -hdc, -hdd, you can use:

    qemu-system-x86_64 -drive file=file,index=0,media=disk
    qemu-system-x86_64 -drive file=file,index=1,media=disk
    qemu-system-x86_64 -drive file=file,index=2,media=disk
    qemu-system-x86_64 -drive file=file,index=3,media=disk

    You can open an image using pre-opened file descriptors from an fd set:

    qemu-system-x86_64 \
     -add-fd fd=3,set=2,opaque="rdwr:/path/to/file" \
     -add-fd fd=4,set=2,opaque="rdonly:/path/to/file" \
     -drive file=/dev/fdset/2,index=0,media=disk

    You can connect a CDROM to the slave of ide0:

    qemu-system-x86_64 -drive file=file,if=ide,index=1,media=cdrom

    If you don’t specify the “file=” argument, you define an empty drive:

    qemu-system-x86_64 -drive if=ide,index=1,media=cdrom

    Instead of -fda, -fdb, you can use:

    qemu-system-x86_64 -drive file=file,index=0,if=floppy
    qemu-system-x86_64 -drive file=file,index=1,if=floppy

    By default, interface is “ide” and index is automatically incremented:

    qemu-system-x86_64 -drive file=a -drive file=b

    is interpreted like:

    qemu-system-x86_64 -hda a -hdb b
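Putting a few of the documented options together, a more explicit variant of the args line used earlier in this thread could look like the following (a sketch only; the format, cache, aio and error-policy values are assumptions, not something tested in this thread):

Code:
args: -drive file=/var/lib/vz/images/103/vm-103-disk-0.qcow2,if=none,id=NVME1,format=qcow2,cache=none,aio=io_uring,werror=report,rerror=report -device nvme,drive=NVME1,serial=nvme-1,bootindex=1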
 
Note:
To figure out the syntax of -device, you will need to run the following command, as I could not find any other documentation for it:

Code:
qemu-system-x86_64 -device help

Example:

root@proxmox:~# qemu-system-x86_64 -device help | grep -i nvme
name "nvme", bus PCI, desc "Non-Volatile Memory Express"
name "nvme-ns", bus nvme-bus, desc "Virtual NVMe namespace"
name "nvme-subsys", desc "Virtual NVMe subsystem"

(note: the nvme* entries are listed under "Storage devices" in the full output)
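The same built-in help also lists the properties a given device accepts, which is how you can discover options such as serial= and (on recent QEMU versions) bootindex= without any external documentation:

Code:
qemu-system-x86_64 -device nvme,help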