How do you map a Proxmox virtual disk to a VM disk?

genfoch01

I have created a VM with 5 disks; 4 of them are the same size.

On the Proxmox VM Hardware tab they show up like this:
[screenshot: VM Hardware disk list]
The details look like this (highlighted disk):
[screenshot: disk details]

On my VM, running ls -l /dev/disk/by-id/ gives me this:

Code:
root@openmediavault:~# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root  9 Apr 26 16:38 ata-QEMU_DVD-ROM_QM00003 -> ../../sr0
lrwxrwxrwx 1 root root  9 Apr 26 16:38 ata-QEMU_HARDDISK_QM00005 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00005-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00005-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00005-part5 -> ../../sda5
lrwxrwxrwx 1 root root  9 Apr 26 16:38 ata-QEMU_HARDDISK_QM00007 -> ../../sdb
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00007-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00007-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 Apr 26 16:38 ata-QEMU_HARDDISK_QM00009 -> ../../sdc
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00009-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00009-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Apr 26 16:38 ata-QEMU_HARDDISK_QM00011 -> ../../sdd
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00011-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00011-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 Apr 26 16:38 ata-QEMU_HARDDISK_QM00013 -> ../../sde
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00013-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Apr 26 16:38 ata-QEMU_HARDDISK_QM00013-part9 -> ../../sde9

But I can't see a way to map ata-QEMU_HARDDISK_QM00005 -> ../../sda back to a specific hard disk as defined in Proxmox.
How can I do this?

thanks
-GF
 
You mean you want to have them named sataX instead of sdX? That's just the OS naming convention; on most modern Linux systems it can be changed with udev rules.
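For example, a udev rule along these lines gives a disk a stable, recognizable name based on its serial (just a sketch; the serial and symlink name here are illustrations):
Code:
# /etc/udev/rules.d/99-vm-disks.rules (hypothetical example)
# create /dev/vmdisk-QM00007 pointing at the disk whose serial is QM00007
KERNEL=="sd?", ENV{ID_SERIAL_SHORT}=="QM00007", SYMLINK+="vmdisk-QM00007"

Reload with udevadm control --reload followed by udevadm trigger.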
 
I believe the OP wants to figure out whether sda is sata0 or sata1.
In general, they should be in order.

@genfoch01 if you have the QEMU guest agent installed you can run: qm guest cmd 3000 get-fsinfo (where 3000 is the VMID)


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
You are correct: I want to map the drives as the VM sees them to the drives Proxmox shows as assigned to the VM. For example, while testing RAID I ran
Code:
echo offline > /sys/block/sdb/device/state   # take sdb offline
echo 1 > /sys/block/sdb/device/delete        # remove sdb from the SCSI subsystem

so now I have
Code:
root@openmediavault:~# zpool status diskpool
  pool: diskpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
config:

        NAME                           STATE     READ WRITE CKSUM
        diskpool                       DEGRADED     0     0     0
          raidz2-0                     DEGRADED     0     0     0
            ata-QEMU_HARDDISK_QM00007  OFFLINE      0     0     0
            ata-QEMU_HARDDISK_QM00009  ONLINE       0     0     0
            ata-QEMU_HARDDISK_QM00011  ONLINE       0     0     0
            ata-QEMU_HARDDISK_QM00013  ONLINE       0     0     0

But which drive (from Proxmox) is ata-QEMU_HARDDISK_QM00007?

I don't have the QEMU agent installed, as I just used the OpenMediaVault ISO, though it is built on Debian (Bookworm), so maybe I can just install it.
 
Why do you use SATA? Also, why put ZFS on top of a QCOW2 disk? Anyway, the serial of SCSI disks can be matched easily to the node. Note the number(s).
Bash:
# lsblk -do+FSTYPE,LABEL,VENDOR,MODEL,SERIAL
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS FSTYPE LABEL VENDOR   MODEL         SERIAL
sda    8:0    0   12G  0 disk                          ATA      QEMU HARDDISK QM00005
sdb    8:16   0   20G  0 disk                          QEMU     QEMU HARDDISK drive-scsi0

I didn't experiment too much but it looks like the SATA disks can be checked like this.
Bash:
# ls -l /dev/disk/by-path/*ata* | grep "sd"
lrwxrwxrwx 1 root root 9 Apr 27 22:44 /dev/disk/by-path/pci-0000:00:07.0-ata-1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 27 22:44 /dev/disk/by-path/pci-0000:00:07.0-ata-1.0 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 27 22:44 /dev/disk/by-path/pci-0000:00:07.0-ata-2 -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 27 22:44 /dev/disk/by-path/pci-0000:00:07.0-ata-2.0 -> ../../sdb

udevadm info /dev/sdX can also be helpful here.
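For instance (a sketch), to show just the identification properties udev collected for a disk:
Bash:
# print only the serial- and path-related udev properties for sda
udevadm info --query=property --name=/dev/sda | grep -E 'ID_SERIAL|ID_PATH'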


Note that qm guest cmd XXX get-fsinfo only works for mounted file systems, as the name suggests. Its output looks like this:
Bash:
# qm guest cmd 101 get-fsinfo
[
  {
    "disk": [
      {
        "bus": 0,
        "bus-type": "sata",
        "dev": "/dev/sdb1",
        "pci-controller": {
          "bus": 0,
          "domain": 0,
          "function": 0,
          "slot": 7
        },
        "serial": "QEMU_HARDDISK_QM00005",
        "target": 0,
        "unit": 0
      }
    ],
    "mountpoint": "/mnt/sata",
    "name": "sdb1",
    "total-bytes": 12913811456,
    "type": "ext4",
    "used-bytes": 24576
  },
  {
    "disk": [
      {
        "bus": 0,
        "bus-type": "scsi",
        "dev": "/dev/sda1",
        "pci-controller": {
          "bus": 1,
          "domain": 0,
          "function": 0,
          "slot": 1
        },
        "serial": "0QEMU_QEMU_HARDDISK_drive-scsi0",
        "target": 0,
        "unit": 0
      }
    ],
    "mountpoint": "/",
    "name": "sda1",
    "total-bytes": 18908688384,
    "type": "ext4",
    "used-bytes": 3598905344
  }
]
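Since the output is plain JSON, if jq happens to be installed on the host the interesting fields can be pulled out in one line (just a sketch):
Bash:
# hypothetical helper: device, bus type and serial for each mounted filesystem
qm guest cmd 101 get-fsinfo | jq -r '.[].disk[] | [.dev, ."bus-type", .serial] | @tsv'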
 
Why do you use SATA? Also, why put ZFS on top of a QCOW2 disk? Anyway, the serial of SCSI disks can be matched easily to the node. Note the number(s).

I am a bit new to Proxmox (I was using VirtualBox), so I have a learning curve and don't yet understand how/why to choose SCSI over SATA.

I am using ZFS because this is an OpenMediaVault test VM, and I wanted to set up ZFS on OMV the same way it would be set up on physical hardware, as that is where it will ultimately reside if I find that OMV can do everything I need.

Thanks for taking the time to help me out. I'll try out your suggestions and reply back with results
-GF
 
OK, I can match /dev/sdX with the ATA path and ata-QEMU_HARDDISK_XXX using udevadm:

Code:
root@openmediavault:~# udevadm info /dev/sdc
P: /devices/pci0000:00/0000:00:07.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc
M: sdc
U: block
T: disk
D: b 8:32
N: sdc
L: 0
S: disk/by-path/pci-0000:00:07.0-ata-3
S: disk/by-diskseq/13
S: disk/by-id/ata-QEMU_HARDDISK_QM00009
S: disk/by-path/pci-0000:00:07.0-ata-3.0

But that information is internal to the VM; I want to match it to the Proxmox disk:
[screenshot: Proxmox disk entry]
But I don't see a way to do that. It really seems like there should be a way to figure out what disk the VM sees and match it with the disk Proxmox assigned to it. I see I can do this manually by documenting each disk as I add it, but that also means I'd need to boot the system once for each drive (OK, not if they are hot-pluggable, but my drives don't seem to be).

is there something obvious I'm missing ?
thanks again,
GF
 
For disk types that support it, you can set properties like model, serial and wwn to make identification inside the guest easier.
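For example (a sketch; the VMID, storage path and serial string are taken from elsewhere in this thread or invented, adjust to your setup), a serial can be set on an existing disk from the host:
Code:
qm set 110 --sata1 hdd_1TB:110/vm-110-disk-0.qcow2,serial=POOL-DISK-1

The guest then shows that serial in lsblk -o NAME,SERIAL and under /dev/disk/by-id/, which makes the mapping unambiguous. Note that the disk spec has to repeat the existing volume; only the serial=... part is new.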
 
Here is a procedure one can use:

a) Enter QM monitor mode: qm monitor 3000
b) List block devices
Code:
info block

drive-sata0 (#block781): /dev/bbpve/bb-iscsi:vm-3000-disk-2 (raw)
    Attached to:      sata0
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata1 (#block916): /dev/bbpve/bb-iscsi:vm-3000-disk-3 (raw)
    Attached to:      sata1
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata2 (#block1195): /dev/bbpve/bb-iscsi:vm-3000-disk-4 (raw)
    Attached to:      sata2
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata3 (#block1350): /dev/bbpve/bb-iscsi:vm-3000-disk-5 (raw)
    Attached to:      sata3
    Cache mode:       writeback, direct

c) Correlate to the UI or CLI output
Code:
qm config 3000 | grep sata
sata0: bb-iscsi:vm-3000-disk-2,size=32G
sata1: bb-iscsi:vm-3000-disk-3,size=32G
sata2: bb-iscsi:vm-3000-disk-4,size=32G
sata3: bb-iscsi:vm-3000-disk-5,size=32G



 
I am a bit new to Proxmox (I was using VirtualBox), so I have a learning curve and don't yet understand how/why to choose SCSI over SATA.
It is faster.

Btw, with SCSI you could run lsblk -o PATH,SERIAL,HCTL in the VM, and the last digit of the HCTL output should correspond to the scsiX number in the VM config, so you can get the link between serial and port in a single command.
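Hypothetical output illustrating that (the serials are invented, and the exact HCTL values depend on the controller layout):
Code:
# lsblk -o PATH,SERIAL,HCTL
PATH       SERIAL       HCTL
/dev/sda   OSDISK       2:0:0:0    <- scsi0
/dev/sdb   DATADISK     2:0:0:1    <- scsi1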
 
Is the QM monitor part of the QEMU agent, and if so, is this the correct way to install it?

Code:
apt-get install qemu-guest-agent
Assuming it is, I'd then need to "enable" it and start it (I think).

-GF
 
Is the QM monitor part of the QEMU agent, and if so, is this the correct way to install it?
It is part of the standard PVE package set. "qm" is a native PVE command and does not depend on the presence of the agent inside the VM.
"monitor" is just an option to qm.



 
In the GUI, click on the VM; then in the side menu there is Monitor.


Code:
Type 'help' for help.
# info block
pflash0 (#block168): /usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd (raw, read-only)
    Attached to:      /machine/system.flash0
    Cache mode:       writeback

drive-efidisk0 (#block311): json:{"driver": "raw", "size": "540672", "file": {"driver": "host_device", "filename": "/dev/rbd-pve/eb14b165-42bb-4c2b-8da5-a17a969bdafb/ceph-data/vm-118-disk-0"}} (raw)
    Attached to:      /machine/system.flash1
    Cache mode:       writeback

drive-virtio0 (#block573): /dev/rbd-pve/eb14b165-42bb-4c2b-8da5-a17a969bdafb/ceph-data/vm-118-disk-2 (raw)
    Attached to:      /machine/peripheral/virtio0/virtio-backend
    Cache mode:       writeback
    Detect zeroes:    unmap
 
OK, I get
Code:
# info block
drive-ide2: [not inserted]
    Attached to:      ide2
    Removable device: not locked, tray closed

drive-sata0 (#block103): /mnt/pve/ssd_1TB/images/110/vm-110-disk-0.qcow2 (qcow2)
    Attached to:      sata0
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata1 (#block324): /mnt/pve/hdd_1TB/images/110/vm-110-disk-0.qcow2 (qcow2)
    Attached to:      sata1
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata2 (#block540): /mnt/pve/hdd_1TB/images/110/vm-110-disk-1.qcow2 (qcow2)
    Attached to:      sata2
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata3 (#block736): /mnt/pve/hdd_1TB/images/110/vm-110-disk-2.qcow2 (qcow2)
    Attached to:      sata3
    Cache mode:       writeback, direct
    Detect zeroes:    on

drive-sata4 (#block966): /mnt/pve/hdd_1TB/images/110/vm-110-disk-3.qcow2 (qcow2)
    Attached to:      sata4
    Cache mode:       writeback, direct
    Detect zeroes:    on


but I'm not sure how to map that to

Code:
root@openmediavault:~# lsblk -S -x hctl -o name,hctl,serial
NAME HCTL       SERIAL
sr0  1:0:0:0    QM00003
sda  2:0:0:0    QM00005
sdb  3:0:0:0    QM00007
sdc  4:0:0:0    QM00009
sdd  5:0:0:0    QM00011
sde  6:0:0:0    QM00013
 
In the VM:
Code:
# print the udev serial for every /dev/sd? disk
for disk in /dev/sd?; do echo "$disk: $(udevadm info --query=all --name=$disk | grep ID_SERIAL=)"; done
/dev/sda: E: ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi0
/dev/sdb: E: ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi2
/dev/sdc: E: ID_SERIAL=QEMU_HARDDISK_QM00013
/dev/sdd: E: ID_SERIAL=QEMU_HARDDISK_QM00015
/dev/sde: E: ID_SERIAL=QEMU_HARDDISK_QM00017
/dev/sdf: E: ID_SERIAL=QEMU_HARDDISK_QM00019

On the host, capture a monitor session with script(1) (its log goes to the file typescript by default):
Code:
script
qm monitor 3000
info qtree
ctrl-d        # leave the monitor
ctrl-d        # end the script session

Code:
root@pve-2:~# grep -B20 QM00019 typescript
                bus: ahci0.4
                  type IDE
                bus: ahci0.3
                  type IDE
                  dev: ide-hd, id "sata3"
                    drive = "drive-sata3"
                    backend_defaults = "auto"
                    logical_block_size = 512 (512 B)
                    physical_block_size = 512 (512 B)
                    min_io_size = 0 (0 B)
                    opt_io_size = 0 (0 B)
                    discard_granularity = 512 (512 B)
                    write-cache = "auto"
                    share-rw = false
                    account-invalid = "auto"
                    account-failed = "auto"
                    rerror = "auto"
                    werror = "auto"
                    ver = "2.5+"
                    wwn = 0 (0x0)
                    serial = "QM00019"

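To avoid reading through the whole qtree dump, the id/serial pairs can be pulled out of the captured file in one go (a sketch against the output format above):
Code:
grep -E 'id "(sata|scsi|virtio)[0-9]+"|serial = ' typescript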

