Disk not available when VM is in off state

Prashant Patil

New Member
Feb 20, 2025
How do we access an LVM disk while the VM is in a powered-off state? We do not see the device node for the LVM volume; see the output below:

root@be-proxmox1:/dev/pve# pvesm path lvm1:vm-105-disk-0
/dev/lvm1/vm-105-disk-0
root@be-proxmox1:/dev/pve#
root@be-proxmox1:/dev/pve# ls -la /dev/lvm1/vm-105-disk-0
ls: cannot access '/dev/lvm1/vm-105-disk-0': No such file or directory
root@be-proxmox1:/dev/pve#

We want to access/read '/dev/lvm1/vm-105-disk-0' while the VM is powered off.

Kindly assist.

Best regards,
Prashant
 
Based on the information you gave, this VM uses a logical volume (vm-105-disk-0) in a volume group, and that LV is normally accessed from inside the VM (105). You can use the vgs and lvs commands to show all volume groups and logical volumes on the PVE host. One thing you should know: every logical volume is a raw device and needs to be formatted (with mkfs) before it can be mounted for read/write. I am not asking or recommending that you format vm-105-disk-0, because I believe it has already been formatted by your VM. But if you want to access vm-105-disk-0, you have to mount it first, and before you mount it you should work out which filesystem type is inside vm-105-disk-0 and whether it is supported by Proxmox VE, otherwise you may corrupt it! If you really need to mount it, I recommend you clone this VM to another VMID (like 1051) and then do the mount on vm-1051-disk-0 instead.
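For reference, a minimal sketch of how such an LV could be inspected from the host, assuming the volume first needs activating (PVE activates plain-LVM volumes on demand, so an LV of a stopped VM may have no /dev node) and using /mnt/inspect as a placeholder mountpoint:

Code:
# Activate the LV so its device node appears under /dev/lvm1/
lvchange -ay lvm1/vm-105-disk-0

# Identify what is inside before touching it (filesystem? partition table?)
blkid /dev/lvm1/vm-105-disk-0
lsblk /dev/lvm1/vm-105-disk-0

# Mount read-only, if it holds a bare filesystem the host supports
mkdir -p /mnt/inspect
mount -o ro /dev/lvm1/vm-105-disk-0 /mnt/inspect

# ... inspect the data ...

# Clean up: unmount and deactivate again before starting the VM
umount /mnt/inspect
lvchange -an lvm1/vm-105-disk-0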
 
Hi @Prashant Patil ,

The LVM disks don't usually disappear when the VM is offline. Please examine/show your:
qm config 105
cat /etc/pve/storage.cfg
pvesm status
lsblk
pvs
vgs
lvs

Do use CODE tags when pasting command-line output.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Sending the command results as per your request.

Code:
root@be-proxmox1:/var/log/ppatil# qm config 105
bios: ovmf
boot: order=ide0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide0: Riyaj-ext4:105/vm-105-disk-0.qcow2,size=32G
ide1: local-lvm:vm-105-disk-0,size=2G
ide2: none,media=cdrom
ide3: local-lvm:vm-105-disk-1,size=2G
machine: pc-q35-9.0
memory: 4096
meta: creation-qemu=9.0.2,ctime=1740733147
name: beprox-vm3
net0: e1000=BC:24:11:09:97:D9,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
sata0: lvm1:vm-105-disk-0,size=1G
scsihw: virtio-scsi-single
smbios1: uuid=accfef86-edf7-423d-91ad-c4cdf1dd38a7
sockets: 1
unused0: zfs1:vm-105-disk-1
unused1: zfs1:vm-105-disk-0
unused3: zfs1:vm-105-disk-2
unused4: Riyaj-ext4:105/vm-105-disk-1.raw
vmgenid: 666e49e9-9851-4639-b3b8-f60be66ded5b
root@be-proxmox1:/var/log/ppatil#


Code:
root@be-proxmox1:/var/log/ppatil# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: Riyaj-ext4
        path /mnt/pve/Riyaj-ext4
        content backup,images,rootdir,iso,snippets,vztmpl
        is_mountpoint 1
        nodes be-proxmox1

dir: Riyaj1-ext4
        path /mnt/pve/Riyaj1-ext4
        content snippets,vztmpl,iso,rootdir,images,backup
        is_mountpoint 1
        nodes be-proxmox2

lvm: lvm1
        vgname lvm1
        content images,rootdir
        nodes be-proxmox1
        shared 0

zfspool: zfs1
        pool zfs1
        content rootdir,images
        mountpoint /zfs1
        nodes be-proxmox1

root@be-proxmox1:/var/log/ppatil#

Code:
root@be-proxmox1:/var/log/ppatil# pvesm status
Name               Type     Status           Total            Used       Available        %
Riyaj-ext4          dir     active      3458258332       209200024      3073313748    6.05%
Riyaj1-ext4         dir   disabled               0               0               0      N/A
local               dir     active        67155344        24898668        38799652   37.08%
local-lvm       lvmthin     active       136544256       127245592         9298663   93.19%
lvm1                lvm     active      1562812416         2097152      1560715264    0.13%
zfs1            zfspool     active      1511784448         7461596      1504322852    0.49%
root@be-proxmox1:/var/log/ppatil#

Code:
root@be-proxmox1:/var/log/ppatil# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.3T  0 disk
└─sda1                         8:1    0   3.3T  0 part /mnt/pve/Riyaj-ext4
sdb                            8:16   0 223.5G  0 disk
├─sdb1                         8:17   0  1007K  0 part
├─sdb2                         8:18   0     1G  0 part /boot/efi
└─sdb3                         8:19   0 222.5G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  65.6G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   1.3G  0 lvm
  │ └─pve-data-tpool         252:4    0 130.2G  0 lvm
  │   ├─pve-data             252:5    0 130.2G  1 lvm
  │   ├─pve-vm--100--disk--0 252:6    0     4M  0 lvm
  │   ├─pve-vm--100--disk--1 252:7    0    50G  0 lvm
  │   ├─pve-vm--100--disk--2 252:8    0     4M  0 lvm
  │   ├─pve-vm--101--disk--0 252:9    0     4M  0 lvm
  │   ├─pve-vm--101--disk--1 252:10   0    50G  0 lvm
  │   ├─pve-vm--101--disk--2 252:11   0     4M  0 lvm
  │   ├─pve-vm--104--disk--0 252:12   0     4M  0 lvm
  │   ├─pve-vm--102--disk--0 252:13   0     4M  0 lvm
  │   ├─pve-vm--102--disk--1 252:14   0    50G  0 lvm
  │   ├─pve-vm--102--disk--2 252:15   0     4M  0 lvm
  │   ├─pve-vm--104--disk--1 252:16   0    50G  0 lvm
  │   ├─pve-vm--104--disk--2 252:17   0     4M  0 lvm
  │   ├─pve-vm--988--disk--0 252:18   0     4M  0 lvm
  │   ├─pve-vm--101--disk--3 252:20   0     1G  0 lvm
  │   ├─pve-vm--105--disk--0 252:21   0     2G  0 lvm
  │   └─pve-vm--105--disk--1 252:22   0     2G  0 lvm
  └─pve-data_tdata           252:3    0 130.2G  0 lvm
    └─pve-data-tpool         252:4    0 130.2G  0 lvm
      ├─pve-data             252:5    0 130.2G  1 lvm
      ├─pve-vm--100--disk--0 252:6    0     4M  0 lvm
      ├─pve-vm--100--disk--1 252:7    0    50G  0 lvm
      ├─pve-vm--100--disk--2 252:8    0     4M  0 lvm
      ├─pve-vm--101--disk--0 252:9    0     4M  0 lvm
      ├─pve-vm--101--disk--1 252:10   0    50G  0 lvm
      ├─pve-vm--101--disk--2 252:11   0     4M  0 lvm
      ├─pve-vm--104--disk--0 252:12   0     4M  0 lvm
      ├─pve-vm--102--disk--0 252:13   0     4M  0 lvm
      ├─pve-vm--102--disk--1 252:14   0    50G  0 lvm
      ├─pve-vm--102--disk--2 252:15   0     4M  0 lvm
      ├─pve-vm--104--disk--1 252:16   0    50G  0 lvm
      ├─pve-vm--104--disk--2 252:17   0     4M  0 lvm
      ├─pve-vm--988--disk--0 252:18   0     4M  0 lvm
      ├─pve-vm--101--disk--3 252:20   0     1G  0 lvm
      ├─pve-vm--105--disk--0 252:21   0     2G  0 lvm
      └─pve-vm--105--disk--1 252:22   0     2G  0 lvm
nbd0                          43:0    0     2G  0 disk
├─nbd0p1                      43:1    0    16M  0 part
└─nbd0p2                      43:2    0     2G  0 part
nbd1                          43:16   0     0B  0 disk
nbd2                          43:32   0     0B  0 disk
nbd3                          43:48   0     0B  0 disk
nbd4                          43:64   0     0B  0 disk
nbd5                          43:80   0     0B  0 disk
nbd6                          43:96   0     0B  0 disk
nbd7                          43:112  0     0B  0 disk
nbd8                          43:128  0     0B  0 disk
nbd9                          43:144  0     0B  0 disk
nbd10                         43:160  0     0B  0 disk
nbd11                         43:176  0     0B  0 disk
nbd12                         43:192  0     0B  0 disk
nbd13                         43:208  0     0B  0 disk
nbd14                         43:224  0     0B  0 disk
nbd15                         43:240  0     0B  0 disk
zd0                          230:0    0     2G  0 disk
└─zd0p1                      230:1    0     2G  0 part
zd16                         230:16   0     3G  0 disk
zd32                         230:32   0     2G  0 disk
└─zd32p1                     230:33   0     2G  0 part
nvme1n1                      259:2    0   1.5T  0 disk
└─lvm1-vm--103--disk--0      252:19   0     1G  0 lvm
nvme0n1                      259:3    0   1.5T  0 disk
├─nvme0n1p1                  259:4    0   1.5T  0 part
└─nvme0n1p9                  259:5    0     8M  0 part
root@be-proxmox1:/var/log/ppatil#

Code:
root@be-proxmox1:/var/log/ppatil# pvs
  PV           VG   Fmt  Attr PSize   PFree
  /dev/nvme1n1 lvm1 lvm2 a--   <1.46t  1.45t
  /dev/sdb3    pve  lvm2 a--  222.50g 16.00g
root@be-proxmox1:/var/log/ppatil#

Code:
root@be-proxmox1:/var/log/ppatil# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  lvm1   1   2   0 wz--n-  <1.46t  1.45t
  pve    1  46   0 wz--n- 222.50g 16.00g
root@be-proxmox1:/var/log/ppatil#

Code:
root@be-proxmox1:/var/log/ppatil# lvs
  LV                                  VG   Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  vm-103-disk-0                       lvm1 -wi-ao----    1.00g
  vm-105-disk-0                       lvm1 -wi-------    1.00g
  data                                pve  twi-aotz-- <130.22g                    93.19  5.15
  root                                pve  -wi-ao----   65.62g
  snap_vm-100-disk-0_Falcon_Base      pve  Vri---tz-k    4.00m data
  snap_vm-100-disk-0_Post_SQL_Install pve  Vri---tz-k    4.00m data
  snap_vm-100-disk-0_Snap1            pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_Snap2            pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_Snap3            pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_Snap33           pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_TestSnap52       pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_TestSnap88       pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_Test_snap2       pve  Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-1_Falcon_Base      pve  Vri---tz-k   50.00g data
  snap_vm-100-disk-1_Post_SQL_Install pve  Vri---tz-k   50.00g data
  snap_vm-100-disk-1_Snap1            pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-1_Snap2            pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-1_Snap3            pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-1_Snap33           pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-1_TestSnap52       pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-1_TestSnap88       pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-1_Test_snap2       pve  Vri---tz-k   50.00g data vm-100-disk-1
  snap_vm-100-disk-2_Falcon_Base      pve  Vri---tz-k    4.00m data
  snap_vm-100-disk-2_Post_SQL_Install pve  Vri---tz-k    4.00m data
  snap_vm-100-disk-2_Snap1            pve  Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_Snap2            pve  Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_Snap3            pve  Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_Snap33           pve  Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_TestSnap52       pve  Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_TestSnap88       pve  Vri---tz-k    4.00m data vm-100-disk-2
  snap_vm-100-disk-2_Test_snap2       pve  Vri---tz-k    4.00m data vm-100-disk-2
  swap                                pve  -wi-ao----    8.00g
  vm-100-disk-0                       pve  Vwi-aotz--    4.00m data               14.06
  vm-100-disk-1                       pve  Vwi-aotz--   50.00g data               49.61
  vm-100-disk-2                       pve  Vwi-aotz--    4.00m data               1.56
  vm-101-disk-0                       pve  Vwi-aotz--    4.00m data               100.00
  vm-101-disk-1                       pve  Vwi-aotz--   50.00g data               50.35
  vm-101-disk-2                       pve  Vwi-aotz--    4.00m data               1.56
  vm-101-disk-3                       pve  Vwi-aotz--    1.00g data               1.06
  vm-102-disk-0                       pve  Vwi-aotz--    4.00m data               14.06
  vm-102-disk-1                       pve  Vwi-aotz--   50.00g data               49.64
  vm-102-disk-2                       pve  Vwi-aotz--    4.00m data               1.56
  vm-104-disk-0                       pve  Vwi-aotz--    4.00m data               14.06
  vm-104-disk-1                       pve  Vwi-aotz--   50.00g data               52.63
  vm-104-disk-2                       pve  Vwi-aotz--    4.00m data               1.56
  vm-105-disk-0                       pve  Vwi-a-tz--    2.00g data               55.91
  vm-105-disk-1                       pve  Vwi-a-tz--    2.00g data               100.00
  vm-988-disk-0                       pve  Vwi-aotz--    4.00m data               1.56
root@be-proxmox1:/var/log/ppatil#
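Worth noting in the lvs output above: vm-105-disk-0 in VG lvm1 has the attribute string -wi-------, and its fifth character (the activation state) is blank rather than 'a'. The LV is simply deactivated, which is why no /dev/lvm1/vm-105-disk-0 node exists. A quick way to confirm this, using standard lvs report fields:

Code:
# Show the activation state explicitly (lv_active is 'active' or empty)
lvs -o lv_name,lv_attr,lv_active lvm1/vm-105-disk-0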
 
I assume this is in the context of creating VM backups? You just found one of the reasons why PVE starts currently stopped VMs in a suspended state in order to take the backup: all the volume activation and access logic is already handled that way. For some storage types, accessing volumes without the QEMU block layer can be quite tricky (e.g. for Ceph there is a kernel client, but it supports different features than the userspace one).
 
Yes, this is in the context of backing up offline VMs. So how do we put VMs into the suspended state? Once they are in this state, how do we get a consistent copy of the VM?

Regards
Prashant
 
It feels a bit like we are discussing this in two places now; maybe we can focus on the devel list?

Anyway, the "start paused" mode is not exposed at the moment, since it is only used by vzdump internally. You can get the same effect by setting the "freeze" option in the VM config. In both cases, the VM process is started, but with "-S" passed to QEMU to pause (or rather, never start) the execution of the guest. You can still interact with the VM over QMP, and once you have set up the Copy-before-Write nodes you can continue booting the VM (this is what we do in a "stop" mode backup if the VM was previously running and was shut down for the backup) or leave it in the frozen state until you stop it again when the backup is done.

The way we currently do the backup is quite heavily tied to QEMU internals, including some custom patches we carry on top of upstream. The backup provider API @fiona is working on serves exactly the purpose of exposing this in a safe and structured fashion for external backup providers, so that you don't need to worry about ensuring consistency or managing QEMU internals, but only about implementing the glue code between raw (partial) image data on the PVE side and whatever you have on the other end for storing the backed-up data.
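For reference, a minimal sketch of the "freeze" workflow described above, assuming VMID 105 and root on the node (illustrative only, not the exact vzdump flow):

Code:
qm set 105 --freeze 1        # QEMU will be started with -S: process runs, guest CPUs never start
qm start 105                 # launches the QEMU process with the guest frozen
qm status 105                # reports the VM as running even though the guest is not executing
# ... interact over QMP / take the consistent copy here ...
qm monitor 105               # monitor access; 'cont' would resume guest execution
qm stop 105                  # tear down the frozen VM when done
qm set 105 --delete freeze   # remove the option again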
 
Yes, this is in the context of backing up offline VMs. So how do we put VMs into the suspended state? Once they are in this state, how do we get a consistent copy of the VM?
I did not realize that this pursuit was in the context of a backup/recovery flow.

As @fabian mentioned, not all storage types have disk images available on all nodes at all times. In fact, using local-lvm as your baseline is not ideal. Presumably, you are targeting enterprise clientele with your backup product. Those customers are much more likely to use some sort of shared storage solution.
The choices here are: a) closely follow Backup API development and use this abstraction; b) develop an optimized path for the most popular storage solutions; c) develop a custom solution; d) some combination of a–c.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
You can get the same effect by setting the "freeze" option in the VM config. In both cases, the VM process is started, but with "-S" passed to QEMU to pause (or rather, never start) the execution of the guest. [...]
I did "POST https://<ip-address>:8006/api2/json/nodes/be-proxmox1/qemu/105/config?freeze=1", this was successful but I do not see qmp instance in '/var/run/qemu-server/' directory. I can see qmp instance when freeze is set to 1 and then when we start the VM. So are these valid steps?
 
Yes, QMP is only available while a QEMU process is running.
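To illustrate (assuming VMID 105; qemu-server keeps the QMP socket at /var/run/qemu-server/<vmid>.qmp):

Code:
# Setting freeze=1 alone starts no process, so no socket exists yet
ls /var/run/qemu-server/105.qmp    # No such file or directory

# Starting the (frozen) VM creates the QEMU process, and the socket appears
qm start 105
ls /var/run/qemu-server/105.qmp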