panic on boot trying to mount iscsi volume

sbs

Member
Mar 30, 2021
Hello,

I am new to Proxmox, so I have probably done things in the wrong order.
I added an iSCSI volume to Proxmox, created an LVM volume on it with an ext4 filesystem, and I can mount it fine.
But then I added it to fstab, so that it gets mounted at boot and the VMs using that filesystem can be started as well.

This failed spectacularly with a panic at boot, when the VMs try to mount things that most likely have not been started/initialized yet.
I get error messages like:
Mar 26 18:10:40 srvr-vm01 pve-guests[1150]: volume 'LVM-DS-FS:104/vm-104-disk-0.raw' does not exist
LVM then seems to kick in and activate the iSCSI volume afterwards.
(I deactivated the fstab entry, and now the system boots fine and the iSCSI volume works fine.)

How am I supposed to get the iSCSI volume started and mounted before the VMs try to start?
 
The recommended way is to define an iSCSI storage in PVE first and then an LVM storage on top of it. Adding an entry for it to the host's fstab is not necessary.
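For example, the layering could be set up on the command line roughly like this (a sketch only; the storage IDs, portal, target IQN and LUN path are placeholders, not values from this thread):
Code:
# hedged sketch: register the iSCSI storage first, then an LVM storage on top of it
pvesm add iscsi my-iscsi --portal 192.168.1.50 --target iqn.2000-01.com.example:target-1
# create a PV/VG on the exported LUN (device path is a placeholder)
pvcreate /dev/disk/by-id/scsi-<LUN-id>
vgcreate vg-iscsi /dev/disk/by-id/scsi-<LUN-id>
# register the VG as an LVM storage in PVE, with the iSCSI storage as its base
pvesm add lvm my-lvm --vgname vg-iscsi --base my-iscsi:0.0.0.scsi-<LUN-id> --content images,rootdir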
For clarification it would be useful to have a look at the storage and VM configuration; when running
Code:
pvereport > pvereport.log
both will be logged into pvereport.log
 
I agree that declaring the volume in PVE and the LVM on top of it works and does not require any fstab entry. However, from what I understood, this only works when using the volume raw for a single VM (thus using the full volume).

I created an ext4 filesystem and mounted it so I can use qcow2 images on this volume. That's why I have been mounting the volume.

Regarding the pve report, should I post it in full here, or just some sections?
 
In other words, a qcow2 file in an LVM volume which sits on an iSCSI device? That is an unusual approach; however, it should be possible. That means the storage is simply of type "Directory" and the respective filesystem has to be mounted accordingly.
Whether it works with fstab depends on whether the iSCSI device is recognized at boot time - obviously it is not, so this has to be checked first. Alternatively, you can run a small script which does the mount after boot.
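If you stay with fstab, a starting point could be an entry along these lines (a sketch only: the device and mount point are the ones shown later in this thread, the options are standard fstab/systemd mount options, and it still depends on the LVM volume being activated in time):
Code:
# hedged sketch of a network-aware fstab entry; _netdev defers the mount until the
# network (and therefore the iSCSI login) is up, nofail keeps the boot going if the
# device is missing, and the timeout avoids a long hang while waiting for it
/dev/LVM-DS/LVM-DS  /mnt/LVM-DS  ext4  _netdev,nofail,x-systemd.device-timeout=30s  0  2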

For this topic, the relevant parts are (commands that collect them are sketched below):
* Storage definition
* VM configuration
* current mount points
* lsblk output
* /dev/disk/by-* directory
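The above can be collected with commands along these lines (a sketch; VMID 104 is taken from the error message in the first post, adjust for other VMs):
Code:
cat /etc/pve/storage.cfg            # storage definitions
cat /etc/pve/qemu-server/104.conf   # VM configuration
findmnt --real                      # current mount points
lsblk --ascii                       # block device tree
ls -l /dev/disk/by-*/               # persistent device names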
 
Ok,

Below is the information.
Let me know if I missed anything.

Regards

Code:
* Storage definition :

==== info about storage ====

# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content backup,vztmpl,iso

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

iscsi: DS-LUN-VM
    portal ds
    target iqn.2000-01.com.synology:DS.Target-1.4bd89287f9
    content images

lvm: LVM-DS
    vgname LVM-DS
    base DS-LUN-VM:0.0.0.scsi-3600140561d96697dad85d49e5d8f09de
    content rootdir,images
    shared 0

dir: LVM-DS-FS
    path /mnt/LVM-DS
    content rootdir,images,backup,vztmpl,iso,snippets
    prune-backups keep-all=1
    shared 0


# pvesm status
Name             Type     Status           Total            Used       Available        %
DS-LUN-VM       iscsi     active               0               0               0    0.00%
LVM-DS            lvm     active      1048571904      1048571904               0  100.00%
LVM-DS-FS         dir     active      1031065752       105449904       873170872   10.23%
local             dir     active        98559220        35843648        57666024   36.37%
local-lvm     lvmthin     active       833396736        23501787       809894948    2.82%

# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
#/dev/LVM-DS/LVM-DS /mnt/LVM-DS ext4


# df --human
Filesystem                   Size  Used Avail Use% Mounted on
udev                          32G     0   32G   0% /dev
tmpfs                        6.3G   18M  6.3G   1% /run
/dev/mapper/pve-root          94G   35G   55G  39% /
tmpfs                         32G   43M   32G   1% /dev/shm
tmpfs                        5.0M     0  5.0M   0% /run/lock
tmpfs                         32G     0   32G   0% /sys/fs/cgroup
/dev/fuse                     30M   20K   30M   1% /etc/pve
/dev/mapper/LVM--DS-LVM--DS  984G  101G  833G  11% /mnt/LVM-DS
tmpfs                        6.3G     0  6.3G   0% /run/user/0


* VM Configuration

==== info about virtual guests ====

# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID       
       100 TestWeeWx            stopped    2048              32.00 0         
       101 VM 101               stopped    2048               0.00 0         
       102 VM 102               stopped    2048               0.00 0         
       103 srvr-dc01            stopped    2048               0.00 0         
       104 srvr-dc01.intest.info running    4096               0.00 19907     

# cat /etc/pve/qemu-server/104.conf
balloon: 1024
boot: order=scsi0
cores: 2
ide2: none,media=cdrom
memory: 4096
name: srvr-dc01.intest.info
net0: e1000=EA:F0:90:BC:07:34,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: LVM-DS-FS:104/vm-104-disk-0.raw
smbios1: uuid=dce6c2b3-c811-47c4-b255-4a2615963d02
sockets: 1
vmgenid: 125518d0-8a47-4cef-b497-2f3bd32ea175


# cat /etc/pve/qemu-server/102.conf
boot: order=ide2;ide0;net0
cores: 1
ide0: LVM-DS-FS:102/vm-102-disk-1.qcow2,size=133167120K
ide2: local:iso/AcronisBackupPC_11.7_50230_en-EU.iso,media=cdrom,size=351424K
memory: 2048
net0: e1000=16:13:A4:16:C1:46,bridge=vmbr0,firewall=1
numa: 0
ostype: wxp
smbios1: uuid=c01549de-fffa-49e5-8c0d-b522f65f3a7e
sockets: 1
vmgenid: 13ed743a-ad53-4063-8b10-a9355712dc8e
vmstatestorage: local-lvm


# cat /etc/pve/qemu-server/100.conf
boot: order=scsi0;ide2;net0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: TestWeeWx
net0: virtio=DA:28:D8:04:28:1A,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=9bebf683-7547-41a1-aa42-669663770ff1
sockets: 1
vmgenid: 27e8f718-b784-4835-90ba-a31a5843c8bd


# cat /etc/pve/qemu-server/103.conf
boot: order=ide2;scsi1;net0
cores: 2
ide2: LVM-DS-FS:iso/turnkey-domain-controller-16.0-buster-amd64.iso,media=cdrom,size=362M
localtime: 0
memory: 2048
name: srvr-dc01
net0: e1000=CA:01:22:16:4A:6E,bridge=vmbr0,firewall=1,link_down=1
net1: e1000=FE:0B:ED:1E:2B:C2,bridge=vmbr0,firewall=1
numa: 0
ostype: wxp
scsi1: LVM-DS-FS:103/vm-103-disk-0.qcow2,size=64G
smbios1: uuid=3cae8da6-4102-4930-932c-0623c9ffebf0
sockets: 1
vmgenid: e2c77a72-0eb5-4916-b66c-e8c02fae60a2


# cat /etc/pve/qemu-server/101.conf
boot: order=ide2;ide0;net0
cores: 2
ide0: local-lvm:vm-101-disk-1,size=32G
ide2: none,media=cdrom
memory: 2048
net0: rtl8139=BA:B1:3D:3C:85:13,bridge=vmbr0,firewall=1
numa: 0
ostype: wxp
smbios1: uuid=ee76500d-816f-449c-881c-19707bd4471c
sockets: 1
unused0: local-lvm:vm-101-disk-0
vcpus: 2
vmgenid: 1cc51c42-78d9-42de-92a6-e78d14a23f28

* lsblk and /dev/disk/by-*

==== info about disks ====

# lsblk --ascii
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    1  14.9G  0 disk
`-sda1                         8:1    1  14.9G  0 part
sdb                            8:16   0  1000G  0 disk
`-LVM--DS-LVM--DS            253:9    0  1000G  0 lvm  /mnt/LVM-DS
nvme0n1                      259:0    0 931.5G  0 disk
|-nvme0n1p1                  259:1    0  1007K  0 part
|-nvme0n1p2                  259:2    0   512M  0 part
`-nvme0n1p3                  259:3    0   931G  0 part
  |-pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  |-pve-root                 253:1    0    96G  0 lvm  /
  |-pve-data_tmeta           253:2    0   8.1G  0 lvm 
  | `-pve-data-tpool         253:4    0 794.8G  0 lvm 
  |   |-pve-data             253:5    0 794.8G  0 lvm 
  |   |-pve-vm--100--disk--0 253:6    0    32G  0 lvm 
  |   |-pve-vm--101--disk--0 253:7    0    32G  0 lvm 
  |   `-pve-vm--101--disk--1 253:8    0    32G  0 lvm 
  `-pve-data_tdata           253:3    0 794.8G  0 lvm 
    `-pve-data-tpool         253:4    0 794.8G  0 lvm 
      |-pve-data             253:5    0 794.8G  0 lvm 
      |-pve-vm--100--disk--0 253:6    0    32G  0 lvm 
      |-pve-vm--101--disk--0 253:7    0    32G  0 lvm 
      `-pve-vm--101--disk--1 253:8    0    32G  0 lvm 

# ls -l /dev/disk/by-*/
/dev/disk/by-id/:
total 0
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-name-LVM--DS-LVM--DS -> ../../dm-9
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root 10 Mar 30 16:37 dm-name-pve-vm--100--disk--0 -> ../../dm-6
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-name-pve-vm--101--disk--0 -> ../../dm-7
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-name-pve-vm--101--disk--1 -> ../../dm-8
lrwxrwxrwx 1 root root 10 Mar 30 16:37 dm-uuid-LVM-02ZbVOcyxlumINQq6GmgxwbHfk9Cg4QvAiYTHy3wHFY1OfJjmh5MUNhj2aV5q4YL -> ../../dm-6
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-uuid-LVM-02ZbVOcyxlumINQq6GmgxwbHfk9Cg4QvKZ3QAP6C9gXr2RWbKy4vmT15KoN3QTeW -> ../../dm-1
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-uuid-LVM-02ZbVOcyxlumINQq6GmgxwbHfk9Cg4QvNzkii3vSHPkl3JhvaXK23MY6J0S8Z2RL -> ../../dm-0
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-uuid-LVM-02ZbVOcyxlumINQq6GmgxwbHfk9Cg4QvYfxMf3bdq0ts1QBBOSXowcMfIYPHdSHD -> ../../dm-8
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-uuid-LVM-02ZbVOcyxlumINQq6GmgxwbHfk9Cg4Qvb9YxPi17inoWSRkdpYEyiL4yW2RNVz3e -> ../../dm-7
lrwxrwxrwx 1 root root 10 Mar 30 13:18 dm-uuid-LVM-CuBd01Wynd7PeXFFyBcxjlGfRNA6er2JXobrYZaqtqgewiguVyvw0yvDogqIISwU -> ../../dm-9
lrwxrwxrwx 1 root root 15 Mar 30 13:18 lvm-pv-uuid-1mrV66-t22i-SwGW-P2dx-K3YS-GIuZ-Ehxfh8 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root  9 Mar 30 13:18 lvm-pv-uuid-mz9RDg-f3eI-vUkH-02dS-m0IE-ajLd-0LKLu5 -> ../../sdb
lrwxrwxrwx 1 root root 13 Mar 30 13:18 nvme-Samsung_SSD_970_EVO_1TB_S467NX0K801270W -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 30 13:18 nvme-Samsung_SSD_970_EVO_1TB_S467NX0K801270W-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Mar 30 13:18 nvme-Samsung_SSD_970_EVO_1TB_S467NX0K801270W-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Mar 30 13:18 nvme-Samsung_SSD_970_EVO_1TB_S467NX0K801270W-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 13 Mar 30 13:18 nvme-eui.0025385881b0e9aa -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 30 13:18 nvme-eui.0025385881b0e9aa-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Mar 30 13:18 nvme-eui.0025385881b0e9aa-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Mar 30 13:18 nvme-eui.0025385881b0e9aa-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root  9 Mar 30 13:18 scsi-3600140561d96697dad85d49e5d8f09de -> ../../sdb
lrwxrwxrwx 1 root root  9 Mar 30 13:18 usb-General_USB_Flash_Disk_0501370000001920-0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 30 13:18 usb-General_USB_Flash_Disk_0501370000001920-0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Mar 30 13:18 wwn-0x600140561d96697dad85d49e5d8f09de -> ../../sdb
 
