[SOLVED] PVE cluster, access to local storage . Error 500 volume *** doesn't exist

msidiagnos

Hello,

I had a single PVE node (ver. 6) and recently decided to add a new node (also ver. 6) and set up a PVE cluster. The cluster currently consists of two nodes, and everything was OK.

Let's name them like:

node1
2 disks for PVE (ZFS mirror)
2 disks for VMs (ZFS mirror)
1 disk for ZFS cache

node2
1 disk for PVE (ZFS single)
2 disks for VMs (ZFS mirror)

node1 and node2 each have their own local storage with templates and ISO images that I downloaded onto them. I used them for creating new virtual machines.
After the cluster was set up, when I tried to attach an ISO to a virtual machine's CD-ROM drive, I got the following error:

Screenshot_45.png

Please help me solve this problem. Right now I can't mount ISO images in my existing VMs and can't create new VMs.
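(Side note, in case it helps anyone hitting the same "Error 500": the full error text usually shows up in the PVE daemon logs on the node, roughly like this.)

Code:
# the GUI "Error 500" messages come from the PVE API daemons;
# their journal usually carries the full error text
journalctl -u pvedaemon -u pveproxy --since "1 hour ago"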

storage.cfg listing

Code:
dir: local_node1
        path /var/lib/vz
        content iso,vztmpl
        nodes node2
        shared 0

dir: local_node2
        path /var/lib/vz
        content iso,vztmpl
        nodes node1
        shared 0

dir: local
        disable
        path /var/lib/vz
        content iso,vztmpl
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        nodes node1
        sparse 1

zfspool: some_zfs_pool1
        pool some_zfs_pool1
        content images,rootdir
        nodes node1
        sparse 0

zfspool: some_zfs_pool2
        pool some_zfs_pool2
        content images,rootdir
        nodes node2
        sparse 0

nfs: BACKUP
        export /vm_backup
        path /mnt/pve/BACKUP
        server ***.***.***.***
        content images,rootdir,vztmpl,iso,snippets,backup
        maxfiles 6
        nodes node1

nfs: BACKUP_NODE2
        export /vm_backup_node2
        path /mnt/pve/BACKUP_NODE2
        server ***.***.***.***
        content iso,snippets,backup,rootdir,vztmpl
        maxfiles 6
        nodes node2
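For reference, a quick way to cross-check which storages each node actually activates (a minimal sketch; run it on both nodes):

Code:
# shows every storage this node is expected to activate, with status and free space
pvesm status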
 
please do not bump your post without adding meaningful information

first, you show a storage 'local_sm-hv-01' in the screenshot, but it is not in your storage.cfg output?
second, are you sure the ISO exists on the node where you want to create the VM and on the node you connect to via the browser?
 

I'm very sorry.

1) I replaced the real names of the storages, and that was a bad idea. Below is the correct storage.cfg:

Bash:
dir: local_sm-hv-02
        path /var/lib/vz
        content iso,vztmpl
        nodes sm-hv-02
        shared 0

dir: local_sm-hv-01
        path /var/lib/vz
        content iso,vztmpl
        nodes sm-hv-01
        shared 0

dir: local
        disable
        path /var/lib/vz
        content iso,vztmpl
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        nodes sm-hv-01
        sparse 1

zfspool: datastore_6tb
        pool datastore_6tb
        content images,rootdir
        nodes sm-hv-01
        sparse 0


zfspool: datastore_4tb_sm_hv_02
        pool datastore_4tb_sm_hv_02
        content images,rootdir
        nodes sm-hv-02
        sparse 0

nfs: BACKUP
        export /vm_backup
        path /mnt/pve/BACKUP
        server ***.***.***.***
        content images,rootdir,vztmpl,iso,snippets,backup
        maxfiles 6
        nodes sm-hv-01

nfs: BACKUP_TESTLAB
        export /vm_backup_testlab
        path /mnt/pve/BACKUP_TESTLAB
        server ***.***.***.***
        content iso,snippets,backup,rootdir,vztmpl
        maxfiles 6
        nodes sm-hv-02


2) Yes, I'm sure. I can see the ISO images both in the browser and in the terminal:

root@sm-hv-01:~# cd /var/lib/vz/template/iso
root@sm-hv-01:~# ls -la

Bash:
-rw-r--r-- 1 root root  917698560 Apr  2 17:09 CentOS-7-aarch64-Minimal-1908.iso
-rw-r--r-- 1 root root  713031680 Dec  2 14:21 CentOS-7-x86_64-Minimal-1611.iso
-rw-r--r-- 1 root root 5397889024 Nov 14 14:19 en_windows_server_2012_r2_with_update_x64_dvd_6052708.iso
-rw-r--r-- 1 root root 5034489856 Dec 20 10:18 ru_windows_10_business_editions_version_1903_updated_dec_2019_x64_dvd_88497dba.iso
-rw-r--r-- 1 root root 4073515008 Nov 22 11:34 ru_windows_10_multiple_editions_version_1607_updated_jul_2016_x64_dvd_9058201.iso
-rw-r--r-- 1 root root  865075200 Nov 27 10:32 ubuntu-16.04.3-server-amd64.iso
-rw-r--r-- 1 root root  851443712 Mar 16 10:55 ubuntu-18.04.1-live-server-amd64.iso
-rw-r--r-- 1 root root  371732480 Nov 23 17:31 virtio-win-0.1.171.iso

Screenshot_58.png
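For comparison, the same files can also be listed through the storage layer, so the volume IDs match what the GUI offers (a minimal sketch, using the storage names from the config above; output not pasted here):

Code:
# run on sm-hv-01: lists the ISO volume IDs of the 'local_sm-hv-01' storage
pvesm list local_sm-hv-01 --content iso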
 
Code:
dir: local_sm-hv-02
        path /var/lib/vz
        content iso,vztmpl
        nodes sm-hv-02
        shared 0

dir: local_sm-hv-01
        path /var/lib/vz
        content iso,vztmpl
        nodes sm-hv-01
        shared 0

dir: local
        disable
        path /var/lib/vz
        content iso,vztmpl
        shared 0

why do you do it this way?
just use 'local' on both nodes?
please try to fix this first

also, on which node did you try to set the iso in the first post?

after that, what is the output of the following (on both nodes)?
Code:
pvesm path local_sm-hv-01:iso/ubuntu-16.04.3-server-amd64.iso
or
Code:
pvesm path local:iso/ubuntu-16.04.3-server-amd64.iso
if you use 'local' now

also the pveversion -v output would be interesting
 
dcsapak

Thank you for your attention.

just use 'local' on both nodes?
I enabled local storage for all nodes.
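For reference, roughly what that change looks like from the CLI (a sketch, assuming it is done with pvesm rather than by editing /etc/pve/storage.cfg directly):

Code:
# re-enable the previously disabled 'local' storage;
# without a 'nodes' restriction it is offered on every cluster node
pvesm set local --disable 0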

also, on which node did you try to set the iso in the first post?
I tried to attach the ISO on all nodes.

pvesm path local:iso/ubuntu-16.04.3-server-amd64.iso

sm-hv-01 node
Bash:
root@sm-hv-01:~# pvesm path local:iso/ubuntu-16.04.3-server-amd64.iso
/var/lib/vz/template/iso/ubuntu-16.04.3-server-amd64.iso

sm-hv-02 node
Bash:
root@sm-hv-02:~# pvesm path local:iso/ubuntu-16.04.3-server-amd64.iso
/var/lib/vz/template/iso/ubuntu-16.04.3-server-am

pveversion -v
Bash:
root@sm-hv-01:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.2.0-1
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
mhmm, can you upgrade to the current version (6.1) and try again?
this is basic functionality and should always work... and there is no real clue as to why this fails for you
 

That's not possible right now, and I'm not sure the upgrade will solve the problem.

I uploaded an ISO image to the NFS storage (the NFS share is mounted on the host for VM backups) and tried to mount this ISO in a virtual machine's CD/DVD drive. I got the same error.
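For anyone reproducing this, the CLI equivalent of attaching an ISO to a VM's CD/DVD drive looks roughly like this (a sketch: 100 is a stand-in VMID, and the file name is reused from the earlier listing as an example):

Code:
# attach an ISO from the NFS storage 'BACKUP' as the VM's CD/DVD drive
qm set 100 --ide2 BACKUP:iso/ubuntu-16.04.3-server-amd64.iso,media=cdrom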
 
I added another local storage. When I tried to attach an ISO image from this storage to a virtual machine's CD/DVD drive, I got the same error.

I did only two things before this basic function broke:

1) Set up the PVE cluster;
2) Upgraded the pve-qemu-kvm package from 4.0.3 (as far as I remember) to 4.2.0-1 (a way to verify the exact versions is sketched below).
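A hedged way to confirm exactly which pve-qemu-kvm versions were installed and when, assuming the standard apt logs are still on the node:

Code:
# apt records every upgrade with the old and new version numbers
grep pve-qemu-kvm /var/log/apt/history.log
# currently installed version
dpkg -l pve-qemu-kvm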
 
Could this be the reason for the problem?

1) Proxmox on the first node was installed on 2 disks (ZFS mirror);
2) Proxmox on the second node was installed on a local SSD disk (with the default partitioning).
 
again, it does not make sense to troubleshoot without upgrading to the latest (supported) packages.
it might have been a bug (though I am not aware of one) that has long since been fixed

having different storage configurations on different nodes is not ideal, but it should not interfere with such basic functionality
 
Upgrading to Proxmox 6.2-4 solved this problem. But I have no idea what happened or why this basic function broke.

Bash:
root@sm-hv-02:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-2
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
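For completeness, a minimal sketch of the standard apt-based upgrade flow (assuming the package repositories are already configured correctly):

Code:
# refresh package lists and pull in all pending PVE updates
apt update
apt dist-upgrade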
 
