Error when trying to open hardware tab on nodes

afrugone

Renowned Member
Nov 26, 2008
When I try to create a KVM guest, or open the hardware tab of an existing KVM guest on any node server, I get this error:

[5708]ERR: 24: Error in Perl code: 500 read timeout

I've searched the forum, but I didn't find anything useful for this error.

The configuration is a 3-node cluster: PVE1 (master) and nodes PVE2 and PVE0. I guess it is something related to storage, as shown by pvesm list -a (see below).

Any help is welcome,

Thanks
Alfredo

All running the latest PVE version as follows:

PVE01:~# pveversion -v    (PVE02 output is identical)
pve-manager: 1.7-11 (pve-manager/1.7/5470)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.7-30
pve-kernel-2.6.32-4-pve: 2.6.32-30
qemu-server: 1.1-28
pve-firmware: 1.0-10
libpve-storage-perl: 1.0-16
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-10
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.13.0-3
ksm-control-daemon: 1.0-4

PVE01:~# pvesm list -a
local:123/vm-123-disk-1.raw 123 raw 52428800
KVM:0.0.0.scsi-330000000eeafaaf9 0 raw 52428800
VM_SAN_1:0.0.0.dm-uuid-LVM-L....... 0 raw 33554432
VM_SAN_LVM_1:vm-101-disk-1 101 raw 33554432

PVE02:~# pvesm list -a
local:111/vm-111-disk-1.raw 111 raw 67108864
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, giving up
command '/bin/mount -t nfs 172.27.110.128:/mnt/Data1 /mnt/pve/NAS2' failed with exit code 32

pve0:~# pvesm list -a
local:106/vm-106-disk-1.raw 106 raw 33554432
local:108/vm-108-disk-1.raw 108 raw 33554432
local:121/vm-121-disk-1.raw 121 raw 33554432
local:122/vm-122-disk-1.raw 122 raw 33554432
local:123/vm-123-disk-1.raw 123 raw 33554432
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, retrying
mount.nfs: mount to NFS server '172.27.110.128' failed: timed out, giving up
command '/bin/mount -t nfs 172.27.110.128:/mnt/Data1 /mnt/pve/NAS2' failed with exit code 32
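The "500 read timeout" in the web interface is consistent with the hanging NFS mounts above: listing storage activates every configured store, and each failed mount attempt blocks until it times out. A quick way to check from a node whether the NFS server is reachable at all (a sketch using standard iputils/nfs-common tools; the IP and export path are taken from the error messages above, and /mnt/test is a hypothetical mount point):

```shell
# Is the NAS reachable on the network at all?
ping -c 3 172.27.110.128

# Are the portmapper and NFS services registered on it?
rpcinfo -p 172.27.110.128

# Which exports does it offer, and to which clients?
showmount -e 172.27.110.128

# Try the mount by hand with a short timeout so it fails fast
# (timeo is in tenths of a second, retry is in minutes)
mkdir -p /mnt/test
mount -t nfs -o timeo=50,retry=0 172.27.110.128:/mnt/Data1 /mnt/test
```

If ping works but rpcinfo or showmount hang, a firewall between the nodes and the NAS is a likely suspect.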
 
Ok, I understand that, but it works without problems on the master server, and /etc/pve/storage.conf is the same on all servers. All backups go to the NFS share, so I'll have to reconfigure all my backups. If there is no other solution, I'll do it.

Thanks for your Help.
 
Dietmar,

Thanks for your help, but I don't think it is a permission problem; the NFS server is open. Anyway, I'll remove the NFS storage and configure it again; if the problem persists I'll let you know.

Thanks and Regards
Alfredo
 
I deleted the NFS storage from the config, and now I can access the hardware tab; everything seems to work without problems. But when I execute pvesm list -a on a node I get the output below. Unfortunately I cannot remove this iSCSI share, because we have some KVM guests running on it, and it is running on the same node, PVE02:


PVE02:~# pvesm list -a
local:111/vm-111-disk-1.raw 111 raw 67108864
VM_SAN_1:0.0.0.dm-uuid-LVM-LEYfZjo0bbSQyAfplc70CJX4RpUfpZUoXEcEgtxQ20PfjF74dStgHmENA9ANjK1N 0 raw 33554432
Found duplicate PV 0i1josKHLUxZjrgL8s5W5PvFJxA1NV7d: using /dev/sdc not /dev/sdb
Found duplicate PV 0i1josKHLUxZjrgL8s5W5PvFJxA1NV7d: using /dev/sdc not /dev/sdb
Found duplicate PV 0i1josKHLUxZjrgL8s5W5PvFJxA1NV7d: using /dev/sdc not /dev/sdb
VM_SAN_LVM_1:vm-101-disk-1 101 raw 33554432
 
Hi,
these messages show that your iSCSI disk is visible over two paths, i.e. multipathing: sdb and sdc are the same disk, but connected over different networks?
You can enable multipath on Linux - search the forum; some time ago someone posted a successful use case (if I remember correctly).

Udo
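A minimal sketch of what enabling multipath looks like on Debian-based PVE, assuming the duplicate-PV warnings really do come from the same LUN showing up as both /dev/sdb and /dev/sdc (package and tool names are the standard multipath-tools ones; the scsi_id path varies between Debian releases, so verify it on your system first):

```shell
# Install the device-mapper multipath tools (Debian/PVE)
apt-get install multipath-tools

# Confirm sdb and sdc are really the same LUN: the SCSI IDs
# should be identical (path to scsi_id differs by release;
# on older systems it may be /sbin/scsi_id -g -u -s /block/sdb)
/lib/udev/scsi_id --whitelisted --device=/dev/sdb
/lib/udev/scsi_id --whitelisted --device=/dev/sdc

# Build the multipath maps and list the result; LVM should then
# be pointed at the /dev/mapper/... device instead of sdb/sdc
multipath -v2
multipath -ll
```

To silence the "Found duplicate PV" warnings for good, you would additionally set a filter in /etc/lvm/lvm.conf so LVM scans only the multipath device, not the underlying sd* paths.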
 

Hi guys,

I have the same issue. My cluster consists of 1 master and 2 nodes. I've created a volume group on the master, and when I try to access the hardware tab on the other nodes I get this error message:

[1733]ERR: 24: Error in Perl code: command '/sbin/vgchange -aly drbdvg2' failed with exit code 5

hampton:/etc/pve# pvesm list -a
local:101/vm-101-disk-1.raw 101 raw 10485760
Volume group "drbdvg2" not found
command '/sbin/vgchange -aly drbdvg2' failed with exit code 5

Regards
 
Hi,
in a (1.x) cluster, all shared storage must be reachable on all nodes. In your case the DRBD volumes are only accessible on two nodes.
As a hack, you can simply create a volume group with the same name on the other node, e.g. on a USB stick.

Udo
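Udo's workaround, sketched with standard LVM commands. Here /dev/sdX is a placeholder for the USB stick on the node that lacks the DRBD volume group; double-check the device name before running pvcreate, since it destroys any existing data on that device:

```shell
# On the node that cannot see the DRBD volume group:
# initialise the USB stick as an LVM physical volume
pvcreate /dev/sdX

# create a volume group with the name the cluster expects
vgcreate drbdvg2 /dev/sdX

# verify that activation now succeeds - this is the command
# the hardware tab was failing on
vgchange -aly drbdvg2
```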
 
