Hello,
I've got a strange problem after applying the latest minor updates to Proxmox 4.4: I just restarted the server, and now a VM's virtual HDD is missing.
The disk was stored on GlusterFS; the other images are still there, only this HDD is missing.
Here is the error:
kvm: -drive file=gluster://10.10.2.11/Gluster-FS/images/101/vm-101-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on: Could not read qcow2 header: Input/output error
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 ..... failed: exit code 1
In /GlusterFS/images/ there are directories named after the VM IDs (100, 101, 102, 103), but they are empty.
I just don't understand what happened.
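For what it's worth, here is a minimal sketch of the header check I plan to run against the affected image (the real path would be /mnt/pve/Gluster-FS/images/101/vm-101-disk-1.qcow2; I'm using a hypothetical stand-in file below so the check itself is reproducible). A valid qcow2 image starts with the 4-byte magic "QFI" followed by 0xfb, which is exactly what KVM failed to read:

```shell
# Stand-in for the real image at /mnt/pve/Gluster-FS/images/101/vm-101-disk-1.qcow2
# (hypothetical temp file; on the cluster I'd point this at the real path).
printf 'QFI\xfb' > /tmp/probe.qcow2

# A readable qcow2 header begins with the magic bytes "QFI" followed by 0xfb.
magic=$(head -c 3 /tmp/probe.qcow2)
if [ "$magic" = "QFI" ]; then
    echo "qcow2 magic present"
else
    echo "header missing or unreadable"
fi
```

If the real file returns an I/O error even for those first bytes, the problem is below qcow2 (the GlusterFS mount itself), not image corruption.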
PVE version:
Code:
proxmox-ve: 4.4-89 (running kernel: 4.4.67-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.67-1-pve: 4.4.67-89
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-52
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-100
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
The server is clustered and I updated the other servers too, but did not restart them. Since I couldn't find any VM HDDs on the GlusterFS storage, I tried to back up a running VM, but here is the error I got:
Code:
INFO: starting new backup job: vzdump 102 --node enps2 --remove 0 --mode snapshot --storage Gluster-FS --compress lzo
INFO: Starting Backup of VM 102 (qemu)
INFO: status = running
INFO: update VM 102: -lock backup
INFO: VM Name: Main-DC
INFO: include disk 'virtio0' 'Gluster-FS:102/vm-102-disk-1.qcow2' 50G
qemu-img: Could not open '/mnt/pve/Gluster-FS/images/102/vm-102-disk-1.qcow2': Could not read image for determining its format: Input/output error
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/Gluster-FS/dump/vzdump-qemu-102-2017_06_14-13_12_14.vma.lzo'
ERROR: Device 'drive-virtio0' has no medium
INFO: aborting backup job
ERROR: Backup of VM 102 failed - Device 'drive-virtio0' has no medium
INFO: Backup job finished with errors
TASK ERROR: job errors
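Since the backup error says the drive "has no medium" while qemu-img reports an Input/output error, it may help to distinguish "file missing" from "file present but unreadable". A minimal sketch of the read probe I'd run (the real path is /mnt/pve/Gluster-FS/images/102/vm-102-disk-1.qcow2; a hypothetical temp file stands in here):

```shell
# Stand-in path; on the cluster this would be
# /mnt/pve/Gluster-FS/images/102/vm-102-disk-1.qcow2
f=/tmp/probe-102.img
dd if=/dev/zero of="$f" bs=1M count=1 status=none   # create a 1 MiB dummy image

if [ ! -e "$f" ]; then
    echo "file missing"
elif dd if="$f" of=/dev/null bs=1M status=none; then
    echo "file readable"                # a healthy image reads end to end
else
    echo "file present but unreadable"  # matches the Input/output error case
fi
```

On the real mount, "file present but unreadable" would point at the GlusterFS client (note my glusterfs-client is still 3.5.2 from Debian, much older than the server-side packages), rather than at the qcow2 files themselves.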
Edit: I restarted the server to check whether that would resolve the issue, but it just got worse; the VM won't start now.
Code:
VM 101 not running
TASK ERROR: Failed to run vncproxy.