Problems with VM disks.

krogac

Member
Jul 18, 2018
Hello,

For about two weeks now I have been having problems with Proxmox. When I try to take a snapshot, I sometimes get this error:

Code:
Formatting '/msa/VM/images/129/vm-129-state-OK.raw', fmt=raw size=21546139648
snapshot create failed: starting cleanup
TASK ERROR: VM 129 qmp command 'snapshot-drive' failed - Device 'drive-scsi0' has no medium

and my VM is powered off...

This has happened three times now. I have errors in my qcow2 disks, and my VMs are broken...

Code:
TASK ERROR: start failed: command '/usr/bin/kvm -id 129 -chardev 'socket,id=qmp,path=/var/run/qemu-server/129.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/129.pid -daemonize -smbios 'type=1,uuid=da2e8c8b-95ac-4de7-9e04-c88cc9a7a422' -name TESTSRV -smp '3,sockets=1,cores=3,maxcpus=3' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/129.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 10024 -k pl -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:8c4363a6a44b' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/msa/VM/images/129/vm-129-disk-1.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap129i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=D6:13:72:B6:D5:94,netdev=net0,bus=pci.0,addr=0x12,id=net0'' failed: exit code 1

I had to recover my VMs from backups...
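For anyone hitting the same thing: before restoring, it may be worth checking whether the qcow2 image is actually damaged. A minimal sketch, assuming the disk path from the error above (the VM must be powered off first):

Code:
# check the qcow2 image for corruption (run only while the VM is stopped)
qemu-img check /msa/VM/images/129/vm-129-disk-1.qcow2

# optionally try to repair leaked clusters and errors; make a copy of the file first
qemu-img check -r all /msa/VM/images/129/vm-129-disk-1.qcow2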

I am using two Proxmox servers, both on version 5.1-41,

kernel 4.13.13-2-pve
 
Today I tried to remove a snapshot and got this error:

Code:
command '/usr/bin/qemu-img snapshot -d UPDATE /msa/VM/images/123/vm-123-disk-1.qcow2' failed: exit code 1
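When deleting a snapshot fails like this, listing the internal snapshots stored in the image can show whether "UPDATE" still exists. A sketch against the disk from the error above:

Code:
# list the internal qcow2 snapshots; "UPDATE" should appear here if it still exists
qemu-img snapshot -l /msa/VM/images/123/vm-123-disk-1.qcow2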
 
Hi,

Please update to the current version.
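On a Proxmox VE node the upgrade itself is the usual apt procedure. A minimal sketch; note that Proxmox recommends dist-upgrade rather than a plain upgrade so that new dependencies get pulled in:

Code:
# refresh the package lists and install all pending updates on this node
apt update
apt dist-upgrade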
 
They always come together, so both.
 
Proxmox VE has a multi-master design, so there is no "primary server"; you can start wherever you like.
Make sure that, if HA is used, it is disabled throughout the update process.
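Per guest, that could look like this. A sketch; ha-manager is the standard PVE HA CLI, and vm:129 simply reuses the VMID from this thread:

Code:
# show which guests are currently under HA management
ha-manager status

# take a guest out of HA for the duration of the upgrade
ha-manager set vm:129 --state disabled

# hand it back to HA afterwards
ha-manager set vm:129 --state started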
 
Having a similar situation on a three-node cluster.

VM disks are on Gluster on top of ZFS; the Gluster volume "SHARED" is replicated across the three nodes.
Backups go to a second Gluster volume "BACKUP" on top of another ZFS pool, also replicated across the three nodes.

Two corosync rings (two meshed networks across the three nodes)
Separate 10G migration network

Some VMs had issues after last week's updates:

- some wouldn't boot, failing with the same error (drive-scsi0 has no medium), so I restored them from backup.
- even when they boot and work, some have failed to write to their disks while running; a reboot seemed to fix it. Gluster info/logs didn't show any abnormal status or errors. This happened with both raw and qcow2 disks.
- some cannot be backed up in snapshot mode, with the same error (drive-scsi0 has no medium)

In all cases, I could ls etc. the "missing" disk file fine via /mnt/pve.
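For reference, the checks looked roughly like this. A sketch: /mnt/pve/SHARED assumes the PVE storage is named after the Gluster volume, 129 stands in for one of the affected VMIDs, and the stop-mode backup is only a possible workaround for the snapshot-mode failures above:

Code:
# the "missing" disk is visible and readable through the FUSE mount
ls -l /mnt/pve/SHARED/images/129/
qemu-img info -U /mnt/pve/SHARED/images/129/vm-129-disk-1.qcow2   # -U allows reading while the image is in use

# possible workaround while snapshot-mode backups fail: back up in stop mode
vzdump 129 --mode stop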

Code:
root@pve1:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-8
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2

Gluster:
Code:
root@pve1:~# dpkg -l |grep gluster
ii  glusterfs-client                     5.5-3                       amd64        clustered file-system (client package)
ii  glusterfs-common                     5.5-3                       amd64        GlusterFS common libraries and translator modules
ii  glusterfs-server                     5.5-3                       amd64        clustered file-system (server package)
ii  libglusterfs-dev                     5.5-3                       amd64        Development files for GlusterFS libraries
ii  libglusterfs0:amd64                  5.5-3                       amd64        GlusterFS shared library

Volumes:
SHARED (hosts VM disks)
Code:
root@pve1:~# gluster volume status SHARED detail
Status of volume: SHARED
------------------------------------------------------------------------------
Brick                : Brick pve1:/RunVMs
TCP Port             : N/A              
RDMA Port            : N/A              
Online               : N                
Pid                  : N/A              
File System          : zfs              
Device               : RunVMs           
Mount Options        : rw,xattr,posixacl
Inode Size           : N/A              
Disk Space Free      : 6.5TB            
Total Disk Space     : 7.0TB            
Inode Count          : 13899789199      
Free Inodes          : 13899788714      
------------------------------------------------------------------------------
Brick                : Brick pve2:/RunVMs
TCP Port             : 49153            
RDMA Port            : 0                
Online               : Y                
Pid                  : 4007             
File System          : zfs              
Device               : RunVMs           
Mount Options        : rw,xattr,posixacl
Inode Size           : N/A              
Disk Space Free      : 6.5TB            
Total Disk Space     : 7.0TB            
Inode Count          : 13854266827      
Free Inodes          : 13854266288      
------------------------------------------------------------------------------
Brick                : Brick pve3:/RunVMs
TCP Port             : 49153            
RDMA Port            : 0                
Online               : Y                
Pid                  : 3741             
File System          : zfs              
Device               : RunVMs           
Mount Options        : rw,xattr,posixacl
Inode Size           : N/A              
Disk Space Free      : 6.5TB            
Total Disk Space     : 7.0TB            
Inode Count          : 13854275434      
Free Inodes          : 13854274896


Volume: BACKUP (backup snapshots target)
Code:
root@pve1:~# gluster volume status BACKUP detail
Status of volume: BACKUP
------------------------------------------------------------------------------
Brick                : Brick pve1:/BackVMs
TCP Port             : N/A              
RDMA Port            : N/A              
Online               : N                
Pid                  : N/A              
File System          : zfs              
Device               : BackVMs          
Mount Options        : rw,xattr,posixacl
Inode Size           : N/A              
Disk Space Free      : 4.6TB            
Total Disk Space     : 7.0TB            
Inode Count          : 9846478430       
Free Inodes          : 9846475804       
------------------------------------------------------------------------------
Brick                : Brick pve2:/BackVMs
TCP Port             : 49152            
RDMA Port            : 0                
Online               : Y                
Pid                  : 3986             
File System          : zfs              
Device               : BackVMs          
Mount Options        : rw,xattr,posixacl
Inode Size           : N/A              
Disk Space Free      : 4.6TB            
Total Disk Space     : 7.0TB            
Inode Count          : 9901538081       
Free Inodes          : 9901534989       
------------------------------------------------------------------------------
Brick                : Brick pve3:/BackVMs
TCP Port             : 49152            
RDMA Port            : 0                
Online               : Y                
Pid                  : 3690             
File System          : zfs              
Device               : BackVMs          
Mount Options        : rw,xattr,posixacl
Inode Size           : N/A              
Disk Space Free      : 4.6TB            
Total Disk Space     : 7.0TB            
Inode Count          : 9901540904       
Free Inodes          : 9901537812
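One thing that stands out in the output above: on both volumes the pve1 brick shows Online: N, which matches the symptoms. If the brick processes have simply died, standard Gluster practice is to restart them without touching the running volume (a suggestion, not a fix for the underlying cause):

Code:
# "force" starts only bricks that are not running; healthy bricks are left alone
gluster volume start SHARED force
gluster volume start BACKUP force

# verify that all bricks report Online: Y again
gluster volume status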
 