Proxmox: VMs do not start

DaviNunes

New Member
Aug 13, 2019
I have a problem: after a power outage, none of my virtual machines start anymore. They all give this error:


kvm: -drive file=/dev/vmdata/vm-125-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on: Could not open '/dev/vmdata/vm-125-disk-1': No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 125 -chardev 'socket,id=qmp,path=/var/run/qemu-server/125.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/125.pid -daemonize -smbios 'type=1,uuid=a5911d17-3db7-4ebd-8335-dcf7f777fbd2' -name IXC -smp '4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/125.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k pt-br -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:8fb04d87207e' -drive 'file=/var/lib/vz/template/iso/debianixc_ng.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/vmdata/vm-125-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap125i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=6E:04:FE:54:FF:50,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
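The failing piece is the drive argument '/dev/vmdata/vm-125-disk-1', so a quick first check (a minimal sketch; the VG and LV names are taken from the error above) is whether that device node exists and whether the LV is active:

# does the device node referenced by the kvm command exist at all?
ls -l /dev/vmdata/vm-125-disk-1

# is the LV known to LVM and active? (the 5th Attr character is 'a' when active)
lvs vmdata/vm-125-disk-1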

root@pve1:~# pveversion -v
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-2-pve: 4.13.13-32
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
 
kvm: -drive file=/dev/vmdata/vm-125-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on: Could not open '/dev/vmdata/vm-125-disk-1': No such file or directory
Seems your storage is not active/available. I guess it's an LVM or LVM-thin storage?
* In any case, check the journal to see why '/dev/vmdata' is not present
* If it's LVM, check the output of `pvs`, `vgs`, `lvs -a` (see the sketch after this list)
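
A concrete way to run those checks (a sketch; 'vmdata' is the VG name from the error above):

# why is /dev/vmdata missing? look for LVM / device-mapper errors from this boot
journalctl -b | grep -iE 'lvm|device-mapper|vmdata'

# state of the LVM stack
pvs
vgs
lvs -a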

On an unrelated note: PVE 5.1 is quite outdated - please consider upgrading, as there have been quite a few fixes (some security-relevant) since 5.1 was fresh.
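
For reference, a minor-version update within PVE 5.x is done with plain apt (a sketch; it assumes the PVE package repositories are configured correctly):

apt update
apt dist-upgrade   # Proxmox requires dist-upgrade; a plain 'upgrade' can leave packages in an inconsistent state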

Hope this helps!
 
root@pve1:~# lvs -a
  LV                                      VG     Attr       LSize   Pool  Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  root                                    pve    -wi-ao---- 923.75g
  swap                                    pve    -wi-ao----   7.00g
  data2                                   vmdata twi---tz--   5.46t
  [data2_tdata]                           vmdata Twi-------   5.46t
  [data2_tmeta]                           vmdata ewi-------  88.00m
  [lvol0_pmspare]                         vmdata ewi-------  88.00m
  snap_vm-100-disk-1_snapshot20190403    vmdata Vri---tz-k 320.00g data2 vm-100-disk-1
  snap_vm-107-disk-1_certup               vmdata Vri---tz-k  32.00g data2 vm-107-disk-1
  snap_vm-107-disk-1_ok                   vmdata Vri---tz-k  32.00g data2
  snap_vm-107-disk-1_part1                vmdata Vri---tz-k  32.00g data2
  snap_vm-107-disk-1_part2                vmdata Vri---tz-k  32.00g data2
  snap_vm-107-disk-1_part3                vmdata Vri---tz-k  32.00g data2
  snap_vm-109-disk-1_inicio               vmdata Vri---tz-k  32.00g data2 vm-109-disk-1
  snap_vm-111-disk-1_limpo                vmdata Vri---tz-k  32.00g data2
  snap_vm-111-disk-1_ok                   vmdata Vri---tz-k  32.00g data2 vm-111-disk-1
  snap_vm-116-disk-1_a16052019            vmdata Vri---tz-k  80.00g data2 vm-116-disk-1
  snap_vm-118-disk-1_antes_da_atualizacao vmdata Vri---tz-k  32.00g data2 vm-118-disk-1
  snap_vm-124-disk-1_subirparti           vmdata Vri---tz-k  65.00g data2 vm-124-disk-1
  snap_vm-125-disk-1_antesDoIP            vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  snap_vm-125-disk-1_antescoord           vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  snap_vm-125-disk-1_atu0902              vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  snap_vm-125-disk-1_backup               vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  snap_vm-125-disk-1_coordenada           vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  snap_vm-125-disk-1_importacao           vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  snap_vm-125-disk-1_index                vmdata Vri---tz-k  80.00g data2 vm-125-disk-1
  vm-100-disk-1                           vmdata Vwi---tz-- 320.00g data2
  vm-100-state-snapshot20190403           vmdata Vwi---tz--   8.49g data2
  vm-101-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-102-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-103-disk-1                           vmdata Vwi---tz-- 500.00g data2
  vm-104-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-105-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-106-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-107-disk-1                           vmdata Vwi---tz--  32.00g data2 snap_vm-107-disk-1_ok
  vm-107-state-certup                     vmdata Vwi---tz--   8.49g data2
  vm-107-state-ok                         vmdata Vwi---tz--   8.49g data2
  vm-107-state-part1                      vmdata Vwi---tz--   8.49g data2
  vm-107-state-part2                      vmdata Vwi---tz--   8.49g data2
  vm-107-state-part3                      vmdata Vwi---tz--   8.49g data2
  vm-108-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-109-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-109-state-inicio                     vmdata Vwi---tz--   8.49g data2
  vm-110-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-111-disk-1                           vmdata Vwi---tz--  32.00g data2 snap_vm-111-disk-1_limpo
  vm-111-state-limpo                      vmdata Vwi---tz--   8.49g data2
  vm-111-state-ok                         vmdata Vwi---tz--   8.49g data2
  vm-112-disk-1                           vmdata Vwi---tz-- 500.00g data2
  vm-113-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-114-disk-1                           vmdata Vwi---tz-- 800.00g data2
  vm-115-disk-1                           vmdata Vwi---tz--  50.00g data2
  vm-116-disk-1                           vmdata Vwi---tz--  80.00g data2
  vm-116-state-a16052019                  vmdata Vwi---tz--  28.49g data2
  vm-117-disk-1                           vmdata Vwi---tz-- 750.00g data2
  vm-118-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-118-state-antes_da_atualizacao       vmdata Vwi---tz--   4.49g data2
  vm-120-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-121-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-123-disk-1                           vmdata Vwi---tz--  32.00g data2
  vm-124-disk-1                           vmdata Vwi---tz--  65.00g data2
  vm-124-state-subirparti                 vmdata Vwi---tz--  16.49g data2
  vm-125-disk-1                           vmdata Vwi---tz--  80.00g data2
  vm-125-state-antesDoIP                  vmdata Vwi---tz--   8.49g data2
  vm-125-state-antescoord                 vmdata Vwi---tz--   8.49g data2
  vm-125-state-atu0902                    vmdata Vwi---tz--  16.49g data2
  vm-125-state-backup                     vmdata Vwi---tz--   8.49g data2
  vm-125-state-coordenada                 vmdata Vwi---tz--   8.49g data2
  vm-125-state-importacao                 vmdata Vwi---tz--   8.49g data2
  vm-125-state-index                      vmdata Vwi---tz--  16.49g data2
  vm-999-disk-1                           vmdata Vwi---tz--  32.00g data2

root@pve1:~# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  pve      1   2   0 wz--n- 930.75g    0
  vmdata   1  64   0 wz--n-   5.46t    0

root@pve1:~# pvs
  PV        VG     Fmt  Attr PSize   PFree
  /dev/sda3 pve    lvm2 a--  930.75g    0
  /dev/sdb1 vmdata lvm2 a--    5.46t    0

About the update: right, I'm aware, but I keep this server out of band.
 
Hmm - seems the thin pool is not active...
Try:
`lvchange -ay vmdata`
and check the output of `dmesg` and `journalctl -b`.
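
If the activation fails, the kernel log usually names the failing device-mapper target (a sketch of the filtering, not from the thread):

# thin-pool errors typically show up as 'device-mapper: thin' messages
dmesg | grep -i device-mapper
journalctl -b | grep -iE 'thin|pool|vmdata'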
 
root@pve1:~# lvchange -a y vmdata
Check of pool vmdata/data2 failed (status:1). Manual repair required!
root@pve1:~# lvconvert --repair vmdata
Please specify a logical volume path.
Run `lvconvert --help' for more information.
root@pve1:~# lvconvert --repair vmdata/data2
Using default stripesize 64.00 KiB.
Transaction id 111 from pool "vmdata/data2" does not match repaired transaction id 110 from /dev/mapper/vmdata-lvol0_pmspare.
WARNING: recovery of pools without pool metadata spare LV is not automated.
WARNING: If everything works, remove vmdata/data2_meta0 volume.
WARNING: Use pvmove command to move vmdata/data2_tmeta on the best fitting PV.
root@pve1:~# lvchange -a y vmdata
root@pve1:~# reboot


After quite a scramble, it was solved as above.
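
Following the warnings printed by `lvconvert --repair`, a reasonable follow-up after the reboot (a sketch, not part of the thread) would be:

# confirm the pool and the thin volumes came back active
lvs -a vmdata

# once the VMs are verified to work, drop the old metadata LV that
# --repair left behind, as the first WARNING suggests
lvremove vmdata/data2_meta0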
 
