/dev/zvol/rpool/data/... No such file or directory

decibel83

Hi.
I have a Proxmox VE 5.1 host with a root ZFS RAID1 pool and one Windows Server 2008 R2 virtual machine.
While the VM was running I added a new VirtIO hard drive on the local-zfs storage and received an error.
So I stopped the VM and tried to restart it.
Since then I am not able to start any virtual machine at all, because I get the following error:

Code:
kvm: -drive file=/dev/zvol/rpool/data/vm-104-disk-1,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on: Could not open '/dev/zvol/rpool/data/vm-104-disk-1': No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 104 -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'type=1,uuid=af91ad07-bdad-4b40-b4e7-bc1a8a96de62' -name server-w -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/104.vnc,x509,password -no-hpet -cpu 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,enforce' -m 4096 -k it -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:778428d88097' -drive 'file=/var/lib/vz/template/iso/virtio-win-0.1.126.iso,if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -drive 'file=/var/lib/vz/template/iso/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_English_w_SP1_MLF_X17-22580.ISO,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=201' -drive 'file=/dev/zvol/rpool/data/vm-104-disk-1,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -drive 'file=/dev/zvol/rpool/data/vm-104-disk-2,if=none,id=drive-virtio1,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=82:98:6B:86:4B:25,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1

The /dev/zvol path is indeed missing, so the error itself is correct!
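
A quick way to double-check this (a sketch, not from the original report; the udevadm step is only an assumption that the missing links are a udev problem rather than a kernel module one):

Code:
# list the device links KVM is looking for
ls -l /dev/zvol/rpool/data/
# ask udev to re-run its rules and recreate missing /dev/zvol links
udevadm trigger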

Volumes are correctly listed:

Code:
root@node1:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      445G   415G    96K  /rpool
rpool/ROOT                 177G   415G    96K  /rpool/ROOT
rpool/ROOT/pve-1           177G   415G   177G  /
rpool/data                 260G   415G    96K  /rpool/data
rpool/data/vm-101-disk-1   694M   415G   694M  -
rpool/data/vm-102-disk-1  10.6G   415G  10.6G  -
rpool/data/vm-102-disk-2  3.60G   415G  3.60G  -
rpool/data/vm-103-disk-1  4.00G   415G  4.00G  -
rpool/data/vm-104-disk-1  32.1G   415G  32.1G  -
rpool/data/vm-201-disk-1   673M   415G   673M  -
rpool/data/vm-202-disk-1  28.6G   415G  28.6G  -
rpool/data/vm-999-disk-1   179G   415G   179G  -
rpool/swap                8.50G   420G  3.54G  -

These are my versions:

Code:
root@node1:~# pveversion -v
proxmox-ve: 5.1-26 (running kernel: 4.10.17-2-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-2-pve: 4.10.17-20
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9

Could you help me please?

Thanks!
 
You are running an outdated kernel with the ZFS 0.6.5 module, but a newer ZFS user space (0.7.3). Reboot into a kernel that ships the 0.7.3 modules.
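
A minimal way to verify the mismatch before rebooting (a sketch; it assumes the zfs module is loaded so its version is exposed under /sys):

Code:
# version of the ZFS kernel module that is currently loaded
cat /sys/module/zfs/version
# version of the ZFS user space tools
dpkg-query -W zfsutils-linux
# running kernel vs. installed pve-kernel packages
uname -r
dpkg -l | grep pve-kernel

If the module reports 0.6.5.x while zfsutils-linux is 0.7.3, booting the newer pve-kernel (4.13.x in the first post) brings both sides back in sync.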
 
Code:
proxmox-ve: not correctly installed (running kernel: 4.4.35-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.4.35-1-pve: 4.4.35-76
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9


This happened after the box took a hard power drop early this morning. It was running fine beforehand.
 
Is anyone going to help explain why this Proxmox is so unstable that a simple hard reboot can kill it?
 
You are running a mix of outdated packages with known bugs. Follow the install/upgrade howto and try again.
 
You think I didn't do all that? Which packages are out of date, so that I can update them manually?
 
Your guide is useless for upgrading this, because nothing shows up as needing an upgrade according to apt dist-upgrade.
 
Code:
root@pve2:~# apt upgrade proxmox-ve
Reading package lists... Done
Building dependency tree
Reading state information... Done
proxmox-ve is already the newest version (5.1-26).
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 
We fixed it ourselves. Thank God for having someone on staff who knows Linux, because your help was useless.
 
@Ascent Could you share what your fix was in the end? I agree with you; the support provided in this thread has been less than useful.

@tom You really need to pay attention to detail here.

I just upgraded from Proxmox VE 5.0 to 5.1 (following the instructions to the letter), have the expected versions, and yet my ZFS zvols are broken and /dev/zvol is missing.
 
So I guess I, too, solved this.

TL;DR: Remove conflicting zfs deps; re-install zfs.

Basically a bunch of manual dpkg -r removals of conflicting packages until you can apt-get install zfsutils again.
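
Roughly, the procedure looks like this (a sketch only; the package names in the dpkg -r step are examples of typical conflicts, not the exact ones removed here, so adjust them to whatever apt actually complains about):

Code:
# see which ZFS-related packages are installed
dpkg -l | grep -E 'zfs|spl'
# remove the conflicting packages apt complains about (example names)
dpkg -r zfs-dkms spl-dkms
# reinstall the Proxmox-shipped ZFS user space
apt-get update
apt-get install zfsutils-linux zfs-initramfs
# recreate the /dev/zvol device links
udevadm trigger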

@tom I'm really not sure what your QA for Proxmox live upgrades is over there, but this is 2/2 where Proxmox VE upgrades have miserably failed for me. If my infra were any larger, with more customers/money involved, this would be a SEV.
 
Please test live upgrades before pushing a new version of Proxmox VE! Don't break us; we don't have time to be digging around in the bowels of Debian :/ -- Just make it work flawlessly!
 
Please do not write crap; we invest hundreds of hours in testing every point release. Almost all problems are caused by faulty hardware or by admins who do not read the upgrade docs.
If you have issues, we are here to help.
 
@tom You are right. This was my "fuck up". It appears (thank you to z_g on Freenode) that I once ran an (incorrect) `apt-get upgrade`, which apparently is not the way you upgrade Proxmox VE: because Proxmox VE is effectively a rolling release on top of a stable Debian major version, the usual Debian semantics don't apply here (or will bite you in the ass).

@tom The reason for my frustrated-sounding post/reply was your unwillingness to help @Ascent beyond pointing at a bunch of docs that don't really help.

If you are going to do anything, improve the documentation to at least point out very clearly:

DO NOT run `apt-get upgrade` (ever!).
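
The documented way (a short sketch, consistent with the Proxmox upgrade notes) is to always do a full distribution upgrade so that new dependencies and kernel packages get pulled in:

Code:
apt-get update
apt-get dist-upgrade    # or: apt full-upgrade
# plain 'apt-get upgrade' holds back packages with changed dependencies and can leave the system half-upgraded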
 
No @tom, you are wrong.

People do read the docs. But if the docs are disorganized or don't contain the "right" information, you tend to gloss over them.

As a person with very poor sight (I'm legally blind), docs are great if they are:
  • accurate
  • terse
  • discoverable
Cheers,
James
 
