My VM doesn't boot anymore. The VM disk is on a ZFS pool. SMART passed. Any ideas?

Code:
vm-101-disk-0 -> ../../zd0
vm-101-disk-1 -> ../../zd16
vm-101-disk-1-part1 -> ../../zd16p1
vm-101-disk-1-part2 -> ../../zd16p2

I am now even more confused: zd16 is clearly a zvol that is then split into 2 partitions inside the VM.

What is zd0 (disk-0), and how did it come into existence?

So far still no errors.
The errors that showed up before the fix were on both zd0 and zd16.

And why was it being written to?

Does that mean I need to run another command for drive 0 (EFI)?

No.

Also, do you know if the available space on Nextcloud would be reduced after the fix?

REFRESERV does not take away any space from you; it is how ZFS does thick provisioning (a parody of it, in my opinion, anyhow ;)). Meaning, I do not use it; others might find it useful. I believe the whole point of ZFS is to thin provision everything and use quotas instead if need be. Why did you use ZFS anyway, just because you have 6 drives and wanted fault tolerance of 2?
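If you want to inspect or drop the reservation yourself, a minimal sketch (the dataset names are just the ones from this thread; check the get output before setting anything):

Code:
# show the current reservation next to the volume size
zfs get refreservation,volsize Nextcloud/vm-101-disk-0 Nextcloud/vm-101-disk-1
# make the zvol effectively thin provisioned by dropping the reservation
zfs set refreservation=none Nextcloud/vm-101-disk-1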
 
That's the EFI drive.

But look above at:

Code:
vm-101-disk-0 -> ../../zd0
vm-101-disk-1 -> ../../zd16
vm-101-disk-1-part1 -> ../../zd16p1
vm-101-disk-1-part2 -> ../../zd16p2

You have 2 zvols (the second one with 2 partitions), and inside your VM you showed only /dev/sda split into 2 partitions, which would correspond to disk-1-part1 and disk-1-part2. Where is disk-0, e.g. /dev/sdb?
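To double-check what the guest actually sees, something like this inside the VM would do (standard lsblk, nothing Proxmox-specific):

Code:
# list all block devices the guest kernel knows about
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# a second attached disk would show up here as e.g. /dev/sdb or /dev/vdb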
 
Code:
P: /devices/virtual/block/zd0
M: zd0
R: 0
U: block
T: disk
D: b 230:0
N: zd0
L: 0
S: disk/by-diskseq/28
S: zvol/Nextcloud/vm-101-disk-0
Q: 28
E: DEVPATH=/devices/virtual/block/zd0
E: DEVNAME=/dev/zd0
E: DEVTYPE=disk
E: DISKSEQ=28
E: MAJOR=230
E: MINOR=0
E: SUBSYSTEM=block
E: USEC_INITIALIZED=106391789
E: DEVLINKS=/dev/disk/by-diskseq/28 /dev/zvol/Nextcloud/vm-101-disk-0
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:

P: /devices/virtual/block/zd16
M: zd16
R: 16
U: block
T: disk
D: b 230:16
N: zd16
L: 0
S: disk/by-diskseq/29
S: zvol/Nextcloud/vm-101-disk-1
Q: 29
E: DEVPATH=/devices/virtual/block/zd16
E: DEVNAME=/dev/zd16
E: DEVTYPE=disk
E: DISKSEQ=29
E: MAJOR=230
E: MINOR=16
E: SUBSYSTEM=block
E: USEC_INITIALIZED=106686460
E: ID_PART_TABLE_UUID=575bac0f-86cb-430a-86ce-942a1f8bf0b5
E: ID_PART_TABLE_TYPE=gpt
E: DEVLINKS=/dev/disk/by-diskseq/29 /dev/zvol/Nextcloud/vm-101-disk-1
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:

P: /devices/virtual/block/zd16/zd16p1
M: zd16p1
R: 1
U: block
T: partition
D: b 230:17
N: zd16p1
L: 0
S: zvol/Nextcloud/vm-101-disk-1-part1
S: disk/by-partuuid/c2b3c131-4515-420d-ae6a-04d08edc96ca
S: disk/by-uuid/A521-A12C
Q: 29
E: DEVPATH=/devices/virtual/block/zd16/zd16p1
E: DEVNAME=/dev/zd16p1
E: DEVTYPE=partition
E: DISKSEQ=29
E: PARTN=1
E: MAJOR=230
E: MINOR=17
E: SUBSYSTEM=block
E: USEC_INITIALIZED=41406196812
E: ID_PART_TABLE_UUID=575bac0f-86cb-430a-86ce-942a1f8bf0b5
E: ID_PART_TABLE_TYPE=gpt
E: ID_FS_UUID=A521-A12C
E: ID_FS_UUID_ENC=A521-A12C
E: ID_FS_VERSION=FAT32
E: ID_FS_TYPE=vfat
E: ID_FS_USAGE=filesystem
E: ID_PART_ENTRY_SCHEME=gpt
E: ID_PART_ENTRY_UUID=c2b3c131-4515-420d-ae6a-04d08edc96ca
E: ID_PART_ENTRY_TYPE=c12a7328-f81f-11d2-ba4b-00a0c93ec93b
E: ID_PART_ENTRY_NUMBER=1
E: ID_PART_ENTRY_OFFSET=2048
E: ID_PART_ENTRY_SIZE=2201600
E: ID_PART_ENTRY_DISK=230:16
E: DEVLINKS=/dev/zvol/Nextcloud/vm-101-disk-1-part1 /dev/disk/by-partuuid/c2b3c131-4515-420d-ae6a-04d08edc96ca /dev/disk/by-uuid/A521-A12C
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:

P: /devices/virtual/block/zd16/zd16p2
M: zd16p2
R: 2
U: block
T: partition
D: b 230:18
N: zd16p2
L: 0
S: disk/by-uuid/ae39afec-04d5-4293-a48c-2c008d0a9497
S: zvol/Nextcloud/vm-101-disk-1-part2
S: disk/by-partuuid/4bfc3044-0c69-4d5a-82dc-37cd1d680213
Q: 29
E: DEVPATH=/devices/virtual/block/zd16/zd16p2
E: DEVNAME=/dev/zd16p2
E: DEVTYPE=partition
E: DISKSEQ=29
E: PARTN=2
E: MAJOR=230
E: MINOR=18
E: SUBSYSTEM=block
E: USEC_INITIALIZED=41406199623
E: ID_PART_TABLE_UUID=575bac0f-86cb-430a-86ce-942a1f8bf0b5
E: ID_PART_TABLE_TYPE=gpt
E: ID_FS_UUID=ae39afec-04d5-4293-a48c-2c008d0a9497
E: ID_FS_UUID_ENC=ae39afec-04d5-4293-a48c-2c008d0a9497
E: ID_FS_VERSION=1.0
E: ID_FS_TYPE=ext4
E: ID_FS_USAGE=filesystem
E: ID_PART_ENTRY_SCHEME=gpt
E: ID_PART_ENTRY_UUID=4bfc3044-0c69-4d5a-82dc-37cd1d680213
E: ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
E: ID_PART_ENTRY_NUMBER=2
E: ID_PART_ENTRY_OFFSET=2203648
E: ID_PART_ENTRY_SIZE=7437758464
E: ID_PART_ENTRY_DISK=230:16
E: DEVLINKS=/dev/disk/by-uuid/ae39afec-04d5-4293-a48c-2c008d0a9497 /dev/zvol/Nextcloud/vm-101-disk-1-part2 /dev/disk/by-partuuid/4bfc3044-0c69-4d5a-82dc-37cd1d680213
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:
 
Ok, from what you posted, I will just summarize this (before I confuse myself perhaps even more :D):

You have a 1MB ZVOL (disk-0) which appears not to be used at all by the VM. Then there's the 3T+ ZVOL (disk-1), which presents itself within the VM as /dev/sda, split into two partitions: a ~1GB EFI partition and the rest being the VM's root partition. This is what one would expect from an Ubuntu install (it auto-partitions the single meaningful ZVOL given to it). I can only speculate that the 1MB zvol was some attempt at a BIOS boot partition, but clearly it is not even used by the VM (unless you cropped that output, but it does not really matter).
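(Side note: those sizes can be read straight off the udev dump above; ID_PART_ENTRY_SIZE is given in 512-byte sectors.)

Code:
zd16p1: 2201600 * 512    = 1127219200 B    ~ 1.05 GiB  (EFI system partition)
zd16p2: 7437758464 * 512 = 3808132333568 B ~ 3.46 TiB  (root partition)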

I do not want to detract from troubleshooting the original issue at hand, and since you were getting buffer I/O errors from both zvols, i.e. even the one not in use, I can only come back to my earlier hypothesis [1].

All the rest, i.e. why use a zvol in this manner and where that mysterious 1MB unused zvol comes from (probably from having followed some Nextcloud install guide?), is not really the cause of your buffer I/O errors; it was just a source of confusion (for me, anyway). Surely it's suboptimal, but it does not explain how you got all those REFRESERV values all around (e.g. the 1MB unused zvol has a 3MB REFRESERV, which is even more bizarre). I can only imagine all you did was create the zvols (virtual disks for your VM), leave them NOT thin provisioned, and PVE maxed out the REFRESERV values.

Later on, I might try to set up a 6-vdev RAIDZ2 here over the PVE GUI and see what it creates, and perhaps file a bug report from there. The takeaway for me is to NOT use thick provisioning on ZFS. In terms of the PVE GUI, this means ticking the extra "Thin provision" box when creating these zvols.
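At the zfs level, the whole difference boils down to one flag; a minimal sketch, assuming a hypothetical test volume on the same pool (this is what the PVE checkbox should translate to):

Code:
# thick: ZFS sets refreservation to (roughly) the full volsize up front
zfs create -V 10G Nextcloud/thick-test
# thin: -s (sparse) skips the reservation entirely
zfs create -s -V 10G Nextcloud/thin-test
# compare the two
zfs get refreservation Nextcloud/thick-test Nextcloud/thin-test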

[1] https://forum.proxmox.com/threads/m...rt-passed-any-ideas.151260/page-3#post-685626
 
I'm pretty sure that the 1MB zd0 disk is used by Proxmox to provide EFI and its keys for the VM.
After it gets removed, the VM doesn't boot.
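If anyone wants to verify this on their own setup, qm config shows which zvol is attached as the EFI disk (the output lines below are illustrative, not from this VM):

Code:
qm config 101 | grep -E 'bios|efidisk'
# e.g.:
# bios: ovmf
# efidisk0: Nextcloud:vm-101-disk-0,size=1M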

[Attachment: Proxmox 4.jpg]
 
Alright, never mind, you got it!

https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_bios_and_uefi

Apologies, I do not really use OVMF, so I did not realise this. Alright then ... it is all "good", as in it is "not used" by the VM: the VM does not see it, but you need it there for the emulation.

What is weird are the REFRESERV values, and why you were getting buffer I/O errors on this zvol just because the pool itself had 0 AVAIL. So back to the original hypothesis still. :)
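For anyone landing here with the same symptoms, checking whether the pool has been starved of space is quick (pool name as in this thread):

Code:
# pool-level view
zpool list Nextcloud
# dataset-level view: AVAIL of 0 here would match the buffer I/O error hypothesis
zfs list -o name,used,avail,refreservation -r Nextcloud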
 
