Sudden datastore size error

frankz

Renowned Member
Nov 16, 2020
Hello everyone, for the second time it has happened to me that a datastore formatted with ZFS suddenly changed size. The practical example, or rather the real situation: my datastore is a 1 TB disk. After a restart of the server no errors were reported, but I realized that the size had changed to 256 GB! I exported the pool, reformatted the disk, and everything resumed working at the correct disk size. What happened?
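
For reference, a quick way to compare what the kernel reports for the physical disk against what ZFS reports for the pool (a minimal sketch; /dev/sdX and the pool name are placeholders for the actual device and pool):

Code:
# size of the physical disk as the kernel sees it, in bytes
lsblk -b -o NAME,SIZE,MODEL /dev/sdX

# size, free space and health of the ZFS pool on top of it
zpool list
zpool status

If the two disagree right after a reboot, that narrows down whether the disk itself or the pool metadata changed.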
 
you need to give a bit more details about what is going on and what you expect to be going on

- pveversion -v
- storage.cfg
- what is your 'datastore'?
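
For reference, these can be gathered as follows (a sketch; proxmox-backup-manager is the PBS-side counterpart of pveversion, and the datastore list shows what the 'datastore' actually points at):

Code:
# on the PVE host
pveversion -v
cat /etc/pve/storage.cfg

# on the PBS host
proxmox-backup-manager versions --verbose
proxmox-backup-manager datastore list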
 
Hi Fabian, and thank you for responding. As I said previously, I can simply say that on the PBS a shared and working 1 TB USB datastore suddenly changed size, as if it had become only 256 GB. That's all. I'm attaching the versions of PVE and PBS. Unfortunately, I was in a hurry and have already recreated the datastore (fdisk and option G). However, if it is possible to obtain info from the ZFS logs, give me some indications and I will try.



Code:
proxmox-backup: 2.2-1 (running kernel: 5.15.35-2-pve)
proxmox-backup-server: 2.2.3-1 (running version: 2.2.3)
pve-kernel-5.15: 7.2-4
pve-kernel-helper: 7.2-4
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-4
pve-kernel-5.15.35-2-pve: 5.15.35-4
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
proxmox-backup-docs: 2.2.3-1
proxmox-backup-client: 2.2.3-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.5.1
pve-xtermjs: 4.16.0-1
smartmontools: 7.2-pve3
zfsutils-linux: 2.1.4-pve1

Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-2-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-4
pve-kernel-helper: 7.2-4
pve-kernel-5.13: 7.1-9
pve-kernel-5.0: 6.0-11
pve-kernel-5.15.35-2-pve: 5.15.35-4
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
 
PBS won't ever change the size of physical disks, and it only allows formatting full, currently unused disks (and will always use the full disk for that as well). maybe your USB drive is fake/broken?
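
If a fake drive is the suspicion, the f3 tools can probe the real capacity (a sketch, assuming the f3 package is available; destructive mode overwrites the device, so only run it on a disk whose data you no longer need, and /dev/sdX is a placeholder):

Code:
apt install f3
# WARNING: --destructive overwrites data on the device
f3probe --destructive --time-ops /dev/sdX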
 
I don't know, Fabian. To tell the truth, every now and then in the logs I read a pending sector, not a bad sector, so I think this error is generated by the latency of the PC dedicated to the PBS, or perhaps by the disk itself. But shouldn't I be able to see what happened in the ZFS history?
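
Pending sectors show up in the drive's SMART attributes; a sketch of how to check them (assuming the USB bridge passes SMART through, which not all enclosures do; /dev/sdX is a placeholder):

Code:
# full SMART report; attribute 197 (Current_Pending_Sector) is the one to watch
smartctl -a /dev/sdX
# some USB bridges need the SAT driver forced
smartctl -d sat -a /dev/sdX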
 
the ZFS history is stored on the pool; if you destroyed the pool, nothing will be visible.
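
For future reference, the history of a still-existing pool can be read like this (the pool name 'tank' is a placeholder):

Code:
# -i includes internal events, -l adds user/host/timestamp details
zpool history -il tank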
 
Ok, thank you. Anyway, if it happens again, I will make a point of writing a post about this anomaly. Thank you for your help.