Hi,
I am a long-term Proxmox user (> 5 years), currently running Proxmox 6.0-11 with four Debian and Ubuntu LXCs and one Windows 10 VM, all rock solid in a single-server setup.
Disk setup is two Samsung datacenter-class SSDs for the mirrored rpool (ZFS).
4x Seagate ST4000 for data in a mirrored setup (Adaptec PCIe SAS controller).
1x Seagate 3 TB drive connected via USB 3 for backups.
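For completeness, the two extra pools were created roughly like this; pool names and the exact vdev layout are from memory, so treat them as an approximation rather than an exact record (the rpool itself is just the standard Proxmox ZFS installation on the two SSDs):

    # data pool: the four ST4000 drives as two mirror vdevs (striped mirrors)
    zpool create data \
        mirror /dev/disk/by-id/<ST4000-1> /dev/disk/by-id/<ST4000-2> \
        mirror /dev/disk/by-id/<ST4000-3> /dev/disk/by-id/<ST4000-4>
    # backup pool: the single USB 3 drive, no redundancy
    zpool create backup /dev/disk/by-id/<USB-3TB>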
I updated to 6.1-5 (no warnings or anything of the sort) and noticed about 10 minutes after the reboot that the server had become unresponsive.
By then the web UI showed only grey "?" status icons for all containers and VMs.
On the console I saw that the ZFS pool on the 4 Seagate disks had gone into "STATE: UNAVAIL" (per zpool status -v) and was resilvering, with the drive reported as unavailable changing randomly.
Importing the USB pool produced errors as well. The rpool was completely unaffected and kept running normally ...
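What I ran on the console was essentially the following; the pool names are placeholders here, not necessarily what they are actually called on my box:

    # check the data pool on the SAS controller
    zpool status -v data
    # try to import and then check the USB backup pool
    zpool import backup
    zpool status -v backup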
I immediately rolled the rpool back to the last snapshot, which was still on 6.0-11 (kernel 5.0.21-4-pve), and after a reboot everything instantly returned to normal operation. Unfortunately I did not copy the logs before rolling back ...
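The rollback was essentially a zfs rollback of the root dataset to the last snapshot before the upgrade; the dataset name below is the Proxmox installer default and the snapshot name a placeholder, not copied from the machine:

    # roll the root filesystem back to the most recent (pre-6.1) snapshot
    zfs rollback rpool/ROOT/pve-1@<pre-6.1-snapshot>
    reboot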
Nevertheless ... has anyone had a similar experience? I read something about the 5.3 kernel having issues with USB-connected disks.
Are there any changes between these two versions that could result in such weird behavior?
Regards and "Happy New Year"!
Rainer