Hi,
We are testing crash recovery of Proxmox VE 8.
We have a dedicated test server at OVH; for this test there are two disks, partitioned as follows.
Here is how our default partition template looks:
#  Type     Filesystem  Mount Point  Volume Name (LVM/ZFS)  RAID  Partition Size  Used Space
1  Primary  ZFS         /boot        -                      1     2 x 1.0 GiB     2.0 GiB
2  Primary  ZFS         /            -                      1     2 x 20.0 GiB    40.0 GiB
3  Primary  SWAP        swap         -                      -     2 x 1.0 GiB     2.0 GiB
4  Primary  ZFS         /var/lib/vz  data                   1     2 x 1.8 TiB     3.6 TiB
*
We were reproducing the datacenter-level firewall lockout issue and wanted to stop the PVE firewall service from starting, as described here:
https://forum.proxmox.com/threads/h...ter-lock-out-datacenter-level-firewall.60557/
i.e. by masking the service.
The rescue setup is booting the OVH rescue OS (Debian). We don't have access to any KVM console; we can only access the files by mounting the disks from the rescue Debian.
First we booted the rescue OS, imported the pool zp0 that holds the Proxmox installation, mounted it properly, masked the service, and restarted the server, but it didn't boot. We then made a few more attempts, but they also failed: Proxmox would no longer boot normally, only into the rescue environment.
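For reference, the sequence we run in the rescue OS looks roughly like this. This is a sketch of our procedure, not verified output; the pool name zp0 and the service name pve-firewall are from our setup, and the bind mounts are what we believe is needed for a working chroot:

```shell
# Import the pool with an alternate root so its datasets mount
# under /mnt instead of over the rescue OS filesystem
zpool import -f -R /mnt zp0

# Bind-mount the pseudo-filesystems the chroot needs
for d in dev proc sys; do mount --rbind "/$d" "/mnt/$d"; done

# Mask the firewall service from inside the installed system
chroot /mnt systemctl mask pve-firewall

# Tear down and export the pool before rebooting
for d in dev proc sys; do umount -R "/mnt/$d"; done
zpool export zp0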
We then set up a clean Proxmox again with identical partitions. Right after installation it worked perfectly, but we again booted into the rescue environment and simply mounted the Proxmox system files, changing nothing, and booting the normal way failed once more. As mentioned, we have no KVM console, so for now it is hard to diagnose where the boot process stops.
Maybe you have suggestions on what to check? Perhaps we are not doing something properly when importing the ZFS pool (we do force the import), or perhaps we have to rebuild something afterwards so it can boot again, or maybe the forced import loses some state.
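One thing we suspect (an assumption on our side, not verified): if the pool is imported by the rescue OS with a different hostid and not cleanly exported, or if the cached /etc/zfs/zpool.cache inside the installed system no longer matches the on-disk state, the initramfs may refuse to import the root pool at boot. A sketch of the repair attempt we could try from the rescue OS, assuming the pool name zp0 and the same chroot setup as above:

```shell
# Import under an alternate root and prepare the chroot
zpool import -f -R /mnt zp0
for d in dev proc sys; do mount --rbind "/$d" "/mnt/$d"; done

chroot /mnt /bin/bash <<'EOF'
# Refresh the pool cache so it matches the current on-disk state
zpool set cachefile=/etc/zfs/zpool.cache zp0
# Rebuild the initramfs so the refreshed cache is embedded
# (on some Proxmox setups "proxmox-boot-tool refresh" may also be needed)
update-initramfs -u -k all
EOF

for d in dev proc sys; do umount -R "/mnt/$d"; done
zpool export zp0   # export before rebooting so the next boot can import cleanly
```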
*
From the rescue OS shell (the partition IDs below are randomized):
> lsblk -f
sda
├─sda1 vfat EFI_SYSPART 1234-ABCD
├─sda2 zfs_member zp0 9876543212345678901 # sdb2 (ZFS mirror)
├─sda3 swap swap-sda3 a1b2c3d4-5678-9101-1121-314151617181
├─sda4 zfs_member data 5647382910123456789 # sdb4 (ZFS mirror)
└─sda5 iso9660 config-2 2025-01-01-00-00-00-00
sdb
├─sdb1 vfat EFI_SYSPART ABCD-1234
├─sdb2 zfs_member zp0 9876543212345678901 # sda2 (ZFS mirror)
├─sdb3 swap swap-sdb3 1a2b3c4d-5678-9101-1121-314151617182
└─sdb4 zfs_member data 5647382910123456789 # sda4 (ZFS mirror)