Ok, I know this is a long read, but I guess it could be interesting to know the background.
I'm running Proxmox with a few VMs. Proxmox itself runs on an NVMe disk and the VMs run from a ZFS pool consisting of 2 x 3TB mirrored disks. I also have a 1TB disk that one of the VMs uses for storage ("BlueIrisData"). As the 1TB disk showed SMART errors, I decided to replace it with a new 1TB disk. I mounted the new disk, edited fstab, partitioned/formatted/labeled the disk (same as before), made it available as storage in the GUI, and assigned it as a disk to the VM. I then rebooted, and that is where the problem started.
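For reference, the entry I added to fstab looked roughly like this (the label, mount point and filesystem type here are placeholders, not my exact values):

# illustrative fstab entry for the new 1TB disk (placeholders only)
LABEL=BlueIrisData  /mnt/blueirisdata  ext4  defaults  0  2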
When I boot I get plenty of error messages and Proxmox enters emergency mode. One of the disks in the ZFS pool sounds horrible (loud ticking noises), and the only way to get out of emergency mode is to comment out the fstab entry for the new 1TB disk. But then I have no ZFS pool ("ZFSDrives") and no "BlueIrisData" drive. Proxmox can't find them in the GUI and times out looking for them.

These are my disks and partitions:
root@proxmox:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─sda1 8:1 0 931.5G 0 part
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 2.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 2.7T 0 part
└─sdc9 8:41 0 8M 0 part
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 119.2G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 512M 0 part /boot/efi
└─nvme0n1p3 259:3 0 118.7G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 29.5G 0 lvm /
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data-tpool 253:4 0 64.5G 0 lvm
│ └─pve-data 253:5 0 64.5G 1 lvm
└─pve-data_tdata 253:3 0 64.5G 0 lvm
└─pve-data-tpool 253:4 0 64.5G 0 lvm
└─pve-data 253:5 0 64.5G 1 lvm
sda1 can be mounted manually through the CLI. The ZFS pool, however, is lost.
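By mounting manually I mean roughly this (the mount point is just an example for this post):

mkdir -p /mnt/test
mount /dev/sda1 /mnt/test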
zpool import results in the following:
root@proxmox:~# zpool import
no pools available to import
zpool import ZFSDrives renders the following after a minute's wait:
root@proxmox:~# zpool import ZFSDrives
cannot import 'ZFSDrives': one or more devices is currently unavailable
I also tested
root@proxmox:~# zpool import ZFSDrives -f
with the same result.

zpool status -v was next:
root@proxmox:~# zpool status -v
no pools available
zdb -l /dev/sdb1 renders the following:
root@proxmox:~# zdb -l /dev/sdb1
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'ZFSDrives'
    state: 0
    txg: 3240347
    pool_guid: 171953915263981592
    errata: 0
    hostid: 3243471785
    hostname: 'proxmox'
    top_guid: 10039642933645888486
    guid: 13533403846860154412
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 10039642933645888486
        metaslab_array: 132
        metaslab_shift: 34
        ashift: 12
        asize: 3000578342912
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13533403846860154412
            path: '/dev/disk/by-id/ata-ST3000VN000-1H4167_Z300L7WE-part1'
            devid: 'ata-ST3000VN000-1H4167_Z300L7WE-part1'
            phys_path: 'pci-0000:00:17.0-ata-2.0'
            whole_disk: 1
            DTL: 788
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 5455137475863193922
            path: '/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0E5L7P3-part1'
            devid: 'ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0E5L7P3-part1'
            phys_path: 'pci-0000:00:17.0-ata-4.0'
            whole_disk: 1
            DTL: 446
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
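Since the label references the disks by their /dev/disk/by-id links, I guess the obvious sanity check is whether both of those links are still present (the serials in the grep pattern are copied from the label output above):

ls -l /dev/disk/by-id/ | grep -E 'Z300L7WE|WMC4N0E5L7P3'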