I made a mistake and added two 6TB drives as their own vdevs when I intended to add both disks mirrored as one vdev. When I tried to correct my mistake by removing the drives, I got the following error:
cannot remove /dev/disk/by-id/wwn-0x5000c500********: invalid config; all top-level vdevs must have the same sector size and not be raidz.
I am at a loss as to what is happening. As far as I can tell the sector sizes are fine. I could see the "not be raidz" portion kicking in, except neither of the drives I'm trying to remove is raidz, and it won't let me remove either of the single 6TB drives on its own, let alone both. Any ideas on how I get out of this with my data intact are welcome.
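For context, this is roughly how I got here (commands reconstructed from memory, so treat them as a sketch; the device names are the two 6TB disks as they appear in the zdb output further down):
Bash:
# What I actually ran: each disk became its own top-level vdev
zpool add Oceania /dev/disk/by-id/wwn-0x5000c5008645edbf
zpool add Oceania /dev/disk/by-id/wwn-0x5000c500992859ef

# What I meant to run: both disks as one mirrored vdev
# zpool add Oceania mirror /dev/disk/by-id/wwn-0x5000c5008645edbf /dev/disk/by-id/wwn-0x5000c500992859ef

# The removal attempt that produces the error above
zpool remove Oceania /dev/disk/by-id/wwn-0x5000c5008645edbf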
Here is what I've tried so far (in order):
- Confirmed ashift=12 for all relevant vdevs (check sketched just after this list).
- Confirmed device_removal feature is enabled.
- Pool upgraded, exported, re-imported.
- Server rebooted.
- In desperation, attempted to add a 512-byte-sector drive with ashift=9. That failed due to the physical sector size mismatch; it just yelled at me and called my mother names for birthing me.
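For the first two items, the checks looked roughly like this (a sketch, using the same zdb invocation as further down):
Bash:
# Every top-level vdev reports ashift: 12
zdb -e -C Oceania | grep ashift

# The removal feature is enabled on the pool
zpool get feature@device_removal Oceania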
Bash:
zpool status Oceania
  pool: Oceania
 state: ONLINE
  scan: scrub repaired 0B in 08:22:11 with 0 errors on Sun May 11 08:46:16 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        Oceania                     ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000c500625743c3  ONLINE       0     0     0
            wwn-0x5000c5009927d287  ONLINE       0     0     0
            scsi-35000c50083a97dcb  ONLINE       0     0     0
            wwn-0x5000c5008645f01b  ONLINE       0     0     0
            scsi-35000c50085b6864b  ONLINE       0     0     0
            scsi-35000c50062570d77  ONLINE       0     0     0
          wwn-0x5000c5008645edbf    ONLINE       0     0     0
          wwn-0x5000c500992859ef    ONLINE       0     0     0

errors: No known data errors
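Note the two 6TB disks sitting alongside raidz2-0 as their own top-level vdevs. Since a removal has to evacuate whatever data already landed on them, per-vdev size and allocation can be checked with:
Bash:
zpool list -v Oceania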
My pveversion:
Bash:
proxmox-ve: 8.4.0 (running kernel: 6.8.12-10-pve)
pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d)
proxmox-kernel-helper: 8.1.1
pve-kernel-5.15: 7.4-13
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8: 6.8.12-10
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.8-4-pve-signed: 6.8.8-4
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
pve-kernel-5.4: 6.4-19
pve-kernel-5.15.152-1-pve: 5.15.152-1
pve-kernel-5.15.149-1-pve: 5.15.149-1
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.4.195-1-pve: 5.4.195-1
pve-kernel-5.4.189-2-pve: 5.4.189-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 16.2.15+ds-0+deb12u1
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.1-1
proxmox-backup-file-restore: 3.4.1-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.10
pve-cluster: 8.1.0
pve-container: 5.2.6
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.1
pve-firmware: 3.15-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.2
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.12
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2
My zpool get all Oceania | grep feature@
Bash:
Oceania feature@async_destroy enabled local
Oceania feature@empty_bpobj active local
Oceania feature@lz4_compress active local
Oceania feature@multi_vdev_crash_dump enabled local
Oceania feature@spacemap_histogram active local
Oceania feature@enabled_txg active local
Oceania feature@hole_birth active local
Oceania feature@extensible_dataset active local
Oceania feature@embedded_data active local
Oceania feature@bookmarks enabled local
Oceania feature@filesystem_limits enabled local
Oceania feature@large_blocks enabled local
Oceania feature@large_dnode enabled local
Oceania feature@sha512 enabled local
Oceania feature@skein enabled local
Oceania feature@edonr enabled local
Oceania feature@userobj_accounting active local
Oceania feature@encryption enabled local
Oceania feature@project_quota active local
Oceania feature@device_removal enabled local
Oceania feature@obsolete_counts enabled local
Oceania feature@zpool_checkpoint enabled local
Oceania feature@spacemap_v2 active local
Oceania feature@allocation_classes enabled local
Oceania feature@resilver_defer enabled local
Oceania feature@bookmark_v2 enabled local
Oceania feature@redaction_bookmarks enabled local
Oceania feature@redacted_datasets enabled local
Oceania feature@bookmark_written enabled local
Oceania feature@log_spacemap active local
Oceania feature@livelist enabled local
Oceania feature@device_rebuild enabled local
Oceania feature@zstd_compress enabled local
Oceania feature@draid enabled local
Oceania feature@zilsaxattr active local
Oceania feature@head_errlog active local
Oceania feature@blake3 enabled local
Oceania feature@block_cloning enabled local
Oceania feature@vdev_zaps_v2 active local
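Side note: an active pool checkpoint would also block device removal, but feature@zpool_checkpoint above is only "enabled", not "active", so no checkpoint exists. If I understand the pool properties correctly, this double-checks it (a "-" value means no checkpoint):
Bash:
zpool get checkpoint Oceania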
And finally: zdb -e -C Oceania
Bash:
MOS Configuration:
        version: 5000
        name: 'Oceania'
        state: 0
        txg: 26622528
        pool_guid: 483873003047153866
        errata: 0
        hostid: 1050758217
        hostname: 'Dome'
        com.delphix:has_per_vdev_zaps
        vdev_children: 3
        vdev_tree:
            type: 'root'
            id: 0
            guid: 483873003047153866
            create_txg: 4
            com.klarasystems:vdev_zap_root: 196
            children[0]:
                type: 'raidz'
                id: 0
                guid: 1373771604453732184
                nparity: 2
                metaslab_array: 256
                metaslab_shift: 34
                ashift: 12
                asize: 24004631986176
                is_log: 0
                create_txg: 4
                com.delphix:vdev_zap_top: 129
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 3896591885176642856
                    path: '/dev/disk/by-id/scsi-35000c500625743c3-part1'
                    devid: 'scsi-35000c500625743c3-part1'
                    phys_path: 'pci-0000:02:00.0-scsi-0:0:3:0'
                    whole_disk: 1
                    DTL: 286
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 130
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 13495598594336016122
                    path: '/dev/disk/by-id/wwn-0x5000c5009927d287-part1'
                    devid: 'scsi-35000c5009927d287-part1'
                    phys_path: 'pci-0000:02:00.0-scsi-0:0:4:0'
                    whole_disk: 1
                    DTL: 99
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 92
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 10614957903682690466
                    path: '/dev/disk/by-id/scsi-35000c50083a97dcb-part1'
                    devid: 'scsi-35000c50083a97dcb-part1'
                    phys_path: 'pci-0000:02:00.0-scsi-0:0:5:0'
                    whole_disk: 1
                    DTL: 284
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 132
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 2401746666217853313
                    path: '/dev/disk/by-id/wwn-0x5000c5008645f01b-part1'
                    devid: 'scsi-35000c5008645f01b-part1'
                    phys_path: 'pci-0000:02:00.0-scsi-0:0:6:0'
                    whole_disk: 1
                    DTL: 81
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 79
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 59683984656377264
                    path: '/dev/disk/by-id/scsi-35000c50085b6864b-part1'
                    devid: 'scsi-35000c50085b6864b-part1'
                    phys_path: 'pci-0000:02:00.0-scsi-0:0:8:0'
                    whole_disk: 1
                    DTL: 282
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 134
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 7200529984077618175
                    path: '/dev/disk/by-id/scsi-35000c50062570d77-part1'
                    devid: 'scsi-35000c50062570d77-part1'
                    phys_path: 'pci-0000:02:00.0-scsi-0:0:9:0'
                    whole_disk: 1
                    DTL: 281
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 135
            children[1]:
                type: 'disk'
                id: 1
                guid: 3200183702739423205
                path: '/dev/disk/by-id/wwn-0x5000c5008645edbf-part1'
                devid: 'scsi-35000c5008645edbf-part1'
                phys_path: 'pci-0000:02:00.0-scsi-0:0:1:0'
                whole_disk: 1
                metaslab_array: 1291
                metaslab_shift: 34
                ashift: 12
                asize: 6001160355840
                is_log: 0
                create_txg: 26621341
                com.delphix:vdev_zap_leaf: 1157
                com.delphix:vdev_zap_top: 1161
            children[2]:
                type: 'disk'
                id: 2
                guid: 7708736973149512418
                path: '/dev/disk/by-id/wwn-0x5000c500992859ef-part1'
                devid: 'scsi-35000c500992859ef-part1'
                phys_path: 'pci-0000:02:00.0-scsi-0:0:0:0'
                whole_disk: 1
                metaslab_array: 1682
                metaslab_shift: 34
                ashift: 12
                asize: 6001160355840
                is_log: 0
                create_txg: 26621646
                com.delphix:vdev_zap_leaf: 1560
                com.delphix:vdev_zap_top: 1561
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data
            com.klarasystems:vdev_zaps_v2
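And for completeness, the end state I'm trying to reach once removal works would be something like this (one possible sequence, using the device names from the zdb output above):
Bash:
# Evacuate one of the stray top-level vdevs
zpool remove Oceania wwn-0x5000c5008645edbf

# Once the removal finishes, attach that disk as a mirror of the remaining 6TB vdev
zpool attach Oceania wwn-0x5000c500992859ef /dev/disk/by-id/wwn-0x5000c5008645edbf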