Volume disappeared after crash and reboot

yena

Renowned Member
Nov 18, 2011
Hello,
after a crash and reboot I can't see my "storage" ZFS volume under /dev/zvol,
and at the boot prompt I see:
"A start job is running for udev Wait for Complete Device Initialization ..."

ls -la /dev/zvol
total 0
drwxr-xr-x 3 root root 60 Apr 20 08:54 .
drwxr-xr-x 21 root root 5280 Apr 20 08:56 ..
drwxr-xr-x 2 root root 60 Apr 20 08:54 rpool
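
In case it helps: I'm not sure this is the right fix, but since the /dev/zvol symlinks are created by the ZFS udev rules, one thing that can be tried is asking udev to re-run them (just a sketch of what I mean):

# re-run the udev rules for block devices so the /dev/zvol/<pool>/<vol>
# symlinks get (re)created, then wait until udev has settled
udevadm trigger --subsystem-match=block
udevadm settle

# check again which zvol links exist now
ls -la /dev/zvol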

I can still see the pool, but the hard disk naming has changed:

  pool: storage
 state: ONLINE
  scan: scrub canceled on Thu Apr 19 23:46:01 2018
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000cca26bc8a788  ONLINE       0     0     0
            wwn-0x5000cca26bc90800  ONLINE       0     0     0

These were previously sdc / sdd.
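
(To double check which physical disks those wwn-* names are, one can list the persistent links and see which sdX they currently point to, roughly like this:)

# map the wwn-* names used by the pool to the current kernel names (sdX);
# the symlink targets show where each disk ended up after the reboot
ls -l /dev/disk/by-id/ | grep -E 'wwn-0x5000cca26bc8a788|wwn-0x5000cca26bc90800'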
-------------------------------------------------------------------------------------------------------

zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      10.4G   205G   104K  /rpool
rpool/ROOT                 1.85G   205G    96K  /rpool/ROOT
rpool/ROOT/pve-1           1.85G   205G  1.85G  /
rpool/data                   96K   205G    96K  /rpool/data
rpool/swap                 8.50G   209G  4.44G  -
storage                    8.04T   759G    96K  /storage
storage/subvol-100-disk-1  8.04T   759G  6.29T  /storage/subvol-100-disk-1

-------------------------------------------------------------------------------------------------------

zpool import -c /etc/zfs/zpool.cache -N storage
cannot import 'storage': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
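
I think the error above just means the pool is already imported under this name, so if the goal is to re-read the devices it probably has to be exported first and imported again (assuming nothing is using it at that moment), something like:

# stop anything using the pool first (containers/VMs with disks on it),
# then export and re-import, scanning the stable by-id device names
zpool export storage
zpool import -d /dev/disk/by-id -N storage

# mount the datasets again afterwards (-N skips mounting)
zfs mount -a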

-------------------------------------------------------------------------------------------------------
pveversion -V
proxmox-ve: 5.1-42 (running kernel: 4.15.15-1-pve)
pve-manager: 5.1-51 (running version: 5.1-51/96be5354)
pve-kernel-4.13: 5.1-44
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-2-pve: 4.13.16-47
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve4
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-25
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
pve-zsync: 1.6-15
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9
------------------------------------------------------------------

Do I have to delete /etc/zfs/zpool.cache? Any ideas?
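
One idea I'm considering instead of deleting it (not sure if this is the proper way) is to just regenerate the cache file for the pool and rebuild the initramfs so the boot environment picks it up:

# rewrite the cache file entry for this pool (the pool must be imported)
zpool set cachefile=/etc/zfs/zpool.cache storage

# make sure the updated cache ends up in the initramfs used at boot
update-initramfs -u -k all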

THANKS!
 
I have found the cause: we had inserted 7 new hard disks (hot swap).
After the crash the server rebooted and re-mapped every HD, so every entry like sdc/sdd/sde has changed!
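
Something like this should show where every disk ended up after the re-enumeration (kernel name next to serial/WWN):

# show the current kernel name of every disk next to its serial and WWN,
# to see how the hot-swapped disks shifted the sdX numbering
lsblk -d -o NAME,SIZE,SERIAL,WWN,MODEL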

zpool status -v
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h1m with 0 errors on Sun Apr 8 00:25:18 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors

  pool: storage
 state: ONLINE
  scan: scrub canceled on Thu Apr 19 23:46:01 2018
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000cca26bc8a788  ONLINE       0     0     0
            wwn-0x5000cca26bc90800  ONLINE       0     0     0

errors: No known data errors

In fact I see wwn-0x5000cca26bc8a788 and wwn-0x5000cca26bc90800, not sdc and sdd!

If I export and then import the storage pool I can access the content, but I still can't see the volume under /dev/zvol.

Why does Proxmox re-map the HD devices?
Can I re-assign the old names?
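
If anyone knows a cleaner way please correct me, but as far as I understand the sdX names just follow the kernel enumeration order and are not meant to be stable. If friendlier stable names are wanted, maybe /etc/zfs/vdev_id.conf aliases could work (the alias names below are just examples I made up, not anything Proxmox requires):

# /etc/zfs/vdev_id.conf -- map friendly names onto the stable by-id paths
# ("disk-c" / "disk-d" are just example aliases)
alias disk-c  /dev/disk/by-id/wwn-0x5000cca26bc8a788
alias disk-d  /dev/disk/by-id/wwn-0x5000cca26bc90800

and then regenerate the /dev/disk/by-vdev/ links and re-import the pool with them:

udevadm trigger
zpool export storage
zpool import -d /dev/disk/by-vdev storage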

------------------------------------------------------------------------------------------------------------------
blkid
/dev/sda2: LABEL="rpool" UUID="14643568233080075857" UUID_SUB="14927214585550116699" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="f752672e-d6ee-432c-b106-76026e0b1838"
/dev/sdb2: LABEL="rpool" UUID="14643568233080075857" UUID_SUB="10388335358908211430" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="d1b49aa4-abfe-4099-ad87-07c504030f6e"
/dev/sde1: LABEL="storage" UUID="11726730913690512857" UUID_SUB="13614383425813560384" TYPE="zfs_member" PARTLABEL="zfs-163a8f544ac91017" PARTUUID="8175d2c8-665d-0d44-b8ff-efc4dc8a07ec"
/dev/sdd1: LABEL="storage" UUID="11726730913690512857" UUID_SUB="11101969648437288177" TYPE="zfs_member" PARTLABEL="zfs-d756de1bae55e0e3" PARTUUID="c4a17d43-130b-654f-8110-9c0eff68534a"
/dev/sdg1: LABEL="backup" UUID="4854663689251531744" UUID_SUB="16969322726204881097" TYPE="zfs_member" PARTLABEL="zfs-79edf2f0937a3210" PARTUUID="765a9187-1d49-794e-8d1a-8802b60c8f02"
/dev/sdm1: UUID="5cc6e0d0-6ff9-4603-93cf-ae623be3f7ab" TYPE="ext4" PTTYPE="dos" PARTUUID="67005421-01"
/dev/sdm2: UUID="fbbb87db-9e0e-4d50-8948-d3dee691f766" TYPE="ext4" PARTUUID="67005421-02"
/dev/sdn1: UUID="5cc6e0d0-6ff9-4603-93cf-ae623be3f7ab" TYPE="ext4" PTTYPE="dos" PARTUUID="67005421-01"
/dev/sdn2: UUID="fbbb87db-9e0e-4d50-8948-d3dee691f766" TYPE="ext4" PARTUUID="67005421-02"
/dev/sdo1: UUID="5cc6e0d0-6ff9-4603-93cf-ae623be3f7ab" TYPE="ext4" PTTYPE="dos" PARTUUID="67005421-01"
/dev/sdo2: UUID="fbbb87db-9e0e-4d50-8948-d3dee691f766" TYPE="ext4" PARTUUID="67005421-02"
/dev/sdp1: UUID="5cc6e0d0-6ff9-4603-93cf-ae623be3f7ab" TYPE="ext4" PTTYPE="dos" PARTUUID="67005421-01"
/dev/sdp2: UUID="fbbb87db-9e0e-4d50-8948-d3dee691f766" TYPE="ext4" PARTUUID="67005421-02"
/dev/zd0: UUID="0778213e-cd23-4494-93e5-be2ad2fc2ced" TYPE="swap"
/dev/sda1: PARTUUID="de5be1a5-35e2-4600-b5cf-117cb6fd9d63"
/dev/sda9: PARTUUID="8da984ce-7008-4a3b-bac2-f0e7c1f0fd3a"
/dev/sdb1: PARTUUID="6adeff74-b7a4-40c2-87f3-a14c9b227305"
/dev/sdb9: PARTUUID="63e28043-a127-41eb-bf89-38341f4f0c15"
/dev/sde9: PARTUUID="24818d87-f86b-4d47-a98d-11c5118b35ee"
/dev/sdd9: PARTUUID="5504f7e8-1187-214c-adce-6ef596754abf"
/dev/sdg9: PARTUUID="a37a268c-247e-1848-9183-e93a17e5c297"
-------------------------------------------------------------------------------------------------------------------------

ls -la /sys/devices/virtual/block/
total 0
drwxr-xr-x 11 root root 0 Apr 20 11:37 .
drwxr-xr-x 21 root root 0 Apr 20 09:37 ..
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop0
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop1
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop2
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop3
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop4
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop5
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop6
drwxr-xr-x 9 root root 0 Apr 20 09:37 loop7
drwxr-xr-x 8 root root 0 Apr 20 09:38 zd0
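
Maybe this is relevant: if I understand correctly, only datasets of type "volume" get a device node under /dev/zvol, so a quick way to check what should actually show up there is:

# list only the zvols (type volume) -- filesystems/subvols never get
# a /dev/zvol node, they are mounted at their mountpoint instead
zfs list -t volume -o name,volsize

# check what type a specific dataset is
zfs get type storage/subvol-100-disk-1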

-----------------------------------------------------------------------------------------------------------------------

Thanks!!
 
