Hello,
after a crash and reboot I can't see my "storage" ZFS pool under /dev/zvol,
and at the boot prompt I see:
A start job is running for udev Wait for Complete Device Initialization ...
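I have not yet checked which unit is actually hanging; assuming the stuck job is systemd-udev-settle.service (the unit behind that message, if I am not mistaken), I could probably inspect it like this:

# show which systemd jobs are still pending
systemctl list-jobs
# show this boot's log for the udev settle unit (assumed unit name)
journalctl -b -u systemd-udev-settle.service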
ls -la /dev/zvol
total 0
drwxr-xr-x 3 root root 60 Apr 20 08:54 .
drwxr-xr-x 21 root root 5280 Apr 20 08:56 ..
drwxr-xr-x 2 root root 60 Apr 20 08:54 rpool
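Only rpool shows up. As a guess, maybe udev simply did not (re)create the nodes after the crash, so something like this might be worth a try:

# ask udev to replay device events and wait for it to finish (just a guess)
udevadm trigger
udevadm settle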
The pool itself shows up fine in zpool status, but the hard disk naming has changed:
  pool: storage
 state: ONLINE
  scan: scrub canceled on Thu Apr 19 23:46:01 2018
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000cca26bc8a788  ONLINE       0     0     0
            wwn-0x5000cca26bc90800  ONLINE       0     0     0
(these disks were sdc / sdd before the reboot)
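To double-check that these wwn-* ids really are the old sdc/sdd disks, I think this mapping should work:

# map the stable by-id names back to the kernel names (sdX)
ls -l /dev/disk/by-id/ | grep wwn
# or list the WWN column directly
lsblk -o NAME,SIZE,WWN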
-------------------------------------------------------------------------------------------------------
zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 10.4G 205G 104K /rpool
rpool/ROOT 1.85G 205G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.85G 205G 1.85G /
rpool/data 96K 205G 96K /rpool/data
rpool/swap 8.50G 209G 4.44G -
storage 8.04T 759G 96K /storage
storage/subvol-100-disk-1 8.04T 759G 6.29T /storage/subvol-100-disk-1
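One thing I notice: storage only contains subvol-100-disk-1, which looks like a container filesystem dataset, and if I understand it right only zvols (type volume) ever get nodes under /dev/zvol. To check whether the pool even has any, maybe:

# list only zvols; plain filesystem datasets never appear under /dev/zvol
zfs list -t volume -r storage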
-------------------------------------------------------------------------------------------------------
zpool import -c /etc/zfs/zpool.cache -N storage
cannot import 'storage': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
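So the pool is actually already imported, just apparently without its device nodes. Would it be safe to export and re-import it by id, something like this (untested)?

# export the pool, then re-import it using the stable by-id names
zpool export storage
zpool import -d /dev/disk/by-id storage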
-------------------------------------------------------------------------------------------------------
pveversion -V
proxmox-ve: 5.1-42 (running kernel: 4.15.15-1-pve)
pve-manager: 5.1-51 (running version: 5.1-51/96be5354)
pve-kernel-4.13: 5.1-44
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-2-pve: 4.13.16-47
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve4
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-25
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
pve-zsync: 1.6-15
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9
------------------------------------------------------------------
Do I have to delete /etc/zfs/zpool.cache? Any other ideas?
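Or, instead of deleting the cache file, could I simply rewrite its entry? As far as I know this regenerates it:

# re-point the pool at the cache file, which rewrites its entry
zpool set cachefile=/etc/zfs/zpool.cache storage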
Thanks!