Hello, after a crash and reboot my server cannot see any volumes under /dev/zvol except rpool.
If I try to create a new ZFS volume and import it using the web panel, the result is the same: nothing shows up under /dev/zvol.
So I can't start my LXC VPS (I have only one LXC VPS hosted).
I have an important volume (RAID1) named "storage".
I can see it with zfs list:
--------------------------------------------------------
root@lazara:~# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.76G   205G   104K  /rpool
rpool/ROOT                 1.25G   205G    96K  /rpool/ROOT
rpool/ROOT/pve-1           1.25G   205G  1.25G  /
rpool/data                   96K   205G    96K  /rpool/data
rpool/swap                 8.50G   214G    56K  -
storage                    8.04T   759G    96K  /storage
storage/subvol-100-disk-1  8.04T   759G  6.29T  /storage/subvol-100-disk-1
--------------------------------------------------------------------------------
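As far as I understand, only block volumes (zvols) get device nodes under /dev/zvol, while filesystem datasets (like the subvol-* datasets used for LXC) do not. Just as a diagnostic sketch, this is what I would run to check which datasets are actually volumes:
--------------------------------------------------------------------------------
# list only block volumes (zvols); filesystem datasets never get /dev/zvol nodes
zfs list -t volume
# show whether the container dataset is a filesystem or a volume
zfs get type storage/subvol-100-disk-1
--------------------------------------------------------------------------------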
But nothing on /dev/zvol:
root@lazara:~# ls -la /dev/zvol
total 0
drwxr-xr-x 3 root root 60 Apr 20 23:07 .
drwxr-xr-x 21 root root 4900 Apr 20 23:12 ..
drwxr-xr-x 2 root root 60 Apr 20 23:07 rpool
--------------------------------------------------------------------------------
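In case the device links simply were not recreated after the crash, this is the sketch I was thinking of trying, to ask udev to regenerate the /dev/zvol symlinks (assuming it is safe to run on a live system):
--------------------------------------------------------------------------------
# ask udev to replay its rules for block devices and recreate /dev/zvol/* links
udevadm trigger --subsystem-match=block
udevadm settle
ls -la /dev/zvol
--------------------------------------------------------------------------------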
root@lazara:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors

  pool: storage
 state: ONLINE
  scan: scrub canceled on Thu Apr 19 23:46:01 2018
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000cca26bc8a788  ONLINE       0     0     0
            wwn-0x5000cca26bc90800  ONLINE       0     0     0

errors: No known data errors
--------------------------------------------------------------------------------
root@lazara:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool     222G  1.25G   221G         -     0%     0%  1.00x  ONLINE  -
storage  9.06T  8.04T  1.02T         -    17%    88%  1.00x  ONLINE  -
--------------------------------------------------------------------------------
Previously, we inserted 7 new HDs, and after the reboot the drive letters changed:
zpool status used to show sdc and sdd as the descriptors of the mirror, while now it shows the by-id descriptors.
In the web interface, the disks that used to be sdc and sdd now appear as sdd and sde.
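To see which drive letters the by-id descriptors correspond to after the reboot, this is the quick check I would try:
--------------------------------------------------------------------------------
# map the persistent by-id names to their current sdX letters
ls -l /dev/disk/by-id/ | grep -E 'wwn-0x5000cca26bc8a788|wwn-0x5000cca26bc90800'
--------------------------------------------------------------------------------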
-----------------------------------------------------------------------------------------------
pveversion -V
proxmox-ve: 5.1-42 (running kernel: 4.13.16-2-pve)
pve-manager: 5.1-51 (running version: 5.1-51/96be5354)
pve-kernel-4.13: 5.1-44
pve-kernel-4.13.16-2-pve: 4.13.16-47
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve4
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-25
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9
-----------------------------------------------------------------------
I have also opened a ticket: Ticket ID: KBI-845-96797
Thanks!