(Bug?) Proxmox VE 6.0-4 - Backup Storage on ZFS

Hello, I recently upgraded Proxmox from 5.3 to 6.0 and the process went very smoothly, so first of all I wanted to thank the team for making the upgrade so easy. However, I may have discovered a bug tonight while rebooting my server to replace a failed disk in my ZFS pool.

After the reboot, my ZFS pool failed to mount. On further investigation, this was because the zfs-mount.service systemd unit was in a failed state: the mount point it wanted to mount the pool to already existed.

Curious, I thought. But no worries: I confirmed the directory had no contents, deleted it, restarted the unit, and things started up just fine afterwards. However, I later discovered that my nfs-server.service systemd unit had also entered a failed state for the same reason. I restarted it and it came back up just fine, since the ZFS filesystem was already mounted after I had cleaned up the strange directory.

Anyway, later that evening I had to restart the server for another, unrelated reason, and discovered on boot that both of those units had failed once more! That mysterious directory was back again, and this time it contained a subdirectory, `backups/`. Interesting, I thought. I spent some time checking my crontab and my enabled systemd units but could not track down what was creating this directory on boot.

So I set up a `watch ls` on the directory where my mountpoint lived and then deleted the directory. A few moments later it showed up again. I repeated this several times before realizing that Proxmox itself must be creating the directory. I navigated to the Backups section for my datacenter and disabled my VM backups altogether, then tried again, but the directory was still showing up. So I navigated to Storage and disabled the backup storage entirely, after which the directory was no longer created.

I did not notice this behavior in Proxmox 5.3 or in any previous version of Proxmox that I have run on my hardware. To me this seems like a bug, but I could be wrong. Is it not kosher to put your VM backups on the ZFS pool present within Proxmox? This race condition seems unavoidable if you create the backup storage on your ZFS pool, since Proxmox starts up before the zfs-mount.service unit attempts to mount the pool.
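For anyone wanting to confirm the same situation, roughly the following should show it (the pool name 'tank' is just a placeholder, adjust it to your own pool):

# did the mount unit fail, and is the mountpoint already populated?
systemctl status zfs-mount.service
zfs get mounted,mountpoint tank   # 'tank' is a placeholder pool name
ls -la /tank                      # any pre-existing files here keep 'zfs mount -a' from mounting while overlay=off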

I'll proceed without VM backups enabled for the time being, but I would greatly appreciate some advice from the Proxmox team on whether this is user error or a legitimate bug. Thank you for your time!
 
The folders are automatically created on an activated directory-based storage, so I assume the ZFS pool has been configured as such. Set 'is_mountpoint yes' in the storage.cfg entry for that storage; this makes Proxmox check that the path is mounted before creating the directories. And if you don't want the directories to be created at all, set 'mkdir no' as well.
 
Everything works after a reboot now. One correction: the right variant is 'mkdir 0'.
My working config from storage.cfg:

dir: storagepool-backup
path /storagepool/backup/
is_mountpoint yes
mkdir 0
content backup
maxfiles 10
shared 0
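If I'm not mistaken, the same two options can also be set from the CLI instead of editing storage.cfg by hand; roughly like this, using the storage name from the config above (double-check the exact option names with 'pvesm help set'):

pvesm set storagepool-backup --is_mountpoint yes --mkdir 0
pvesm status   # the storage should only show as active once the pool is actually mounted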
 

Alwin, I need help. Something strange is going on with mounting ZFS pools in 6.0.
I think there is a race with mounting the ZFS pool on startup, but I cannot understand where.
I upgraded from 5.4 to 6.0.

What I did:

1. Create the pool:
zpool create -f -o ashift=12 -O compression=lz4 -O checksum=edonr samsung-pm863a-S2NFNXAG901406L /dev/mapper/35002538c40066bee
I use multipath for this SSD (external SAS enclosure).

2. Add the ZFS storage in /etc/pve/storage.cfg:
zfspool: sams-pm863a-406L
pool samsung-pm863a-S2NFNXAG901406L
blocksize 16k
content rootdir,images
sparse 1

3. Move all containers to this storage.

4. After a reboot, all containers are stopped:
root@hst-cl-prx-06:~# pct list
VMID Status Lock Name
601 stopped vps
602 stopped vs.vps
603 stopped 23.kg
604 stopped vm-djng-01
605 stopped www.myweb
606 stopped www.testing
607 stopped dns1
608 stopped ga
609 stopped ma.to
610 stopped vps-01.
611 stopped vps-01.
612 stopped vps-02.
613 stopped dream
614 stopped dental

5. prx-06:~# ls -la /samsung-pm863a-S2NFNXAG901406L/
total 9
drwxr-xr-x 2 root root 2 Jul 25 02:52 .
drwxr-xr-x 24 root root 24 Jul 25 18:16 ..

It's empty; the container directories are not mounted.

6. Export the pool and wait 10 seconds:
prx-06:~# zpool export samsung-pm863a-S2NFNXAG901406L
and then:
prx-06:~# ls -la /samsung-pm863a-S2NFNXAG901406L/
total 136
drwxr-xr-x 16 root root 16 Jul 25 14:27 .
drwxr-xr-x 24 root root 24 Jul 25 18:16 ..
drwxr-xr-x 21 root root 21 Jul 25 14:03 subvol-601-disk-0
drwxr-xr-x 21 root root 21 Jul 25 14:12 subvol-602-disk-0
drwxr-xr-x 21 100000 100000 21 Jul 25 17:48 subvol-603-disk-0
drwxr-xr-x 21 root root 21 Jul 25 14:27 subvol-604-disk-0
drwxr-xr-x 21 root root 23 Jul 25 14:27 subvol-605-disk-0
drwxr-xr-x 21 root root 21 Jul 25 14:27 subvol-606-disk-0
drwxr-xr-x 20 root root 20 Jul 25 14:40 subvol-607-disk-0
drwxr-xr-x 21 root root 21 Jul 25 14:40 subvol-608-disk-0
drwxr-xr-x 22 root root 22 Jul 25 14:41 subvol-609-disk-0
drwxr-xr-x 22 root root 23 Jul 25 14:41 subvol-610-disk-0
drwxr-xr-x 22 root root 22 Jul 25 14:41 subvol-611-disk-0
drwxr-xr-x 21 root root 22 Jul 25 14:41 subvol-612-disk-0
drwxr-xr-x 21 root root 21 Jul 25 14:41 subvol-613-disk-0
drwxr-xr-x 21 root root 21 Jul 25 14:41 subvol-614-disk-0

System info:
prx-06:~# pveversion -V
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
ceph: 14.2.1-pve2
ceph-fuse: 14.2.1-pve2
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksmtuned: 4.20150325+b1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-4
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
pve-zsync: 2.0-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

How can I stop the directory from being created before the pool is imported, or how can I debug that?
Thanks!
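For reference, a rough way to see the boot ordering and to make sure the pool is imported early (the unit names assume the stock ZFS systemd units shipped with PVE 6):

# in which order did the import/mount units and the guest startup run on this boot?
journalctl -b -u zfs-import-cache -u zfs-import-scan -u zfs-mount -u pve-guests --no-pager

# make sure the pool is recorded in the cachefile so zfs-import-cache.service imports it at boot
zpool set cachefile=/etc/zfs/zpool.cache samsung-pm863a-S2NFNXAG901406L
systemctl is-enabled zfs-import-cache.service zfs-mount.service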
 
3. Move all containers to this storage.
It doesn't seem so, as the subvols were created on the rpool.

Can you please post a 'zfs list', 'zpool list' and a 'zfs get all samsung-pm863a-S2NFNXAG901406L'?
 

1.
prx-06:~$ sudo -i
root@hst-cl-prx-06:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 7.82G 891G 96K /rpool
rpool/ROOT 6.53G 891G 96K /rpool/ROOT
rpool/ROOT/pve-1 6.53G 891G 6.49G /
rpool/data 1.24G 891G 96K /rpool/data
rpool/data/vm-6001-disk-0 1.24G 891G 1.24G -
samsung-pm863a-S2NFNXAG901404K 341G 520G 96K /samsung-pm863a-S2NFNXAG901404K
samsung-pm863a-S2NFNXAG901404K/vm-6008-disk-0 341G 520G 337G -
samsung-pm863a-S2NFNXAG901406L 119G 741G 184K /samsung-pm863a-S2NFNXAG901406L
samsung-pm863a-S2NFNXAG901406L/subvol-601-disk-0 420M 14.6G 420M /samsung-pm863a-S2NFNXAG901406L/subvol-601-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-602-disk-0 1.72G 18.3G 1.72G /samsung-pm863a-S2NFNXAG901406L/subvol-602-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-603-disk-0 78.0G 12.0G 78.0G /samsung-pm863a-S2NFNXAG901406L/subvol-603-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-604-disk-0 1.27G 6.73G 1.27G /samsung-pm863a-S2NFNXAG901406L/subvol-604-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-605-disk-0 1.20G 38.8G 1.20G /samsung-pm863a-S2NFNXAG901406L/subvol-605-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-606-disk-0 8.37G 31.7G 8.27G /samsung-pm863a-S2NFNXAG901406L/subvol-606-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-607-disk-0 623M 24.4G 623M /samsung-pm863a-S2NFNXAG901406L/subvol-607-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-608-disk-0 1.89G 18.1G 1.86G /samsung-pm863a-S2NFNXAG901406L/subvol-608-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-609-disk-0 1.78G 8.30G 1.70G /samsung-pm863a-S2NFNXAG901406L/subvol-609-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-610-disk-0 720M 9.33G 683M /samsung-pm863a-S2NFNXAG901406L/subvol-610-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-611-disk-0 1.45G 8.55G 1.45G /samsung-pm863a-S2NFNXAG901406L/subvol-611-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-612-disk-0 1.13G 18.9G 1.06G /samsung-pm863a-S2NFNXAG901406L/subvol-612-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-613-disk-0 432M 9.58G 432M /samsung-pm863a-S2NFNXAG901406L/subvol-613-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-614-disk-0 1.37G 23.6G 1.37G /samsung-pm863a-S2NFNXAG901406L/subvol-614-disk-0
samsung-pm863a-S2NFNXAG901406L/vm-6001-disk-0 16.5G 741G 16.1G -
samsung-pm863a-S2NFNXAG901406L/vm-6010-disk-0 2.57G 741G 2.54G -

2.
prx-06:~# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 928G 7.82G 920G - - 0% 0% 1.00x ONLINE -
samsung-pm863a-S2NFNXAG901404K 888G 341G 547G - - 2% 38% 1.00x ONLINE -
samsung-pm863a-S2NFNXAG901406L 888G 119G 769G - - 0% 13% 1.00x ONLINE -

3.
prx-06:~# zfs get all samsung-pm863a-S2NFNXAG901406L
NAME PROPERTY VALUE SOURCE
samsung-pm863a-S2NFNXAG901406L type filesystem -
samsung-pm863a-S2NFNXAG901406L creation Thu Jul 25 13:54 2019 -
samsung-pm863a-S2NFNXAG901406L used 119G -
samsung-pm863a-S2NFNXAG901406L available 741G -
samsung-pm863a-S2NFNXAG901406L referenced 184K -
samsung-pm863a-S2NFNXAG901406L compressratio 1.35x -
samsung-pm863a-S2NFNXAG901406L mounted yes -
samsung-pm863a-S2NFNXAG901406L quota none default
samsung-pm863a-S2NFNXAG901406L reservation none default
samsung-pm863a-S2NFNXAG901406L recordsize 128K default
samsung-pm863a-S2NFNXAG901406L mountpoint /samsung-pm863a-S2NFNXAG901406L default
samsung-pm863a-S2NFNXAG901406L sharenfs off default
samsung-pm863a-S2NFNXAG901406L checksum edonr local
samsung-pm863a-S2NFNXAG901406L compression lz4 local
samsung-pm863a-S2NFNXAG901406L atime off local
samsung-pm863a-S2NFNXAG901406L devices on default
samsung-pm863a-S2NFNXAG901406L exec on default
samsung-pm863a-S2NFNXAG901406L setuid on default
samsung-pm863a-S2NFNXAG901406L readonly off default
samsung-pm863a-S2NFNXAG901406L zoned off default
samsung-pm863a-S2NFNXAG901406L snapdir hidden default
samsung-pm863a-S2NFNXAG901406L aclinherit restricted default
samsung-pm863a-S2NFNXAG901406L createtxg 1 -
samsung-pm863a-S2NFNXAG901406L canmount on default
samsung-pm863a-S2NFNXAG901406L xattr sa local
samsung-pm863a-S2NFNXAG901406L copies 1 default
samsung-pm863a-S2NFNXAG901406L version 5 -
samsung-pm863a-S2NFNXAG901406L utf8only off -
samsung-pm863a-S2NFNXAG901406L normalization none -
samsung-pm863a-S2NFNXAG901406L casesensitivity sensitive -
samsung-pm863a-S2NFNXAG901406L vscan off default
samsung-pm863a-S2NFNXAG901406L nbmand off default
samsung-pm863a-S2NFNXAG901406L sharesmb off default
samsung-pm863a-S2NFNXAG901406L refquota none default
samsung-pm863a-S2NFNXAG901406L refreservation none default
samsung-pm863a-S2NFNXAG901406L guid 15683111786726614252 -
samsung-pm863a-S2NFNXAG901406L primarycache all local
samsung-pm863a-S2NFNXAG901406L secondarycache all default
samsung-pm863a-S2NFNXAG901406L usedbysnapshots 0B -
samsung-pm863a-S2NFNXAG901406L usedbydataset 184K -
samsung-pm863a-S2NFNXAG901406L usedbychildren 119G -
samsung-pm863a-S2NFNXAG901406L usedbyrefreservation 0B -
samsung-pm863a-S2NFNXAG901406L logbias latency default
samsung-pm863a-S2NFNXAG901406L objsetid 54 -
samsung-pm863a-S2NFNXAG901406L dedup off default
samsung-pm863a-S2NFNXAG901406L mlslabel none default
samsung-pm863a-S2NFNXAG901406L sync standard default
samsung-pm863a-S2NFNXAG901406L dnodesize legacy default
samsung-pm863a-S2NFNXAG901406L refcompressratio 1.00x -
samsung-pm863a-S2NFNXAG901406L written 0 -
samsung-pm863a-S2NFNXAG901406L logicalused 158G -
samsung-pm863a-S2NFNXAG901406L logicalreferenced 83K -
samsung-pm863a-S2NFNXAG901406L volmode default default
samsung-pm863a-S2NFNXAG901406L filesystem_limit none default
samsung-pm863a-S2NFNXAG901406L snapshot_limit none default
samsung-pm863a-S2NFNXAG901406L filesystem_count none default
samsung-pm863a-S2NFNXAG901406L snapshot_count none default
samsung-pm863a-S2NFNXAG901406L snapdev hidden default
samsung-pm863a-S2NFNXAG901406L acltype off default
samsung-pm863a-S2NFNXAG901406L context none default
samsung-pm863a-S2NFNXAG901406L fscontext none default
samsung-pm863a-S2NFNXAG901406L defcontext none default
samsung-pm863a-S2NFNXAG901406L rootcontext none default
samsung-pm863a-S2NFNXAG901406L relatime off local
samsung-pm863a-S2NFNXAG901406L redundant_metadata all default
samsung-pm863a-S2NFNXAG901406L overlay off default
samsung-pm863a-S2NFNXAG901406L encryption off default
samsung-pm863a-S2NFNXAG901406L keylocation none default
samsung-pm863a-S2NFNXAG901406L keyformat none default
samsung-pm863a-S2NFNXAG901406L pbkdf2iters 0 default
samsung-pm863a-S2NFNXAG901406L special_small_blocks 0 default
 
Is multipath working correctly? It looks like the pool is mounted twice, since you had to export the pool to make the subvols visible.
 

Now I'm not sure that multipath is working correctly.
I have a suspicion that the pool is imported before multipath starts.
Is there some way to figure it out, or should I blacklist the disk in multipath and restart the host?
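A rough way to check might be to look at whether both raw paths to the SSD are still visible as separate disks, and in which order multipathd and the ZFS import ran at boot (the WWN is the one from the zpool create above):

lsblk -o NAME,SIZE,WWN,MOUNTPOINT            # two whole disks with the same WWN = both raw paths visible
ls -l /dev/disk/by-id/ | grep -i 40066bee    # which device node the wwn-0x... link currently points at
journalctl -b -u multipathd -u zfs-import-cache --no-pager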
 
Multipath should blacklist one disk, so the system only sees one. But as the second mount was empty, I suppose something is not right with it, since the same content should have been presented in both cases. The dmesg output may show something about it.
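A couple of example filters that might surface such messages (the patterns are only a starting point):

dmesg | grep -iE 'zfs|device-mapper|multipath'
journalctl -k -b --no-pager | grep -iE 'sd[a-z]|wwn'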
 

I've removed multipath completely. No effect.
Something at startup creates the path, and then the pool can't be mounted into it.

prx-06:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 5.5T 0 disk
sdb 8:16 0 5.5T 0 disk
sdc 8:32 0 5.5T 0 disk
sdd 8:48 0 5.5T 0 disk
sde 8:64 0 5.5T 0 disk
sdf 8:80 0 5.5T 0 disk
sdg 8:96 0 5.5T 0 disk
sdh 8:112 0 5.5T 0 disk
sdi 8:128 0 5.5T 0 disk
sdj 8:144 0 5.5T 0 disk
sdk 8:160 0 5.5T 0 disk
sdl 8:176 0 5.5T 0 disk
sdm 8:192 0 5.5T 0 disk
sdn 8:208 0 5.5T 0 disk
sdo 8:224 0 5.5T 0 disk
sdp 8:240 0 5.5T 0 disk
sr0 11:0 1 1024M 0 rom
sdq 65:0 0 5.5T 0 disk
sdr 65:16 0 5.5T 0 disk
sds 65:32 0 5.5T 0 disk
sdt 65:48 0 5.5T 0 disk
sdu 65:64 0 5.5T 0 disk
sdv 65:80 0 5.5T 0 disk
sdw 65:96 0 5.5T 0 disk
sdx 65:112 0 5.5T 0 disk
sdy 65:128 0 931.5G 0 disk
├─sdy1 65:129 0 1007K 0 part
├─sdy2 65:130 0 512M 0 part
└─sdy3 65:131 0 930.5G 0 part
sdz 65:144 0 931.5G 0 disk
├─sdz1 65:145 0 1007K 0 part
├─sdz2 65:146 0 512M 0 part
└─sdz3 65:147 0 930.5G 0 part
sdaa 65:160 0 894.3G 0 disk
sdab 65:176 0 894.3G 0 disk
sdac 65:192 0 894.3G 0 disk
sdad 65:208 0 745.2G 0 disk
├─sdad1 65:209 0 745.2G 0 part
└─sdad9 65:217 0 8M 0 part
zd0 230:0 0 200G 0 disk
├─zd0p1 230:1 0 195.6G 0 part
└─zd0p2 230:2 0 4.4G 0 part
zd16 230:16 0 200G 0 disk
├─zd16p1 230:17 0 195.6G 0 part
└─zd16p2 230:18 0 4.4G 0 part
zd32 230:32 0 15G 0 disk
├─zd32p1 230:33 0 1M 0 part
└─zd32p2 230:34 0 15G 0 part
zd48 230:48 0 400G 0 disk
└─zd48p1 230:49 0 390.6G 0 part


prx-06:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 7.87G 891G 96K /rpool
rpool/ROOT 6.58G 891G 96K /rpool/ROOT
rpool/ROOT/pve-1 6.58G 891G 6.49G /
rpool/data 1.24G 891G 96K /rpool/data
rpool/data/vm-6001-disk-0 1.24G 891G 1.24G -
samsung-pm863a-S2NFNXAG901404K 341G 519G 96K /samsung-pm863a-S2NFNXAG901404K
samsung-pm863a-S2NFNXAG901404K/vm-6008-disk-0 341G 519G 337G -
samsung-pm863a-S2NFNXAG901406L 120G 741G 184K /samsung-pm863a-S2NFNXAG901406L
samsung-pm863a-S2NFNXAG901406L/subvol-601-disk-0 420M 14.6G 420M /samsung-pm863a-S2NFNXAG901406L/subvol-601-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-602-disk-0 1.78G 18.3G 1.72G /samsung-pm863a-S2NFNXAG901406L/subvol-602-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-603-disk-0 78.0G 12.0G 78.0G /samsung-pm863a-S2NFNXAG901406L/subvol-603-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-604-disk-0 1.27G 6.73G 1.27G /samsung-pm863a-S2NFNXAG901406L/subvol-604-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-605-disk-0 1.20G 38.8G 1.20G /samsung-pm863a-S2NFNXAG901406L/subvol-605-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-606-disk-0 8.37G 31.7G 8.27G /samsung-pm863a-S2NFNXAG901406L/subvol-606-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-607-disk-0 623M 24.4G 623M /samsung-pm863a-S2NFNXAG901406L/subvol-607-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-608-disk-0 1.89G 18.1G 1.86G /samsung-pm863a-S2NFNXAG901406L/subvol-608-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-609-disk-0 1.78G 8.30G 1.70G /samsung-pm863a-S2NFNXAG901406L/subvol-609-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-610-disk-0 720M 9.33G 683M /samsung-pm863a-S2NFNXAG901406L/subvol-610-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-611-disk-0 1.45G 8.56G 1.44G /samsung-pm863a-S2NFNXAG901406L/subvol-611-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-612-disk-0 1.14G 18.9G 1.06G /samsung-pm863a-S2NFNXAG901406L/subvol-612-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-613-disk-0 432M 9.58G 432M /samsung-pm863a-S2NFNXAG901406L/subvol-613-disk-0
samsung-pm863a-S2NFNXAG901406L/subvol-614-disk-0 1.42G 23.6G 1.37G /samsung-pm863a-S2NFNXAG901406L/subvol-614-disk-0
samsung-pm863a-S2NFNXAG901406L/vm-6001-disk-0 16.5G 741G 16.1G -
samsung-pm863a-S2NFNXAG901406L/vm-6010-disk-0 2.67G 741G 2.54G -

prx-06:~# ls -la /samsung-pm863a-S2NFNXAG901406L/
total 9
drwxr-xr-x 2 root root 2 Jul 25 13:54 .
drwxr-xr-x 24 root root 24 Jul 26 06:26 ..

All containers stopped, as before

prx-06:~# pct list
VMID Status Lock Name
601 stopped vps
602 stopped vs.vps
603 stopped 23.kg
604 stopped vm-djng-01
605 stopped www.myweb
606 stopped www.testing
607 stopped dns1
608 stopped ga
609 stopped ma.to
610 stopped vps-01.
611 stopped vps-01.
612 stopped vps-02.
613 stopped dream
614 stopped dental

prx-06:~# zpool status
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0 days 00:05:54 with 0 errors on Sun Jul 14 06:29:56 2019
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdz3 ONLINE 0 0 0
sdy3 ONLINE 0 0 0

errors: No known data errors

pool: samsung-pm863a-S2NFNXAG901404K
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
samsung-pm863a-S2NFNXAG901404K ONLINE 0 0 0
wwn-0x5002538c40066bec ONLINE 0 0 0

errors: No known data errors

pool: samsung-pm863a-S2NFNXAG901406L
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
samsung-pm863a-S2NFNXAG901406L ONLINE 0 0 0
wwn-0x5002538c40066bee ONLINE 0 0 0
 
Can you please post those things in code tags? It will keep the formatting; otherwise it is just hard to read.
I've removed multipath completely. No effect.
Something at startup creates the path, and then the pool can't be mounted into it.
The point I was trying to make is that with multipath the same disk is presented twice to the system. The multipath tools blacklist one path, so the system only has one disk. That said, without the multipath tools the contents should be exactly the same on both presented disks, but one of them is empty.

With the multipath tool you can check the topology and get more information about those disks.
 

I understand that, so I deleted multipath.
What should I do now? Disable the services that mount ZFS on boot, one by one?
 
You need the multipath tools if you have disks presented via multiple paths. If the pool is found twice on the system, it will always be mounted twice.
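For reference, a minimal multipath setup could look roughly like this; the package name, the settings, and the final import step are only a sketch and depend on the enclosure:

# /etc/multipath.conf - minimal example, not a tuned config
defaults {
    user_friendly_names no   # keep WWID-based names such as /dev/mapper/35002538c40066bee
    find_multipaths     yes
}

apt install multipath-tools
systemctl enable --now multipathd
multipath -ll                                  # should show one map with two paths for the SSD
zpool import -d /dev/mapper samsung-pm863a-S2NFNXAG901406L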
 
