Proxmox zfs lost all data

gamera

New Member
Sep 26, 2016
Proxmox 4.2. I created a cluster with another server and lost all the data; I followed the official manual. The local-zfs storage shows 0 bytes.

Code:
zfs list -t all -r -o name,used,available,referenced,quota,refquota,mountpoint wdpool
output: cannot open 'wdpool': dataset does not exist

Code:
zfs list
output: no datasets available

Code:
zfs mount -a
output is empty


And in the web GUI I see:
local-zfs active: no
 
we don't do anything storage related when joining a cluster (except dropping the storage definitions in /etc/pve/storage.cfg on the joining node), so this can't be a consequence of clustering.
 
What about zpool import and zpool status -v
Code:
root@proxmox-1:~# zpool import
no pools available to import
root@proxmox-1:~# zpool status -v
no pools available
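If the disks are still present but the pool is not found by a plain scan, it can sometimes be located by pointing zpool import at a device directory explicitly. A hedged sketch (the pool name wdpool comes from the first post; the device directory is the usual stable-name location, not confirmed for this system):

Code:
# search for importable pools using stable device names
zpool import -d /dev/disk/by-id
# if the pool shows up in the listing, import it by name
zpool import -d /dev/disk/by-id wdpool

If even this finds nothing, the pool labels themselves are gone and the problem is below ZFS (disk, controller, or partitioning level).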

But this problem only happens when I create a cluster; I have tested this 3 times.
 
is it possible you are doing the following

node A with ZFS
node B without ZFS

create cluster on node A
add node B to cluster

in that case, your storage configuration says that a storage local-zfs with ZFS is available on all nodes, which is of course not true. If your nodes are not identical (storage-wise), you need to adapt the storage configuration for the node that has joined (because the old storage configuration of that node is replaced with the one from the cluster when joining) - in this case by limiting local-zfs to node A and adding whatever storage node B has as additional storage (limited to node B).
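A minimal /etc/pve/storage.cfg sketch for that situation, assuming node names nodeA and nodeB and an LVM-thin storage on node B (both are illustrative assumptions, not taken from this thread):

Code:
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes nodeA

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes nodeB

The key part is the "nodes" line, which restricts each storage definition to the node(s) that actually have it.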
 
Is it possible to restore the data? Also, node A's storage is ZFS on software RAID 1, and node B's is ZFS on hardware RAID 10. Is it a problem if one storage is on software RAID 1 and the other on hardware RAID 10?
 
I already said this above, but I'll make it extra clear:

joining a cluster does not do anything with your storage, except that the storage configuration of the existing cluster is used instead of the old local one. The data, VM images, ... are not modified at all. If you had a zpool on your joining node before joining, it is still there after joining - but it might not be configured in PVE's storage.cfg anymore. If your zpool actually disappears when joining a cluster, you are almost certainly doing something very wrong.
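In the case described above - pool intact, only the storage definition lost - re-adding it is just a matter of recreating the entry, either by editing /etc/pve/storage.cfg or via pvesm. A sketch, assuming the pool is named wdpool and the node is proxmox-1 (both taken from earlier in this thread; the storage ID is an example):

Code:
pvesm add zfspool local-zfs -pool wdpool -content images,rootdir -nodes proxmox-1
pvesm status

This only registers the existing pool with PVE; it does not touch the data on it.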
 
Node B in hardware raid 10 zfs

brrrrrrrr
ZFS really hates hardware RAID; it prefers to have direct access to each disk. If you are using a BBWC on the hardware controller, the risk of losing data is really high, as ZFS assumes that data is actually written to disk when it writes. You risk having multiple caches: the write log cache managed by ZFS and the hardware cache.

Additionally, Proxmox doesn't support any software RAID except the one provided by ZFS. I think that is because ZFS does its best managing disks on its own without a hardware RAID, and ZFS on top of hardware RAID is risky.
 
And you advise me to use software RAID?)
 
Alessandro is right about ZFS. Every page on the internet states that ZFS wants, at most, a hardware RAID controller in IT mode providing "real" JBOD; otherwise ZFS's internal consistency cannot be guaranteed.

Please also post the output of fdisk -l and lsblk on the server with the missing ZFS pool.
 
