> The Linux container support for OpenZFS 2.2 includes idmapped mounts in a user namespace, OverlayFS support, and Linux namespace delegation support.
It's a pity as there seem to be some very useful features in 2.2 - but I also drew a blank.
Hopefully it'll get a full release soon and supported packages should start appearing.
I know it's not just packages you need to update - I'm following the ZFS compilation guide, building from the tar.gz (which isn't, incidentally, a kernel recompile - though it wouldn't be an issue as far as I'm concerned even if it were).
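For reference, the tarball build looks roughly like this - a sketch based on the OpenZFS build documentation; the exact configure options and available `deb` targets can vary between releases, so check the docs for the version you're building:

```shell
# Hedged sketch of building OpenZFS from a release tarball.
# Version number is illustrative - substitute the tarball you downloaded.
tar xzf zfs-2.2.0.tar.gz
cd zfs-2.2.0
./configure              # release tarballs ship a pre-generated configure
make -s -j"$(nproc)"
# Produce native Debian packages (the libnvpair3_*.deb files mentioned
# below) instead of running 'make install' directly:
make native-deb
```

Building packages rather than `make install`-ing keeps the files under dpkg's control, which matters on a Proxmox system where ZFS is already installed from packages.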
Has anyone tried upgrading to the latest bleeding-edge ZFS?
I get quite far, but I run into problems when trying to install:
```
Preparing to unpack libnvpair3_2.2.0-0_amd64.deb ...
Unpacking libnvpair3 (2.2.0-0) ...
Replaced by files in installed package libnvpair3linux (2.1.12-pve1) ...
dpkg...
```
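The conflict can be inspected with dpkg before deciding how to proceed - a hedged sketch (package and file names are taken from the error above; the `--force-overwrite` route is a blunt instrument and may break the Proxmox-supplied ZFS tooling):

```shell
# See which installed package owns the conflicting files
# (libnvpair3linux is Proxmox's build of the same library).
dpkg -S libnvpair.so

# Option A: remove the Proxmox-suffixed package first. This may pull in
# removal of other zfs*linux packages as dependents - review the list
# apt prints before confirming.
apt remove libnvpair3linux

# Option B: let dpkg overwrite the conflicting files (risky - the two
# packages will then both claim the same files).
dpkg -i --force-overwrite libnvpair3_2.2.0-0_amd64.deb
```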
Good to know; I presume, though, that after a 'hard reset' those non-HA VMs would still have been down.
Would it not be possible to implement this so that taking everything down was not required?
At the very least, a warning in the shutdown dialog might be in order. I seem not to be the...
Thanks for the docs link. We'll be combing through that, as the plan is to upgrade to a proper 3-node cluster as soon as the box arrives.
I guess the behaviours that are surprising are:
- that it took down VMs and CTs that are not configured for HA at all.
- that there is no warning in the UI...
So yes, it seems to be related to this.
But: we're not exactly using HA. We have a single VM (that is on proxmox1) set up as an HA resource as an experiment.
Everything else is VMs / CTs living on one or the other box using no HA features at all.
I can understand, perhaps, why that single...
We have a 2-node cluster -- proxmox1, proxmox2.
Using the web UI to 'shut down' proxmox2 (in order to add more memory) caused *ALL* VMs to shut down, without warning.
Why did it do this?
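If HA was ever active, one common explanation is quorum: with one node of a 2-node cluster down, the cluster loses quorum, and the HA watchdog can fence the surviving node - taking all its guests with it. A hedged sketch of checking this with Proxmox's own tools (commands from the standard `pvecm` utility; whether they apply depends on your cluster's actual state):

```shell
# Check cluster membership and quorum state on the surviving node.
pvecm status

# For planned maintenance on a 2-node cluster, expected votes can be
# lowered so the remaining node keeps quorum. Use with care - this
# defeats the point of quorum and risks split-brain if the other node
# comes back while partitioned.
pvecm expected 1
```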