A configuration question -
I recently acquired a new Threadripper Pro workstation. I'm trying to do what this person did, but I have a lot of existing ZFS data on my old workstation. So: what's the best way to move my existing ZFS storage off my old, soon-to-be-retired box, and the "best" way to access it from a PVE VM on this new workstation?
First, I plan to P2V my old-workstation daily driver (Kubuntu) into a new PVE VM with GPU passthrough. I don't have a NAS, and unfortunately my existing PVE nodes don't have enough spare capacity to migrate to, nor enough new storage to migrate onto either.
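(For the P2V step itself, my rough plan is to image the old OS disk and import it; VM ID 300, the device names, and all paths below are placeholders:

    # Old box, booted from a live USB so the disk is quiescent:
    # image the OS disk to a file on scratch space
    dd if=/dev/nvme0n1 of=/mnt/scratch/kubuntu.raw bs=1M status=progress

    # PVE host: create the target VM, import the image, make it bootable
    qm create 300 --name kubuntu-p2v --memory 16384 --cores 8 --net0 virtio,bridge=vmbr0
    qm importdisk 300 /mnt/scratch/kubuntu.raw local-zfs
    qm set 300 -scsi0 local-zfs:vm-300-disk-0 -boot order=scsi0

GPU passthrough config comes after that.)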
(This will be the 3rd node in a PVE cluster. This PVE VM and this data will never migrate to other cluster nodes. Would prefer 3 nodes to be the max if possible....)
I have quite a few HDDs in my old workstation with a lot of data: large home dirs and LXD containers, all in ZFS pools (and yes, the LXD containers will eventually migrate to PVE CTs... but later). Zpool versions/feature flags should be compatible between (K)Ubuntu and PVE 7.x (see the check below).
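(I plan to sanity-check that before moving anything, roughly like this, with "tank" standing in for my actual pool names:

    # Old Kubuntu box: note the ZFS version and which feature flags are active
    zfs version
    zpool get all tank | grep feature@

    # Export cleanly before pulling the drives
    zpool export tank

    # New PVE host, after installing the HDDs: scan for and import the pool
    zpool import        # with no pool name, lists importable pools it can see
    zpool import tank

If the PVE host shows the pool as importable without complaining about unsupported features, I should be in the clear.)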
Question: what's the best way to move and then access the current ZFS data from a new PVE VM?
1) Move the HDDs and pass the ZFS HDDs through to the PVE VM, taking a "double ARC" hit? ARC in the VM, and ARC on the PVE host (seems a waste). (Rough sketch after the list.)
I already take a "double ARC" hit with a pfSense VM on another node (although its ZFS is tuned), and that's fine for "light use". ZFS in this new VM would NOT be "light" :-(
2) Move the HDDs to a PVE-level zpool (zpool export/import), then use a virtio-9p (Plan 9) passthrough from the host into the PVE VM? (I've read performance is *really* not good.) (Rough sketch after the list.)
(If this were a PVE CT, I'd bind-mount the dir and call it a day... but it's a VM.)
3) Move the HDDs to a PVE-level zpool, then set up NFS to the Kubuntu PVE VM on the same node, i.e. pretend the PVE host is an NFS server and the VM is its NFS client. (Rough sketch after the list.) I'd have to traverse TWO TCP/IP stacks in the process, but probably much better than a "double ARC" hit...? I've never tried this, so what kind of performance could I expect?
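For option 1, what I have in mind (VM ID 300 and the disk IDs are placeholders) is whole-disk passthrough by stable ID, plus capping the host ARC so the two caches don't fight over RAM:

    # PVE host: pass each ZFS HDD into the VM via /dev/disk/by-id (placeholder IDs)
    qm set 300 -scsi1 /dev/disk/by-id/ata-WDC_WD80EFAX_XXXXXXXX
    qm set 300 -scsi2 /dev/disk/by-id/ata-WDC_WD80EFAX_YYYYYYYY

    # PVE host: cap the host ARC (e.g. 8 GiB) since the guest ARC does the real caching
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # apply live, too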
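For option 2, as far as I can tell PVE has no first-class 9p support, so I'd have to feed raw QEMU args to the VM; the mount tag, dataset path, and VM ID below are all placeholders:

    # PVE host: attach a virtio-9p share of the dataset to VM 300
    qm set 300 -args '-fsdev local,id=fs0,path=/tank/data,security_model=mapped -device virtio-9p-pci,fsdev=fs0,mount_tag=hostdata'

    # Kubuntu guest (needs the 9p/9pnet_virtio modules): mount the share
    mount -t 9p -o trans=virtio,version=9p2000.L hostdata /mnt/hostdata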
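For option 3, the setup itself seems simple enough; assuming the host's bridge IP is 192.168.1.10 and the VM is 192.168.1.50 (placeholders):

    # PVE host: install the NFS server and export the dataset to the VM only
    apt install nfs-kernel-server
    echo '/tank/data 192.168.1.50(rw,no_subtree_check,no_root_squash)' >> /etc/exports
    exportfs -ra

    # Kubuntu guest: mount it over the bridge
    mount -t nfs 192.168.1.10:/tank/data /mnt/data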
I am thinking option 3 would be best... Thoughts? Other options?
Lastly, I will be adding more HDDs to this new node (and to other nodes) in the future, but it's a slow process...
Any other ideas or input would be appreciated, especially if I'm missing the obvious (which I could be).
Thanks,
Bob