shared storage by zfs-dataset not zpool

tpham

New Member
Apr 23, 2017
Hi,

I want to do live migration, but I notice that Proxmox enforces that, when using a ZFS pool storage, the zpool and dataset names have to be the same on both Proxmox servers. I would like to know if we can change that policy so that Proxmox would enforce one of the following policies instead:

1. Shared name

i.e.: my-local-zfs-share
server a: rpool/data
server b: spare_zpool/proxmox.storage

PS: It is not possible to create different local shares with the same name, so I think this is harder to achieve than method 2.



2. Dataset name:

Instead of using this config:

zfspool: my-local-zfs-share
        pool rpool/data
        content rootdir,images
        nodes proxmox0,pvespare
        sparse 0

we would have something like this:
zfsdataset: my-local-zfs-share
        dataset proxmox0:rpool/data pvespare:spare_zpool/proxmox.storage
        content rootdir,images
        nodes proxmox0,pvespare
        sparse 0


I am not sure whether any of the developers have considered this limitation of requiring the same zpool and dataset name for shared storage. As for me, I see some limitations in this design.


Thanks,

TP
 
Hi,

You can migrate to a different storage, but you have to use the command line.

See:
Code:
qm help migrate
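
For example, a minimal sketch of such a migration, assuming a Proxmox VE version whose qm migrate supports the --targetstorage option (the VM ID 100, the target node pvespare, and the storage name other-zfs-storage are only placeholders; check qm help migrate for the exact options available on your version):
Code:
# live-migrate VM 100 to node pvespare and place its disks on a
# different storage on the target node (names are examples only)
qm migrate 100 pvespare --online --targetstorage other-zfs-storage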

tpham said: I am not sure whether any of the developers have considered this limitation of requiring the same zpool and dataset name for shared storage. As for me, I see some limitations in this design.
This is a completely different approach and does not fit into the Proxmox VE storage concept.
 
