Hello,
In my company we have three Proxmox VE servers running PVE 4.2 in a cluster.
In order to upgrade to 4.4 and upgrade the storage (bigger disks in the RAID array), we freed one of the servers of its VMs (migrated them to the other two), replaced the disks, installed PVE 4.4, and reconfigured the network as before.
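(For reference, we moved the guests with the standard qm migrate command, roughly like this; the VM IDs and node name are just examples:)
Code:
# live migration (works when the disks are on shared storage)
qm migrate 100 pve2 --online
# offline migration for guests with local disks
qm migrate 101 pve2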
Now I have a question:
On this freshly installed server, the storage settings are different (because of the new RAID storage).
The Volume Group name and Logical Volume name are not the same, which means that in storage.cfg we have to declare settings that differ from the other two nodes (which are still in the cluster).
I added the new server to the cluster, but once it joined, the cluster's storage settings overrode the settings in /etc/pve/storage.cfg on the new server, which broke the local (RAID) storage on this node, because the VG name and LV name are not the same.
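(For reference, the VG and thin pool names on each node can be listed with the standard LVM commands, run as root:)
Code:
# list volume groups (pve on the old nodes, pveraid10 on the new one)
vgs
# list logical volumes, including the thin pools (data / dataraid10)
lvs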
Content of the file /etc/pve/storage.cfg on the PVE servers that are already in the cluster:
Code:
cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        shared
        maxfiles 2

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: stor01-nfs1-vault1
        server stor-01.domain.local
        export /FS/vault1
        path /mnt/pve/stor01-nfs1-vault1
        content vztmpl,images,backup,iso,rootdir
        options vers=3
        maxfiles 4
Content of the file /etc/pve/storage.cfg on the new PVE server before it joined the cluster:
Code:
cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        shared
        maxfiles 2

lvmthin: local-lvm
        thinpool dataraid10
        vgname pveraid10
        content images,rootdir

nfs: stor01-nfs1-vault1
        server stor-01.domain.local
        export /FS/vault1
        path /mnt/pve/stor01-nfs1-vault1
        content vztmpl,images,backup,iso,rootdir
        options vers=3
        maxfiles 4
When I joined the cluster, the lvmthin storage named "local-lvm" that contained the custom "raid10" settings disappeared, and its settings became the same as the cluster's (thinpool = data, vgname = pve).
How can I have custom storage settings on one of the nodes? Is it only the shared storages that must have the same names across the nodes, or the local storages as well? And if custom settings are possible on some nodes, how do we make them persist instead of being overwritten by the cluster's settings?
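From the documentation I get the impression that a storage entry can be restricted to specific nodes with the "nodes" property, so I wonder whether declaring two separate lvmthin storages, each limited to the nodes where its VG actually exists, would solve this. Something like the following (pve1, pve2 and pve3 are just placeholders for our real hostnames):
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes pve1,pve2

lvmthin: local-lvm-raid10
        thinpool dataraid10
        vgname pveraid10
        content images,rootdir
        nodes pve3
Would that keep the raid10 pool usable on the new node only, without confusing the other two?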
If I change the content of the file manually on the new server, will the change be propagated to the other two servers that are still in the cluster, making them lose their local-lvm settings? That could cause a huge failure of the VMs that are stored locally on those servers.
Thanks
Thomas