I have a 4-node Proxmox 9.1.5 cluster where each node has a 300 GB boot disk and a second 1 TB SSD. On every node the second disk is defined as a directory (DIR) storage named dir1. /etc/pve/storage.cfg shows the following:
dir: dir1
    path /mnt/pve/dir1
    content snippets,backup,vztmpl,images,rootdir,iso
    is_mountpoint 1
    nodes pn1,pn2,pn3,pn4
I removed one node from the cluster, rebuilt it, and added it back. However, dir1 now shows as greyed out on that node. When I try to re-add the storage through the GUI as a directory named dir1, it says the name is already in use (or similar). From the CLI I created the mount point, partitioned and formatted the second disk, and I can mount it manually, roughly as shown below.
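This is approximately what I ran (the SSD shows up as /dev/sdb on this node, so the device name here is specific to my setup):

mkdir -p /mnt/pve/dir1            # mount point matching the path in storage.cfg
sgdisk -N 1 /dev/sdb              # create one partition spanning the whole disk
mkfs.ext4 /dev/sdb1               # format it (ext4, same as the other nodes)
mount /dev/sdb1 /mnt/pve/dir1     # mounting by hand like this works fine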
The mount does not survive a reboot, though. I copied the missing /etc/systemd/system/mnt-pve-dir1.mount unit from another node and updated the UUID to match this node's drive, but the storage is still greyed out after a reboot. What else is needed to get the drive back after a reboot?
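For reference, this is roughly what the copied unit looks like on the rebuilt node (I've put a placeholder where the actual UUID goes; everything else is as it was on the working node):

[Install]
WantedBy=multi-user.target

[Mount]
Options=defaults
Type=ext4
What=/dev/disk/by-uuid/<uuid-of-this-node's-data-partition>
Where=/mnt/pve/dir1

[Unit]
Description=Mount storage 'dir1' under /mnt/pve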