I added a 10 gig adapter to my NAS and want to make sure my cluster of Proxmox systems uses this connection. The NAS is still reachable on 192.168.1.10, and I added 192.168.1.99 as the 10 gig address.
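One thing I noticed while poking at this (my own check, not something definitive): both NAS addresses and my node sit in the same 192.168.1.0/24 network, so if the netmask really is /24, a single kernel route covers both. A trivial bash helper to confirm the addresses share a /24:

```shell
# Hypothetical helper: do two IPv4 addresses share the same /24?
# (Just compares the first three octets; assumes a /24 netmask.)
same_24() { [ "${1%.*}" = "${2%.*}" ] && echo same || echo different; }

same_24 192.168.1.3 192.168.1.10   # node vs. old 1 gig NAS address
same_24 192.168.1.3 192.168.1.99   # node vs. new 10 gig NAS address
```

Both print `same` here, which is what made me wonder whether the kernel is free to pick either path.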
My /etc/pve/storage.cfg looks like this, among other entries:
Code:
nfs: synology
export /volume1/proxmox
path /mnt/pve/synology
server 192.168.1.99
content iso,backup,snippets,rootdir,images,vztmpl
prune-backups keep-daily=2,keep-last=2,keep-monthly=2
Meanwhile, even after a reboot, the mount command returns (among many other mounts):
Code:
192.168.1.99:/volume1/proxmox on /mnt/pve/synology type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.3,local_lock=none,addr=192.168.1.10)
This mount is not found in /etc/fstab. When I move data between a cluster node and the NAS, I can see the traffic still goes over the .10 connection. Everything is online, but I want to understand why the new connection isn't being used even though I specified its IP address in the storage config.
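What tipped me off is the end of that mount line: the `addr=` option is the server address the kernel is actually talking to (.10 here), even though the mount source at the front shows .99. A quick sanity check I used to pull it out (assuming bash; the last `addr=` match is the real one, since `clientaddr=` also contains the substring):

```shell
# The mount line from above, stored for parsing.
line='192.168.1.99:/volume1/proxmox on /mnt/pve/synology type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.3,local_lock=none,addr=192.168.1.10)'

# grep -o prints every match; clientaddr= matches too, so take the last one,
# which is the server-side addr= option.
printf '%s\n' "$line" | grep -o 'addr=[0-9.]*' | tail -n1   # -> addr=192.168.1.10
```

So the kernel connected to .10 despite storage.cfg saying .99, which is exactly the mismatch I'm trying to understand.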