Hello community,
I'm just starting to play around with Proxmox VE in various configurations, including file-based shared storage (CIFS and NFS), and I've run into some problems with this kind of setup.
My lab is built as follows:
- a three-node Proxmox VE 8.4.1 hyperconverged cluster, where every node is connected to a single 10 GbE switch via a dual-10 GbE NIC bond
- file-based storage (a NetApp AFF A200) connected to the same switch via a 10 GbE bond, with CIFS and NFS shares exposed to the Proxmox cluster
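For reference, the two storages are defined roughly like this in /etc/pve/storage.cfg (storage names, server address, share/export paths and username below are illustrative, not the real ones):

```
cifs: netapp-cifs
        path /mnt/pve/netapp-cifs
        server 192.0.2.10
        share pve_cifs
        content images
        username pveuser

nfs: netapp-nfs
        path /mnt/pve/netapp-nfs
        server 192.0.2.10
        export /pve_nfs
        content images
```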
I did some testing on this setup and got the following results (a rough benchmark sketch follows the list):
- read/write I/O performance is good on both CIFS and NFS, but better on CIFS, with less load on the storage CPU
- write I/O performance while the VM has an active snapshot is bad on NFS and good on CIFS, with IOPS dropping sharply on NFS (even though the NetApp storage still shows high I/O activity)
- snapshot removal takes a long time with NFS, with the VM frozen and inaccessible from both console and network, while everything is much smoother and faster with CIFS
- cloning a Ceph-backed or locally hosted VM to the NFS storage works well, while the same VM cloned to the CIFS share ends up as a corrupted QCOW2 file (I/O error when accessing the file, even from the console)
- restoring a VM backup to both CIFS and NFS works as expected
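For anyone who wants to reproduce the read/write comparison, a fio job along these lines, run inside a test guest whose disk sits on the NFS or CIFS storage, is the kind of test meant above (job name, file, size and queue depth are illustrative):

```
# random-write test from inside the guest; all parameters are illustrative
fio --name=randwrite --filename=/root/fio-test.bin \
    --rw=randwrite --bs=4k --size=4G \
    --ioengine=libaio --direct=1 --iodepth=32 \
    --runtime=60 --time_based --group_reporting
```

Repeating the same job while the VM has an active snapshot is where the NFS IOPS drop described above shows up.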
I've tried changing the preallocation policy for the QCOW2 files, but both "full" options are unusable, as QCOW2 creation ends up with errors on both CIFS and NFS.
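One way to narrow that down, independent of the Proxmox layer, is to run qemu-img directly against the mounted shares; something along these lines (paths and size are just examples, using the illustrative storage names from the sketch above, and assuming "both full options" means preallocation=full and preallocation=falloc):

```
# try full preallocation straight on the NFS mount (any writable path on the mount works)
qemu-img create -f qcow2 -o preallocation=full \
    /mnt/pve/netapp-nfs/test-full.qcow2 32G

# and the falloc variant, the other "full"-style policy
qemu-img create -f qcow2 -o preallocation=falloc \
    /mnt/pve/netapp-nfs/test-falloc.qcow2 32G
```

If these fail with the same errors outside of PVE, the problem sits between qemu-img and the NFS/CIFS mount rather than in the Proxmox storage layer.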
For NFS I tried both the v3 and v4.x protocol versions, with the same results.
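For anyone reproducing the version switch: the protocol version can be pinned per NFS storage through its mount options in /etc/pve/storage.cfg (storage name and paths as in the earlier sketch), and `nfsstat -m` on a node shows which version each mount actually negotiated:

```
nfs: netapp-nfs
        path /mnt/pve/netapp-nfs
        server 192.0.2.10
        export /pve_nfs
        content images
        options vers=3
```

Swapping `vers=3` for `vers=4.2` (or `vers=4.1`) covers the 4.x runs.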
Are these behaviours expected on Proxmox VE with this kind of setup? They seem a bit strange to me, as NFS should be a better option than CIFS on UNIX systems.
Kind regards
Alberto