Hi all,
I'm a Proxmox VE community subscription user.
I have a newer server, a Dell R640 with 96GB of RAM and a Xeon Gold 6134. Storage is a Samsung PM1735 1.6TB NVMe SSD, 2x Intel 480GB SSDs in a ZFS mirror for boot, and 2x 2TB WD Gold HDDs in a ZFS mirror.
When a big disk operation such as a backup restore is running, the IO delay starts to build, the VMs complain about memory issues, and then they fail (bluescreen or lock up). These are mostly Windows VMs, and I have memory ballooning disabled on them. With all VMs booted, utilised memory normally sits around 60%, so there is 30-40GB of RAM free.
My guess is that ZFS is using too much RAM during these operations, especially when writing to slower media such as the WD Gold HDDs. I've tried limiting its RAM usage, but that hasn't seemed to make any difference.
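For context, the way I tried to cap it (assuming the usual ARC limit approach is what matters here; the 8GiB value is just the number I picked, not a recommendation) was roughly:

    # /etc/modprobe.d/zfs.conf - cap ARC at 8 GiB (8 * 1024^3 bytes)
    options zfs zfs_arc_max=8589934592

    # apply to the initramfs and reboot
    update-initramfs -u -k all

    # or change it at runtime without rebooting
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

Happy to be corrected if that's not the right way to do it, or if the ARC isn't actually the problem during restores.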
Has anyone seen similar issues who can help? The restore was from an NFS share on a 1G link, so it shouldn't saturate the host, which is connected at 10G.
Many thanks,