Assuming a setup with a separate boot disk (or disks) and ZFS storage pools for the actual VMs to live on, how much traffic, and especially write traffic, should generally be going to the root disk of a Proxmox server?
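In concrete terms, I'm asking about the sort of numbers something like this would show on a healthy install (a sketch, assuming the sysstat and smartmontools packages are installed; sda is just an example device name):

# Per-device throughput on the root disk, extended stats, refreshed every 5 seconds
iostat -dxm 5 sda

# Lifetime writes / wear as reported by the SSD itself
# (attribute names vary by vendor, e.g. Total_LBAs_Written or "Percentage Used")
smartctl -a /dev/sda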
We have two large, multi-core hypervisors with over 300 GB of memory each and lots of VMs. Originally they booted from a single SSD in disk slot 0, which presented as /dev/sda. Last week one of the boot disks failed, so we rebuilt the HV. Initially I tried to have it boot from a root ZFS mirror using /dev/sda and /dev/sdb, but although it installed fine, it then refused to boot. As the machine does have an LSI RAID card, I gave up on ZFS boot and created a RAID 1 array of the first two disks, which Proxmox sees as just /dev/sda, and installed to that.

It works and boots fine, but I get the impression the root file system is somewhat slower than when it was a single SSD, and I'm also noticing I/O wait showing up in top. I suspect that, due to dead/missing RAID controller batteries, the card has forced itself into write-through mode, which even with SSDs slows things down somewhat.
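If it is a MegaRAID-based LSI card, the current cache policy should be visible with storcli or MegaCli; a sketch, assuming storcli64/MegaCli64 are installed and the array is on controller 0 (controller and virtual drive numbering may differ on our boxes):

# Show the cache policy of the virtual drives; WT = write-through, WB = write-back
storcli64 /c0/vall show all | grep -i cache

# Equivalent check with the older MegaCli tool
MegaCli64 -LDGetProp -Cache -LAll -aAll

If that reports WT (write-through) because of the missing BBU, it would explain the slowdown.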
Because the other machine has a boot disk of similar age to the one that failed, we are planning to rebuild and reinstall that one too. I'm wondering whether, instead of creating a hardware RAID pair, it would make more sense to keep / on /dev/sda but farm /var (and any other heavily written partitions) off onto /dev/sdb, along the lines of the sketch below. That way the second disk could be replaced periodically without reinstalling the entire system, while the actual boot disk, seeing very little traffic, should give long-term endurance.
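Very roughly, the split I have in mind would look something like this after a normal install to /dev/sda (a sketch only: /dev/sdb1 is a placeholder, a UUID in fstab would be better in practice, and the copy would need to be done from rescue/single-user mode with services stopped):

# Format the second SSD and copy the existing /var onto it
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt
rsync -aHAX /var/ /mnt/
umount /mnt

# Mount it as /var from now on (keeping the old copy until the new one is proven)
echo '/dev/sdb1 /var ext4 defaults 0 2' >> /etc/fstab
mv /var /var.old && mkdir /var
mount /var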
So I'm interested to know which areas of the Proxmox root file system, if any, are heavily written to in the course of normal operation.
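To find out empirically which processes are doing the writing on the current machines, something like this should do it (assuming the iotop and sysstat packages are installed):

# Accumulated per-process I/O; -o hides idle processes, -P groups by process
iotop -aoP

# Per-process read/write rates, sampled every 5 seconds
pidstat -d 5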