Hello everyone,
I recently got a dedicated server and installed Proxmox VE 7.4-17 on it. (I am planning to upgrade to Proxmox VE 8.1 as soon as possible, but an issue on the hosting provider's side makes this impossible at the moment.)
I am currently weighing a couple of possible configurations for production/deployment and would like some thoughts/recommendations on them.
I am planning on hosting a couple of VMs with Debian 12/11 and Debian 12 + DirectAdmin.
All the VMs are for my personal use, so I am not worried about hitting 100% uptime, but reliability and uptime are still important to me.
The first possible configuration is:
- Proxmox VE on the 1TB SSD.
- VM disks and Cloud-Init configs on one 2TB SSD.
- Local backups of the VMs on one 2TB SSD.
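If I go with the first layout, I assume the backup disk would be registered as a directory storage roughly like this (the storage ID "backup2tb", the mount path, and VM ID 100 are placeholders of my own, and the disk is assumed to be formatted and mounted already):

```shell
# Register the mounted 2TB SSD as a directory storage that only holds backups
pvesm add dir backup2tb --path /mnt/backup2tb --content backup

# Example vzdump run: snapshot-mode backup of VM 100 to that storage,
# compressed with zstd, keeping only the last 3 backups
vzdump 100 --storage backup2tb --mode snapshot --compress zstd --prune-backups keep-last=3
```

A scheduled version of the same job could then be set up under Datacenter > Backup in the web UI.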
The second possible configuration is:
- Proxmox VE on the 1TB SSD
- Putting both 2TB SSDs in a ZFS software mirror (RAID 1; with only two disks, RAID 10 is not possible).
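For the second layout, I imagine the mirror would be created along these lines (the by-id paths are placeholders for my drives' actual serials, to be checked with `ls /dev/disk/by-id/`; note this wipes both disks):

```shell
# Create a two-way ZFS mirror (RAID 1) from the two 2TB SSDs;
# ashift=12 aligns the pool to 4K physical sectors
zpool create -o ashift=12 vmpool mirror \
  /dev/disk/by-id/ata-CT2000MX500SSD1_SERIAL1 \
  /dev/disk/by-id/ata-CT2000MX500SSD1_SERIAL2

# Register the pool as Proxmox storage for VM disks and container
# root filesystems, with thin-provisioned zvols
pvesm add zfspool vmdata --pool vmpool --content images,rootdir --sparse 1
```

Either disk can then fail without taking the VMs down, and a replacement is resilvered with `zpool replace`.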
The third possible configuration is:
- The same as the second, except the 1TB SSD is used for both Proxmox VE and backups.
The upsides of the first configuration seem to be:
- The VMs have their own dedicated disk, which reduces the I/O impact on the host/Proxmox VE when there are many simultaneous reads/writes.
- One disk is dedicated to backups, so the boot disk and/or the VM disk can fail without losing VM data.
The downsides of the first configuration seem to be:
- The VM disks are not mirrored, so a failure of the VM disk causes a lot of downtime. (The disk needs to be replaced before the VMs can be restored.)
The upsides of the second configuration seem to be:
- If one of the two VM disks fails, the VMs can keep running on the remaining disk, since the ZFS mirror keeps them redundant.
The downsides of the second configuration seem to be:
- The configuration leaves no dedicated disk for backups.
The upsides of the third configuration seem to be:
- The disks for the VMs are redundant.
- There is space for local backups.
The downsides of the third configuration seem to be:
- Proxmox VE and the backups have to share the disk, which increases reads/writes and wears out the SSD quicker. (And since, as far as I know, the VM settings/configs are stored only in Proxmox VE itself and in the backups, that SSD failing would be a real problem.)
- Running backups might slow down the host/Proxmox VE, since they share the same SSD.
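To get a feel for the wear concern, here is a rough back-of-the-envelope estimate. The 360 TBW endurance rating is what I believe Crucial publishes for the 1TB MX500 (worth double-checking on the datasheet), and the 50 GB/day write volume is purely my own assumption:

```python
# Rough SSD endurance estimate for the shared boot/backup disk.
# TBW rating: assumed 360 TBW for the 1TB MX500 (verify on the datasheet).
# Daily write volume: a made-up figure for host writes + backup writes.

def years_until_tbw(tbw_tb: float, daily_writes_gb: float) -> float:
    """Years until the rated TBW is reached at a constant write rate."""
    total_writes_gb = tbw_tb * 1000  # TB -> GB (decimal, as vendors rate it)
    return total_writes_gb / daily_writes_gb / 365

# e.g. ~50 GB of backups + host writes per day on the 1TB boot disk
print(round(years_until_tbw(360, 50), 1))  # -> 19.7
```

Even at that rate the rated endurance would take well over a decade to exhaust, so the wear may matter less than the single-point-of-failure aspect.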
My server configuration:
- 1x Intel Xeon E5 2630v4 (10 cores/20 threads)
- 1x 256 GB REG ECC
- 1x 1 TB SSD (No RAID) - Boot disk (Crucial MX500 1TB 3D NAND SATA 2.5-inch/CT1000MX500SSD1)
- 2 x 2 TB SSD (No RAID) (Crucial MX500 2TB 3D NAND SATA 2.5-inch/CT2000MX500SSD1)
Any advice and/or suggestions are appreciated.