Hello,
I have Proxmox VE 5 running on an HP ProLiant DL380p Gen8 with 96 GB RAM and two Xeon E5-2670 CPUs @ 2.60 GHz. Storage is 8 x 3 TB 7.2k SAS HDDs on a Smart Array P420i controller (in HBA mode for ZFS).
The Proxmox system was installed on an SD card; the VM data is stored in a ZFS RAID (see below). I have installed a Windows Server 2016 VM and set up RDS services.
The latest Red Hat virtio drivers are installed and the system runs error-free. Nevertheless, when you log on to the server remotely, everything is very sluggish and programs take a long time to load - the perceived performance for the users is inadequate.
Question 1:
What tools should I use to find the cause of the problem, and what is the best way to maximize performance?
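For context, these are the commands I was planning to run on the host while RDS users are active - just a rough sketch, the pool name `rz2pool` is from my config below:

```shell
# Per-vdev throughput and latency, one snapshot plus a 5-second refresh:
zpool iostat -v rz2pool 5

# ARC hit/miss rates (arcstat, or arcstat.py on older ZFS-on-Linux versions):
arcstat 5

# Per-disk utilization and await times (from the sysstat package):
iostat -xm 5

# Which processes/VMs are actually generating I/O right now:
iotop -o
```

Is that the right set of tools, or is there something better for narrowing down whether it is disk, ARC, or the guest itself?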
Question 2:
Does it make sense to shrink the ZFS pool cache and install Proxmox in the freed space? If not - how can I install the system on an additional SSD without losing the existing virtual machines?
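In case it helps frame question 2: my rough (untested) idea for moving to an SSD would be the following - storage and pool names are taken from my own config:

```shell
# Sketch only - untested. Back up guest configs and the VMs first:
cp -a /etc/pve/qemu-server /root/qemu-server-backup
vzdump --all --mode snapshot --storage rz2pool-iso

# Export the data pool before reinstalling Proxmox on the SSD:
zpool export rz2pool

# After a fresh install on the SSD, import the pool again:
zpool import rz2pool

# Re-add the storage definition and restore the backed-up configs:
pvesm add zfspool rz2pool-vm --pool rz2pool/vm --content images
```

Would that work, or am I missing a step?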
Code:
root@sv-pve:~# zpool status -v
  pool: rz2pool
 state: ONLINE
  scan: scrub repaired 0 in 3h37m with 0 errors on Sun Aug 13 04:01:08 2017
config:

        NAME                                                      STATE     READ WRITE CKSUM
        rz2pool                                                   ONLINE       0     0     0
          raidz2-0                                                ONLINE       0     0     0
            scsi-35000cca01a834670                                ONLINE       0     0     0
            scsi-35000cca01a791e24                                ONLINE       0     0     0
            scsi-35000cca01a769f54                                ONLINE       0     0     0
            scsi-35000cca01a83cd30                                ONLINE       0     0     0
          raidz2-1                                                ONLINE       0     0     0
            scsi-35000cca01a841db0                                ONLINE       0     0     0
            scsi-35000cca01a832f30                                ONLINE       0     0     0
            scsi-35000cca01a69f22c                                ONLINE       0     0     0
            scsi-35000c50041ca84ab                                ONLINE       0     0     0
        logs
          mirror-2                                                ONLINE       0     0     0
            nvme-Samsung_SSD_960_EVO_250GB_S3ESNX0J315970N-part1  ONLINE       0     0     0
            nvme-Samsung_SSD_960_EVO_250GB_S3ESNX0J315973V-part1  ONLINE       0     0     0
        cache
          nvme-Samsung_SSD_960_EVO_250GB_S3ESNX0J315970N-part2    ONLINE       0     0     0
          nvme-Samsung_SSD_960_EVO_250GB_S3ESNX0J315973V-part2    ONLINE       0     0     0

errors: No known data errors

  pool: zspool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Aug 13 00:24:17 2017
config:

        NAME          STATE     READ WRITE CKSUM
        zspool        ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            s1-part3  ONLINE       0     0     0
            s2-part3  ONLINE       0     0     0

errors: No known data errors
Code:
root@sv-pve:~# pvesm status
Name             Type     Status       Total        Used         Available    %
local            dir      active       14900200     7956460      6167140      53.40%
rz2pool-ct       zfspool  active       5947140075   139          5947139936   0.00%
rz2pool-iso      dir      active       5956548864   9409024      5947139840   0.16%
rz2pool-vm       zfspool  active       10941639743  4994499807   5947139936   45.65%
Code:
root@sv-pve:~# pveperf
CPU BOGOMIPS: 165988.80
REGEX/SECOND: 709588
HD SIZE: 14.21 GB (/dev/mapper/pve-root)
BUFFERED READS: 13.64 MB/sec
AVERAGE SEEK TIME: 93.60 ms
FSYNCS/SECOND: 28.10
DNS EXT: 36.27 ms
DNS INT: 1.24 ms (example.com)
Code:
root@sv-pve:~# pveperf /rz2pool/vm
CPU BOGOMIPS: 165988.80
REGEX/SECOND: 715164
HD SIZE: 5674.48 GB (rz2pool/vm)
FSYNCS/SECOND: 784.55
DNS EXT: 35.56 ms
DNS INT: 1.27 ms (example.com)