I've started reading every wiki page and a bunch of forum posts, but the "best practices" for a stable production server are a little vague.
Does anybody have any recommendations for sysctl settings? Network or kernel tweaks? Or do I need to pay somebody for this kind of detail and support?
Is...
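For what it's worth, a minimal sketch of the kind of sysctl tuning that often comes up for Proxmox hosts. The values are illustrative assumptions, not official recommendations, and should be tested before production use:

```shell
# /etc/sysctl.d/99-pve-tuning.conf -- illustrative values only
# Larger socket buffers, often suggested for 10Gb networking
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Discourage swapping out VM memory under moderate pressure
vm.swappiness = 10
# Raise the conntrack table limit on hosts running many VMs/containers
net.netfilter.nf_conntrack_max = 1048576
```

Apply with `sysctl --system` and verify with `sysctl <key>`; whether any of these actually help depends entirely on the workload and NIC setup.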
Several customers have asked us how to get the best possible storage latency out of Proxmox and QEMU (without sacrificing consistency or durability). Typically, the goal is to maximize database performance and improve benchmark results when moving from VMware to Proxmox. In these cases...
We have a 5-host hyperconverged Proxmox 7.1 cluster with Ceph as VM storage (5 SSD OSDs per host). My understanding is that Ceph I/O depends heavily on available CPU power.
Would it make sense to prioritize the OSD processes over any VM process, so that (near-)full CPU power is available for Ceph even under...
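One way to experiment with prioritizing the OSDs, as a sketch rather than a tested recommendation: systemd lets you raise the CPU scheduling weight of the `ceph-osd@` units relative to other processes via a drop-in override (the weight value here is an assumption; the default is 100):

```shell
# Drop-in override raising the CPU weight for every OSD instance
mkdir -p /etc/systemd/system/ceph-osd@.service.d
cat > /etc/systemd/system/ceph-osd@.service.d/cpu.conf <<'EOF'
[Service]
# Illustrative value; default CPUWeight is 100
CPUWeight=400
EOF
systemctl daemon-reload
# New weight takes effect when the OSD services are restarted
```

Note this only matters when the host is actually CPU-saturated; under normal load the scheduler should already give the OSDs what they need.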
Just a little question: my PBS is configured using ZFS, and compression has been left at the default of "on" (source "local"), which resolves to lz4.
Should this be left at the default "on" value?
Is there any benefit to using compression with PBS (i.e., isn't PBS using its own compression - in...
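For checking and changing this, a sketch (the dataset name `rpool/pbs-datastore` is a placeholder). Since PBS already compresses its backup chunks itself, the ZFS-level setting mostly affects metadata and non-chunk data, and the `compressratio` property shows what it is actually achieving:

```shell
# Inspect the current setting and where it comes from (local vs. default)
zfs get -o name,value,source compression rpool/pbs-datastore
# lz4 is nearly free on modern CPUs, so leaving it on is usually harmless
zfs set compression=lz4 rpool/pbs-datastore
# See how much compression is actually being achieved on this dataset
zfs get compressratio rpool/pbs-datastore
```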
Hi All,
I have a Ceph cluster with 3 HPE nodes, each with 10x 1TB SAS and 2x 1TB NVMe; config below.
The replication and Ceph network is 10Gb, but the performance is very low...
In a VM I get (sequential): read 230 MB/s, write 65 MB/s.
What can I do or check to tune my storage environment?
# begin...
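To narrow down where the bottleneck is, it can help to benchmark below the VM layer first, for example with fio's rbd engine directly against an RBD image (a sketch; it assumes fio was built with rbd support, and the pool name `vmpool` and image name `testimg` are placeholders):

```shell
# 4K random-write IOPS/latency test against Ceph via librbd,
# bypassing the VM and QEMU entirely
fio --name=rbdtest --ioengine=rbd --pool=vmpool --rbdname=testimg \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --direct=1 --runtime=60 --time_based --group_reporting
```

If the raw RBD numbers are already low, the problem is in Ceph (OSDs, network, replication); if they are much higher than inside the VM, look at the VM disk settings (cache mode, iothread, virtio-scsi vs. virtio-blk) instead.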
I just started testing the PBS backup client for some advanced backup scenarios. One question, of course, is how to get maximum performance out of the server that creates the backups.
In multiple larger infrastructures there are so-called 'backup workers' (VMs) that have plenty of CPU and RAM, as...
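As an illustration of the client side, a typical invocation from such a worker VM looks roughly like this (the user, host, datastore and path are placeholders, not values from the thread):

```shell
# Authenticate non-interactively (an API token can be used instead)
export PBS_PASSWORD='...'
# Back up a directory tree as a pxar archive to a PBS datastore
proxmox-backup-client backup data.pxar:/srv/data \
    --repository backup@pbs@pbs.example.com:store1
```

Since the client does chunking, hashing and compression locally, plenty of CPU on the worker is exactly what this step benefits from.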
Hi,
I wonder if anyone has experience with this and can comment.
I've just spent some time reviewing a pair of Lenovo servers that have this HW RAID controller: 2 identical nodes in a small Proxmox cluster, on Proxmox 5.latest.
There is no problem with the controller being recognized and...
Hello,
If I have to create a ZFS pool on a couple of SATA HDDs or SSDs, I currently use:
zpool create -f -o ashift=12 rpool mirror /dev/sda /dev/sdb
Question:
Is this correct, or should I add some ZFS tuning options like "compression=on" to get the best I/O performance?
Thanks
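A commonly cited variant of that command, with a few properties set at pool creation (a sketch; whether each one helps depends on the workload, and the `/dev/disk/by-id/` paths are placeholders for your actual disks):

```shell
# ashift=12 for 4K-sector disks; lz4 compression is cheap and usually a net win;
# atime=off avoids a metadata write on every read; xattr=sa stores extended
# attributes inline. by-id paths survive device renames across reboots.
zpool create -f -o ashift=12 \
    -O compression=lz4 -O atime=off -O xattr=sa \
    rpool mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
```

Note the distinction: `-o` sets pool properties (like ashift), while `-O` sets filesystem properties (like compression) on the root dataset at creation time.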