Hello,
I replaced the SSD used as ZFS log device with a "Samsung SM883 240GB 2.5in Data Center SATA 6Gbps":
Seq. Read 540 MB/s Seq. Write 480 MB/s
Ran. Read 97K IOPS Ran. Write 22K IOPS
Not really a big improvement...
Uploaded 110 chunks in 5 seconds.
Time per request: 46514 microseconds...
@spirit
indeed, the Crucial's write performance is not very good:
Sustained Sequential Write up to (128k transfer) 95MB/s
Sustained Sequential Read up to (128k transfer) 500MB/s
I read that a small-capacity SSD is recommended, so I just took the first small drive I could find...
Hello spirit,
I will reconfigure my disks in Raid10 and do some tests.
For the CPU... are you sure AES is not supported? Intel says this:
Intel® AES New Instructions Yes
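To double-check on the node itself rather than relying on the spec sheet, the kernel's CPU flags can be inspected (a quick sketch; a non-zero count means the `aes` flag, i.e. AES-NI, is exposed to the OS):

```shell
# Count logical CPUs advertising the "aes" flag; 0 means no AES-NI visible.
grep -c '\baes\b' /proc/cpuinfo
```

Note that AES-NI can also be disabled in the BIOS/UEFI even when the CPU supports it, so a result of 0 is worth checking there first.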
As for the SSD log device, I did not see any improvement with it, and when I monitor it with iostat, I see no activity on it at all.
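A per-vdev view makes it easy to see whether the log device is ever touched (a sketch; the pool name `rpool` is an assumption, substitute your actual pool):

```shell
# Print per-vdev I/O statistics every 2 seconds; the "logs" section
# shows activity on the SLOG device specifically.
zpool iostat -v rpool 2
```

If the log vdev stays idle, that is expected with this workload: a separate ZFS log device (SLOG) only absorbs synchronous writes, and backup chunk writes are largely asynchronous, so a fast SLOG often makes little difference here.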
Hello,
I just set up my first PBS and I am getting very poor performance:
Uploaded 99 chunks in 5 seconds.
Time per request: 51253 microseconds.
TLS speed: 81.83 MB/s
SHA256 speed: 188.74 MB/s
Compression speed: 317.88 MB/s
Decompress speed: 582.48 MB/s
AES256/GCM speed: 112.44 MB/s
Verify speed...
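For anyone wanting to reproduce numbers like the above, they come from the client's built-in benchmark; the repository string below is an example, adjust user, host, and datastore to your setup:

```shell
# With --repository the TLS upload speed is measured against the live PBS;
# without it, only the local CPU-bound tests (SHA256, compression, AES) run.
proxmox-backup-client benchmark --repository root@pam@192.168.1.10:storage0
```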
Ok I think I figured out how to do it.
I deleted the "storage0" datastore which pointed to the ZFS path "/mnt/datastore/storage0"
and created a datastore "pve1" at /mnt/datastore/storage0/pve1 and a datastore "pve2" at /mnt/datastore/storage0/pve2.
Is that the right way to go?
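On the CLI, the layout described above would look roughly like this (a sketch using the paths and names from the post):

```shell
# Create two datastores on the same ZFS pool, one per PVE cluster.
proxmox-backup-manager datastore create pve1 /mnt/datastore/storage0/pve1
proxmox-backup-manager datastore create pve2 /mnt/datastore/storage0/pve2
```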
I admit I had thought of this possibility, but the number of physical hard disks is not unlimited...
I made a RAID-Z1 with 3 disks for my 1st datastore.
How do I make a 2nd one?
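If spare disks are available, a second RAID-Z1 pool plus a datastore on top of it might look like this (a sketch; pool name, datastore name, and device paths are placeholders, and stable `/dev/disk/by-id/` paths are preferable in practice):

```shell
# Second 3-disk RAID-Z1 pool (device names are examples only).
zpool create tank2 raidz1 /dev/sdd /dev/sde /dev/sdf
# Point a new datastore at the new pool.
proxmox-backup-manager datastore create store2 /tank2/store2
```

Otherwise, as noted elsewhere in the thread, multiple datastores can share one pool by using separate directories.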
Hello,
I hope this question has not been asked already; I searched beforehand but found nothing.
I have 2 PVE nodes and 1 PBS with 1 datastore.
I connected 2 PVEs to the PBS.
The 1st PVE is mine, the 2nd is not mine.
PVE 1 sees the backups of PVE 2.
PVE 2 sees the backups of PVE 1.
Is it possible to isolate...
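One way to keep the two PVE clusters from seeing each other's backups is a dedicated PBS user per cluster, each granted access only to its own datastore (a sketch assuming two datastores named `pve1` and `pve2` already exist; user names are examples):

```shell
# One PBS user per PVE cluster.
proxmox-backup-manager user create pve1@pbs
proxmox-backup-manager user create pve2@pbs
# Restrict each user to its own datastore with the DatastoreBackup role.
proxmox-backup-manager acl update /datastore/pve1 DatastoreBackup --auth-id pve1@pbs
proxmox-backup-manager acl update /datastore/pve2 DatastoreBackup --auth-id pve2@pbs
```

Each PVE side then configures its PBS storage with its own credentials, so it can create and restore only its own backups.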
I do not understand, because quorum detected my node as "down" rather quickly.
But HA is really slow to start the VM/CT on another node. I searched everywhere in the HA settings and do not see any tunable value.
Hello,
I just built a test lab with 3x Proxmox 5.4 with Ceph and HA.
I configured the IPMI watchdog with a 10 second timeout.
Despite that, if I disconnect a node, the other two take almost 3 minutes to notice it and restart the VMs.
Is it possible to optimize this time to less than 1 minute ?
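For reference, Proxmox HA deliberately waits until it can be certain the failed node has been fenced by its watchdog before recovering services, so a recovery time of a couple of minutes is expected even with a short hardware watchdog timeout; the watchdog timer mostly governs self-fencing, not detection. The watchdog module itself is selected like this (config fragment; `ipmi_watchdog` assumes an IPMI BMC, adjust for your hardware):

```shell
# /etc/default/pve-ha-manager
WATCHDOG_MODULE=ipmi_watchdog
```

After changing it, the host needs a reboot (or the module loaded and pve-ha-manager services restarted) for the new watchdog to take effect.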