Client, yes: backup is limited to about 512 MB/s (ZFS mirror on 2x Intel DC P4510). This is the Xeon 6146 host with the older PVE.
INFO: Starting Backup of VM 3457 (qemu)
INFO: Backup started at 2022-08-09 04:23:19
INFO: status = running
INFO: VM Name: SNIP
INFO: include disk 'scsi0'...
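For what it's worth, per-vdev throughput during a running backup can be watched with zpool iostat, e.g.:

zpool iostat -v 1    # all pools, 1-second intervals, per-vdev breakdown

That makes it fairly obvious whether the mirror itself tops out around 512 MB/s or whether the limit is elsewhere (CPU, network, PBS side).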
Hi,
I'm running PBS with dual E5-2660 v3 CPUs. It seems that each backup task from PVE 6.4 can achieve about 130-180 MB/s;
multiple tasks at the same time add up, so I don't think it is a storage bottleneck.
PBS seems to make poor use of SMP, so I have been thinking of swapping the CPUs to E5-2643 v4 (fewer but faster cores)...
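One quick data point for the per-task ceiling is the built-in benchmark, which prints SHA-256, compression and AES throughput, plus a TLS upload speed if you point it at a datastore - something like this (the repository string is just an example):

proxmox-backup-client benchmark
proxmox-backup-client benchmark --repository backup@pbs@pbs.example.com:store1

If those single-stream numbers land in the same 130-180 MB/s ballpark, the per-task limit is more likely CPU/TLS than the disks.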
Hi,
I'm considering building a larger PBS with at least 30 TB of disk space,
and I'm wondering what kind of hardware is recommended for the following (rough throughput numbers below):
- 30 TB of storage with an upgrade path to at least 100 TB
- verify must perform extremely well (max 12 h to verify all data)
- restore should be able to saturate a 10G link
-...
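For scale, some back-of-the-envelope numbers behind those targets:

verify 30 TB in 12 h   ->  30,000,000 MB / 43,200 s   ≈ 0.7 GB/s sustained read + hash
verify 100 TB in 12 h  ->  100,000,000 MB / 43,200 s  ≈ 2.3 GB/s sustained read + hash
restore on a 10G link  ->  10 Gbit/s                  ≈ 1.1-1.2 GB/s out of the datastore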
Disabling compression and checksumming was one of the first things I tested.
The CPU should not be a bottleneck either; I tested with a 6242R, a 5950X and an 8700K.
By running fio against the disk directly I do manage to achieve about 600k IOPS, and about 1.1M IOPS with an mdraid mirror; with ZFS it doesn't matter whether it is a single disk or a mirror, it seems to be capped at about 150-170k IOPS, which means about 80% of the performance is lost with ZFS.
with 256 just want to...
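For completeness, that test was nothing fancy - compression and checksumming can be switched off per dataset like this (the dataset name is just an example):

zfs set compression=off zfs-local/benchmark
zfs set checksum=off zfs-local/benchmark
zfs get compression,checksum zfs-local/benchmark    # confirm both properties are off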
Please ignore the different pool/zvol names, it is a different system.
I have just created the benchmark zvol, wrote random data to it with fio and then ran a fio randread benchmark for 10 minutes;
the box is not yet in use.
zfs create -V 100gb zfs-local/benchmark
# zpool status
pool: zfs-local
state...
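If anyone wants to reproduce it, a 10-minute randread run against the zvol looks roughly like this (block size, iodepth and numjobs are example values, not necessarily what I used):

fio --name=randread --filename=/dev/zvol/zfs-local/benchmark --rw=randread \
    --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=600 --time_based --group_reporting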
I'm trying to find out why ZFS is pretty slow when it comes to read performance.
I have been testing with different systems, disks and settings.
Testing directly on the disk I'm able to achieve reasonable numbers, not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
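The "directly on the disk" numbers are from plain fio randread against the raw block device, along these lines (device path and job options are just examples; randread with --readonly does not touch the data):

fio --name=raw-randread --filename=/dev/nvme0n1 --rw=randread \
    --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --readonly --group_reporting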
Did some tests with an active-backup bond with 2 ports (config sketch below):
a) bond/bridge across 2 Broadcom network cards - error
b) bond/bridge on a single dual-port Broadcom network card - works
c) bond/bridge across 2 Intel network cards - works
d) bond/bridge across Broadcom and QLogic network cards - works
it would...
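For reference, the bond/bridge itself is the usual PVE-style /etc/network/interfaces setup, roughly like this (interface names and addresses are just examples):

auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp129s0f0
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

The only thing that changes between the cases above is which physical ports end up in bond-slaves.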
Simple workaround for my issue:
Actually it didn't help; starting a VM caused the issue to come back,
and to fix it I had to reconnect the network ports to the same card.
Changing allow-hotplug to auto didn't help, same error.
I have tried without updelay - no change.
The strange thing is that the issue only happens when using 2 different network cards; 2 ports on the same card work just fine.
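When it's in the failing state, comparing the bonding driver's view of the slaves between the working and the broken layout is useful, e.g.:

cat /proc/net/bonding/bond0    # active slave, link status and failure counts per slave
ip -d link show bond0          # bond mode and flags as the kernel sees them

(bond0 is just the example name here.)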
Hello,
did you find a solution to this problem? I'm having exactly the same issue.
I'm running the BCM57416 model:
a bond with 2 ports on the same card works,
a bond with 2 ports split across 2 different cards doesn't - no data available.
br
Hello,
how critical is it if a few nodes of a 20-node cluster are located on a remote site with 10 to 20 ms of latency?
There is no HA or shared storage involved.
Will it work? Are there any considerations, e.g. live migration (used rarely)?
Thank you,
phil