I'm considering building a larger PBS with at least 30 TB of disk space,
and I'm wondering what kind of hardware is recommended for:
- 30 TB of storage with an upgrade path to at least 100 TB
- verify must perform extremely well (max 12 h to verify all data)
- restore should be able to saturate a 10G link
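For context, here is a quick back-of-the-envelope calculation of what those two requirements imply in sustained throughput (my own arithmetic, decimal units assumed, not numbers from any spec):

```python
# Rough throughput targets implied by the requirements above
# (30 TB verified in 12 h, restore saturating a 10 Gbit/s link).

DATA_BYTES = 30 * 10**12          # 30 TB of backup data to verify
VERIFY_WINDOW_S = 12 * 3600       # 12 hour verify window

# sustained read rate needed to verify everything in time
verify_mb_s = DATA_BYTES / VERIFY_WINDOW_S / 10**6
print(f"verify needs ~{verify_mb_s:.0f} MB/s sustained reads")

# what a saturated 10 Gbit/s link works out to in MB/s
restore_mb_s = 10 * 10**9 / 8 / 10**6
print(f"restore needs ~{restore_mb_s:.0f} MB/s to saturate 10G")
```

So the verify requirement alone already asks for roughly 700 MB/s of sustained reads at 30 TB, and scales to several GB/s at 100 TB.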
Running fio against the disk directly, I manage to achieve about 600k IOPS, and about 1.1M IOPS in an mdraid mirror. With ZFS it doesn't matter whether it's a single disk or a mirror, it seems to be capped at about 150-170k IOPS, which means roughly 80% of the performance is lost with ZFS.
Please ignore the different pool/zvol names, this is a different system.
I just created the benchmark zvol, wrote random data with fio, and then ran the fio randread benchmark for 10 minutes.
The box is not yet in use.
zfs create -V 100G zfs-local/benchmark
# zpool status
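For completeness, the fill-then-read procedure described above would look roughly like this (my reconstruction; the device path is real for that zvol name, but iodepth, numjobs and block size are guesses, not taken from the original run):

```
# fill the zvol with random data first, so reads hit real blocks
fio --name=fill --filename=/dev/zvol/zfs-local/benchmark \
    --rw=write --bs=1M --direct=1 --ioengine=libaio --end_fsync=1

# 10-minute 4k random-read benchmark
fio --name=randread --filename=/dev/zvol/zfs-local/benchmark \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --numjobs=4 --runtime=600 --time_based --group_reporting
```

Filling the zvol first matters on ZFS: reads of never-written blocks return without touching the disks, which would inflate the numbers.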
I'm trying to find out why ZFS is pretty slow when it comes to read performance.
I've been testing with different systems, disks and settings.
Testing directly on the disk I'm able to achieve reasonable numbers, not far from the spec sheet: 400-650k IOPS (P4510 and some Samsung-based HPE)...
Did some tests with an active-backup bond across 2 ports:
a) bond/bridge across 2 Broadcom network cards - error
b) bond/bridge on a single dual-port Broadcom network card - works
c) bond/bridge across 2 Intel network cards - works
d) bond/bridge across Broadcom and QLogic network cards - works
Changing allow-hotplug to auto didn't help, same error.
I have also tried without updelay - no change.
The strange thing is that the issue only happens when using 2 different network cards; 2 ports on the same card work just fine.
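For reference, the kind of ifupdown config being tested here would look something like this (interface names and addresses are placeholders, not from the original posts):

```
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp2s0f0
        bond-mode active-backup
        bond-miimon 100
        bond-primary enp1s0f0

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```

With the slaves on two different cards this is exactly the layout that fails in case a) above.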
Did you find a solution to this problem? I'm having exactly the same issue.
I'm running the BCM57416 model.
A bond with 2 ports on the same card works;
a bond with 2 ports split across 2 different cards doesn't - no data available.
How critical is it if a few nodes of a 20-node cluster are located on a remote site with 10 to 20 ms latency?
There is no HA or shared storage involved.
Will it work? Are there any considerations, e.g. live migration? (used rarely)
I would try resetting the BIOS settings; there is no need to change anything, except maybe the power options.
The P420 in HBA mode, AFAIK, doesn't allow booting from it, so you will need some other boot media - at least for GRUB.
VE 5 and VE 6 work just fine on Gen8.
Hello Proxmox Staff,
What are the official recommendations for SAN storage?
If I look at https://pve.proxmox.com/wiki/Storage
then the only option is LVM, but it doesn't offer snapshots.
Is there any other solution with block-level access and snapshots on an FC LUN?
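For comparison, the shared-LVM option from that wiki page boils down to a storage.cfg entry along these lines (storage ID and VG name are placeholders I made up; this is the setup that works but lacks snapshots):

```
# /etc/pve/storage.cfg - LVM volume group on a shared FC LUN
lvm: fc-lvm
        vgname vg_fc_san
        content images
        shared 1
```

The question is essentially whether anything with the same shared block-level semantics can also do snapshots.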
Hello Proxmox Staff,
I'm currently managing a Proxmox VE 5 cluster with 20 nodes without shared storage,
and I'm currently in the process of adding another 4 nodes.
I'm also thinking about moving to VE 6.2.
I estimate that moving all VMs will take up to 4 weeks (offline move with a maintenance window)...