If you are sure that the device is not in use, you can follow this guide I wrote for myself:
dmsetup info /dev/sata/vm-210-disk-1
Read Ahead: 256
Tables present: LIVE
Open count: 6
Event number: 0
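If the "Open count" there is 0 (in the example output above it is 6, so something is still using the volume), the mapping can be removed. A minimal sketch of the follow-up step, assuming dmsetup accepts the same device path as for the info command:

# only do this if "Open count: 0" - otherwise find out first what holds the device open
dmsetup remove /dev/sata/vm-210-disk-1
# verify that the mapping is really gone
dmsetup info /dev/sata/vm-210-disk-1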
I simply want to add a newly created VM to a pool with pvesh (something like "pvesh add pools/Dev -members 123").
OK, pvesh doesn't know "add" - but my attempts with "set" were not successful either.
What is the right syntax?
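If I remember the API right, the pool update call takes a "vms" parameter rather than "members", so something like this might be the right syntax (untested):

# add VM 123 to the pool Dev
pvesh set /pools/Dev -vms 123
# check the pool members afterwards
pvesh get /pools/Dev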
die "bloße Aussage" von LnxBil kommt nach meiner Meinung daher, weil "messungen" mit dd nicht für alle Storage-Type vergleichbar sind.
So wird bei zfs z.b. komprimiert - und mit dd aus /dev/zero auf ein zfs-Filesystem zu schreiben, bringt zwar tolle Werte, aber die haben nichts mit der...
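To illustrate the point, a quick and dirty comparison - write once from /dev/zero and once from /dev/urandom (paths and sizes are only placeholders, and note that /dev/urandom itself can be the bottleneck; fio is the better tool for real benchmarks):

# zeros compress almost completely with compression=on -> unrealistically high MB/s
dd if=/dev/zero of=/rpool/test/zero.img bs=1M count=4096 conv=fdatasync
# incompressible data gives a much more realistic number
dd if=/dev/urandom of=/rpool/test/random.img bs=1M count=4096 conv=fdatasync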
@guletz: I will try different volblocksize values later (rough sketch below the version info).
With the new kernel, the test takes 40m27.5s and the load looks much better.
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
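Regarding the volblocksize tests mentioned above - for a manually created zvol it would look roughly like this (16k is just an example value); for disks created through PVE the "blocksize" option of the zfspool storage in storage.cfg should have the same effect, if I remember correctly:

# volblocksize can only be set at creation time, so each value needs a fresh zvol
zfs create -V 32G -o volblocksize=16k rpool/data/vm-test-disk
zfs get volblocksize rpool/data/vm-test-disk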
If I understand you right, you read the compressed data from the same zfs pool (inside a VM) where you write the output?
If you look at the throughput: (14001635328 bytes read in 75 sec - 7000817664 bytes read in 31 sec) / 44 sec / (1024*1024) gives you about 151MB/s. (this is read...
I've installed pve-6 successfully on a 6 * 6TB 4Kn HDD raidz2 with the help of a single disk (with 512b sector size) and manual reconfiguration (takes some time).
If you want, I can post the HowTo...
If you look here: https://www.seagate.com/enterprise-storage/exos-drives/exos-e-drives/exos-7e8/
you see some models with 4k sector size and some with emulated 512 bytes.
I assume you have a 4k model too.
Regarding raid-0: are your backups not important? If one disk dies - all backups...
I have the same issue with different 6TB disks.
Looks like the pve-installer can't handle native 4k sectors correctly!
Do your disks have 4k sectors (4Kn) or emulated 512b (512e)?
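You can check that quickly with the usual tools (the device name is only an example):

# physical vs. logical sector size of all disks
lsblk -o NAME,PHY-SEC,LOG-SEC
# or per disk
smartctl -i /dev/sdb | grep -i sector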
BTW, zfs raid 0 is a bad idea.
Sorry - you must be measuring caching there!
With a 5-OSD HDD cluster on a 1Gb network you will never ever get 80-100MB/s throughput in a VM / single thread.
OK, with replica 3 and 5 OSDs you could get 100/(5/3) = 60MB/s inside a VM with 100MB/s per OSD - but I don't think you will reach such values!
I would NOT run ceph-mons outside the pve-cluster, only osd-nodes (which is fine if you have enough resources).
Especially the ceph-mons should all run the same (and the newest) version - the OSDs are not so critical...
If you look here: https://docs.ceph.com/docs/jewel/start/hardware-recommendations/ you see that you need a minimum of 3GB of free RAM for one OSD.
And newer versions need more RAM - see also here: https://unix.stackexchange.com/questions/448801/ceph-luminous-osd-memory-usage?rq=1
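If the RAM is really tight, the BlueStore memory target can be lowered a bit (at the cost of cache and therefore performance) - roughly like this in ceph.conf, assuming a newer Luminous or later release with BlueStore; the 2GB value is only an example:

[osd]
# default is 4GB per OSD - only lower it if you really have to
osd_memory_target = 2147483648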
Second - Linux uses...