Actually, let's think differently: I can use no compression at all, because the files get compressed with LZ4 on the backup server anyway, at least via ZFS.
But this is still a bummer. It simply means that 1GB/s is the maximum backup speed for everyone who doesn't disable compression.
Cheers
EDIT...
Has anyone here reached a backup speed higher than 800MB/s or 1GB/s?
Maybe that's some sort of PBS limit.
It's getting weirder!
I created a VM with PBS on another Genoa server; the measured write speed inside the VM is 1.5GB/s and read is around 5GB/s.
But that's a ZVOL issue that I'm aware of, the...
Let's start with the basic tuning parameters that I use:
-o ashift=12 \
-O special_small_blocks=128k \
-O xattr=sa \
-O dnodesize=auto \
-O recordsize=1M \
That means logbias is at its default (latency), plus a special vdev.
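For context, a minimal sketch of how those flags fit into a full pool creation with a special vdev; the pool name, disk paths and vdev layout below are only placeholders, not my actual setup:

# sketch only: pool name, disks and layout are placeholders
zpool create -o ashift=12 \
  -O special_small_blocks=128k \
  -O xattr=sa \
  -O dnodesize=auto \
  -O recordsize=1M \
  tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  special mirror /dev/nvme0n1 /dev/nvme1n1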
Tests:
logbias=latency + special vdev: INFO: Finished Backup of VM 166...
That's still the only way to delete the stupid chunk files, if you need to, lol.
However, that command loops through all files and touches them individually, which is a big loop, and the execution time of touch itself comes into play as well.
I have a better idea to speed that crap up by at least a factor of...
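Just as a rough sketch of what I mean by batching the touch calls, something along these lines should cut the per-file process overhead (the datastore path is only an example):

# one touch process for many chunks instead of one process per file (path is an example)
find /path/to/datastore/.chunks -type f -print0 | xargs -0 touch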
All Samsung consumer NVMe drives are pure crap. The SATA versions like the 870 EVO/Plus are pretty good though (in reliability, not IOPS).
But other brands can be even worse, or better in TBW but with lower speed/worse latency.
So that's why I myself use something like 970/980/990 Pros/EVOs xD...
-> I checked with a script that does:
dd if=/dev/zero of=/mnt/$diskid/1m_test bs=1M count=8000 oflag=direct
the write speeds for each disk.
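The loop itself is nothing special; roughly something like this, assuming every disk is mounted under /mnt/<diskid> (that mount layout is just how I have it here):

#!/bin/bash
# sequential write test per disk; assumes each disk is mounted under /mnt/<diskid>
for diskid in $(ls /mnt); do
  echo "== /mnt/$diskid =="
  dd if=/dev/zero of=/mnt/$diskid/1m_test bs=1M count=8000 oflag=direct
  rm -f /mnt/$diskid/1m_test
done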
-> All disks support both 4k and 512b logical block size, but they ship with 512b by default.
---> There is absolutely no performance difference between...
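If you want to check or switch the logical block size yourself, nvme-cli can do it; something like the following (device path is an example, the index of the 4k format differs per drive, and formatting destroys all data on the namespace):

nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"   # list supported LBA formats, shows which is in use
nvme format /dev/nvme0n1 --lbaf=1                # switch to another format - WIPES THE NAMESPACE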
zpool status
  pool: HDD_Z2
 state: ONLINE
  scan: scrub repaired 0B in 06:17:32 with 0 errors on Sun May 12 06:41:33 2024
config:

        NAME     STATE     READ WRITE CKSUM
        HDD_Z2   ONLINE       0...
It was the same game back then with dark mode. As long as the script keeps getting updated/adapted for new PVE versions, it's fine.
The two files that get swapped out aren't a problem either; they get replaced by PVE on update anyway.
The only thing worth thinking about is sensors-detect in the...
That is gigantic!
Hopefully this gets integrated directly into Proxmox, just like dark mode back then; this is an absolutely insane extension.
Having to look into ILO/IPMI/IBMC every time to see the temperatures, and those crappy management interfaces are so damn slow, refreshing one...
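For anyone who just wants the raw values on the host without the UI script, lm-sensors already provides them; a quick sketch (assuming a reasonably recent lm-sensors for the JSON flag):

apt install lm-sensors
sensors-detect --auto   # probe for sensor chips non-interactively
sensors -j              # JSON output, easy to parse from a script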
Didn't see that, you're right.
What platform is that? I have strange issues with Genoa and hyperthreading here, mentioned in another thread, but I still haven't had a chance to debug further; it's simply in production (which makes it hard to experiment with).
The hyperthreading issue I have doesn't exist on any...
I would say, for consumer SSDs like 870 EVOs etc., 900 is great.
For enterprise SSDs that's probably crap, you're right; I simply skipped enterprise SSDs on my side and went directly to enterprise NVMe drives.
So that's why I don't have any experience with enterprise SSDs.
All new servers that I build...
You wrote you're using a 16k blocksize; do you really mean volblocksize?
I'm not sure if it will have downsides for VMs (I don't think so), but it should help with the space needed for metadata.
Same for recordsize: usually, the larger the recordsize, the less metadata you need. But 128k (the...
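For reference, both are just ZFS properties; keep in mind that volblocksize is fixed when the zvol is created, while recordsize can be changed later and only affects newly written data (names and sizes below are examples):

zfs create -V 100G -o volblocksize=16k tank/vm-100-disk-0   # zvol: volblocksize can only be set at creation
zfs set recordsize=1M tank/data                             # dataset: applies to data written from now on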
Then you have no other way than using a VM.
But replication is not live. What I mean is, if one server goes down and the VM gets started on the other one, you lose 2 hours of data if you set it to sync every 2h, for example.
Just as a side note.
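To make that concrete, the sync interval is simply whatever you set in the replication schedule, e.g. with pvesr (VMID, target node and the schedule string are examples, and the schedule syntax is from memory):

pvesr create-local-job 100-0 pve2 --schedule "*/2:00"   # replicate VM 100 to node pve2 every 2 hours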
Let's simply see in a week or so, after he gets his drive and a backup.
Then he can do that without any fear and check smartctl again, or in the worst case replace the drive.
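That is, just re-run the full report and compare the reallocated/pending sector counts (device path is an example):

smartctl -a /dev/sda   # full SMART report; watch Reallocated_Sector_Ct and Current_Pending_Sector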
120TB is a lot; I don't know of any downsides, but I wouldn't do that personally.
Don't get me wrong, it will likely be just fine.
However, I would prefer using an LXC container if possible and mounting the storage directly into the LXC container (primarily to avoid the use of zvols).
Otherwise...
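For example, a dataset on the host can be bind-mounted straight into the container (container ID and paths are placeholders):

pct set 101 -mp0 /tank/data,mp=/data   # bind mount host path /tank/data into CT 101 at /data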
Ah, that's a different story. But then I believe the benchmark itself acts differently; it could be that random on VMware is actually urandom.
Can you retest with /dev/urandom on both?
/dev/random is known to be slow, and as far as I know, it's not even used anywhere on Proxmox.
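Something simple like this on both hosts would already show whether the random source is the bottleneck (bs/count are arbitrary):

dd if=/dev/random of=/dev/null bs=1M count=1024    # can block/crawl on older kernels
dd if=/dev/urandom of=/dev/null bs=1M count=1024   # non-blocking PRNG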
/dev/nvme0n1
-> That's the namespace of the NVMe, meaning the actual disk where the data/partitions live.
/dev/nvme0
-> That's the raw controller device itself; you can split it into multiple namespaces if the disk supports it, for passthrough for example, so imagine it as the PCIe port itself or something, and...
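You can see that split with nvme-cli, e.g. (device names are examples):

nvme list                              # lists controllers and their namespaces
nvme id-ctrl /dev/nvme0 | grep -w nn   # max number of namespaces the controller supports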
Don't do the dd and rm -f commands separately; do it as one command, exactly as I posted above.
Because the first command will write zeroes to your drive (into the zeroes file) until there is absolutely no space left, and the second will delete the zeroes file to free the space again.
So basically as one...
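The exact command is further up in the thread, but the shape is roughly this (the path is an example):

dd if=/dev/zero of=/mnt/pool/zeroes bs=1M ; rm -f /mnt/pool/zeroes
# ';' instead of '&&', because dd exits with an error once the drive is full and rm still has to run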
The faster one is directly on the host, tested simply with Hiren's Boot CD, so no drivers, probably not max speed, dunno.
The second (slower) one is inside a WS2019 VM, with all drivers etc...
There is definitely a big difference, but in my case it's all so fast anyway that it simply...