Search results

  1.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Actually, let's think differently: I can disable compression, because the files get compressed with LZ4 on the backup server anyway, at least via ZFS. But this is still a bummer. That simply means 1GB/s is the maximum backup speed for everyone who doesn't disable compression. Cheers EDIT...
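
    As a minimal sketch of the ZFS side of that argument (pool/dataset name is a placeholder, not from the thread), the datastore's compression can be checked and set like this:

      # check what compression the PBS datastore dataset currently uses
      zfs get compression,compressratio tank/pbs-datastore
      # enable LZ4 on the dataset so the backup job itself can skip compression
      zfs set compression=lz4 tank/pbs-datastore
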
  2.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Has anyone here reached a backup speed higher than 800MB/s or 1GB/s? Maybe that's some sort of PBS limit. It's getting weirder! I have created a VM with PBS on another Genoa server; the measured write speed is 1.5GB/s inside the VM and read is around 5GB/s. But that's a ZVOL issue that I'm aware of, the...
  3.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Let's start with the basic tuning parameters that I use: -o ashift=12 \ -O special_small_blocks=128k \ -O xattr=sa \ -O dnodesize=auto \ -O recordsize=1M \ That means logbias is left at its default (latency), plus a special vdev. Tests: logbias=latency + special vdev: INFO: Finished Backup of VM 166...
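
    As a rough sketch (pool name, raidz layout and special-vdev devices are placeholders, not from the post), those parameters map onto a zpool create call like:

      zpool create -o ashift=12 \
          -O special_small_blocks=128k \
          -O xattr=sa \
          -O dnodesize=auto \
          -O recordsize=1M \
          backup raidz2 sda sdb sdc sdd \
          special mirror nvme0n1 nvme1n1
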
  4.

    [SOLVED] Force delete of "Pending removals"

    That's still the only way to delete stupid chunk files, if you need to, lol. However, that command will loop through all files and touch them individually, which is a big loop, and the execution time of touch comes into play as well. I have a better idea to speed that crap up by at least a factor of...
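
    A minimal sketch of that idea (not the exact command from the thread; the datastore path and worker count are assumptions) is to feed the chunk files to touch in large batches with parallel workers instead of spawning one touch process per file:

      # /path/to/datastore is a placeholder for the real PBS datastore path
      find /path/to/datastore/.chunks -type f -print0 \
          | xargs -0 -n 1000 -P 8 touch
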
  5.

    [SOLVED] NVME disk "Available Spare" problem.

    All Samsung consumer NVMe drives are pure crap. The SATA versions like the 870 Evo/Plus are pretty good though (in reliability, not IOPS). But other brands can be even worse, or better in TBW but with less speed/worse latency. So that's why I'm using something like 970/980/990 Pros/Evos myself xD...
  6.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    -> I checked the write speeds for each disk with a script that does: dd if=/dev/zero of=/mnt/$diskid/1m_test bs=1M count=8000 oflag=direct -> All disks support 4k logical block size and 512b, but they ship with 512b by default. ---> There is absolutely no performance difference between...
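
    For reference, a sketch of that test and of how the logical block size could be inspected and switched with nvme-cli (device name and --lbaf index are assumptions; nvme format wipes the namespace):

      # per-disk sequential write test as described above ($diskid is a placeholder)
      dd if=/dev/zero of=/mnt/$diskid/1m_test bs=1M count=8000 oflag=direct
      # list the supported LBA formats of a namespace, then reformat it to 4k
      nvme id-ns /dev/nvme0n1 | grep -i lbaf
      nvme format /dev/nvme0n1 --lbaf=1
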
  7.

    Proxmox problem with memory limits in ARC (ZFS)

    zpool status pool: HDD_Z2 state: ONLINE scan: scrub repaired 0B in 06:17:32 with 0 errors on Sun May 12 06:41:33 2024 config: NAME STATE READ WRITE CKSUM HDD_Z2 ONLINE 0...
  8.

    Warning

    https://forum.proxmox.com/threads/wrmsr-messages-on-proxmox-8-1.138145/
  9.

    PVE and HPE Proliant DL380 Gen 9, specifically sensors

    It was the same game back then with the dark mode. As long as the script keeps getting updated/adapted to new PVE versions, it's fine. The two files that get swapped out aren't a problem either; they get replaced by PVE on update anyway. The only thing worth considering is sensors-detect in the...
  10.

    PVE and HPE Proliant DL380 Gen 9, specifically sensors

    That is gigantic! Hopefully it gets integrated directly into Proxmox, just like the dark mode back then; it's an absolutely amazing extension. Having to look into the ILO/IPMI/IBMC every time to check temperatures, and those crappy management interfaces are so damn slow, refreshing one...
  11.

    Ram Benchmarking Speed (with processing) declined on moving to Proxmox from ESXI on same hardware

    Didn't see that, you're right. What platform is that? I have strange issues with Genoa and hyperthreading here, mentioned in another thread, but I still had no chance to debug further; it's simply in production (which makes it hard to experiment with). The hyperthreading issue I have doesn't exist on any...
  12.

    Worse performance with higher specs server

    I would say, for consumer SSDs like 870 EVOs etc... 900 is great. For enterprise SSDs it's probably crap, you're right. I simply skipped enterprise SSDs on my side and went directly to enterprise NVMe drives, so that's why I don't have any experience with enterprise SSDs. All new servers that I build...
  13.

    Proxmox problem with memory limits in ARC (ZFS)

    You wrote you'll be using 16k blocksize; do you really mean volblocksize? I'm not sure if it will have downsides for VMs (I don't think so), but it should help with the space needed for metadata. Same for recordsize: usually the larger the recordsize, the less metadata you need. But 128k (the...
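
    A minimal sketch of where the two properties apply (dataset and zvol names are placeholders): recordsize is a dataset property and can be changed any time, while volblocksize is fixed when the zvol is created:

      # dataset (file-based storage): recordsize can be changed later
      zfs set recordsize=1M tank/data
      # zvol (VM disk): volblocksize has to be set at creation time
      zfs create -V 100G -o volblocksize=16k tank/vm-100-disk-0
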
  14.

    Proxmox problem with memory limits in ARC (ZFS)

    Then you have no other way than using a VM. But replication is not live; what I mean is, if one server goes down and the VM gets started on the other one, you lose 2 hours of data if you set it to sync every 2h, for example. Just as a side note.
  15.

    [SOLVED] NVME disk "Available Spare" problem.

    Let's simply see in a week or so, after he's got his drive and a backup. Then he can do that without any fear and check smartctl again, or in the worst case replace the drive.
  16.

    Proxmox problem with memory limits in ARC (ZFS)

    120TB is a lot; I don't know of any downsides, but I wouldn't do that personally. Don't get me wrong, it will likely be just fine. However, I would prefer using an LXC container if possible and mounting the storage directly into the LXC container (primarily to avoid the use of zvols). Otherwise...
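
    A minimal sketch of that approach (container ID, dataset and mount path are assumptions, not from the post): bind-mount the host dataset straight into the container instead of giving it a zvol-backed disk:

      pct set 101 -mp0 /tank/storage,mp=/mnt/storage
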
  17.

    Ram Benchmarking Speed (with processing) declined on moving to Proxmox from ESXI on same hardware

    Ah, that's a different story. But then I believe that the benchmark itself acts differently; it could be that random on VMware is actually urandom. Can you retest with /dev/urandom on both? /dev/random is known to be slow, and as far as I know, it isn't even used anywhere on Proxmox.
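
    A quick way to compare the throughput of the two sources on the host (block count is arbitrary) could look like this:

      dd if=/dev/urandom of=/dev/null bs=1M count=1024
      dd if=/dev/random  of=/dev/null bs=1M count=1024
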
  18.

    [SOLVED] NVME disk "Available Spare" problem.

    /dev/nvme0n1 -> That's the namespace of the NVMe, meaning the actual disk that data/partitions live on. /dev/nvme0 -> That's the raw disk itself; you can split it into multiple namespaces, if the disk supports it, for passthrough for example, so imagine it as the PCIe port itself or something, and...
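
    A sketch with nvme-cli showing the controller versus its namespaces (device names are assumptions):

      nvme list                              # lists namespaces such as /dev/nvme0n1
      nvme id-ctrl /dev/nvme0 | grep -w nn   # maximum number of namespaces the controller supports
      nvme list-ns /dev/nvme0                # namespace IDs that currently exist on the controller
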
  19.

    [SOLVED] NVME disk "Available Spare" problem.

    Don't run the dd and rm -f commands separately; do it as one command, exactly as I posted above, because the first command will write zeroes to your drive (into the zeroes file) until there is absolutely no space left, and the second will delete the zeroes file to free up the space again. So basically as one...
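
    The exact command referenced above isn't shown in this excerpt; as a sketch (the mount point is a placeholder), the combined form would look like:

      dd if=/dev/zero of=/mnt/disk/zeroes bs=1M; rm -f /mnt/disk/zeroes
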
  20.

    Ram Benchmarking Speed (with processing) declined on moving to Proxmox from ESXI on same hardware

    The faster one is directly on the host, tested simply with Hiren's Boot CD, so no drivers, probably not max speed, dunno. The second (slower) one is inside a WS2019 VM, with all drivers etc... There is definitely a big difference, but in my case it's all so fast anyway that it simply...
