Search results

  1. Datastore with hard disks

    I use recordsize=4M and currently achieve a compressratio of 1.21x with zstd, at about 1.2 TB in the datastore, roughly 2,200 backups and a dedup rate of 111.32x. The pool is a mirror of 2x 4TB enterprise HDDs, no special device or anything else attached. I am absolutely satisfied with the overall performance... (a dataset sketch with these settings follows after this list)
  2. VM HDD read write speed about 25% less than direct on node speed

    If you change the SCSI controller to "VirtIO SCSI single" and enable IO Thread on your VM disks, it can improve IO performance when a VM has multiple virtual disks. On top of Ceph, consider using cache=writeback to help with performance. According to Proxmox' benchmarks it can drastically... (see the qm sketch after this list)
  3. [SOLVED] Stats from last Garbage Collection

    I run a similar setup @home and @work. Demonstrated with my private PBS: there is the ZFS dataset rpool/datastore. I have set a quota on it to make sure the pool never overflows one day, keeping a 20% buffer for ZFS & OS. In addition, various... (a quota sketch follows after this list)
  4. VM HDD read write speed about 25% less than direct on node speed

    Cached reads are in no way a good performance indicator; you can basically ignore those numbers. Also consider using fio for IO benchmarking. Proxmox provides some commands and numbers for this: https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020 Also, what is your VM... (a generic fio example follows after this list)
  5. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    Running smoothly so far on a Ryzen 1700X. GPU passthrough of a GTX 1060 works. No issues at all, so far. :)
  6. Considering Proxmox for hosting - What are your thoughts

    Yes, but be aware that you can only use the bandwidth of a single 10G port *per connection*, even if you are bonding multiple 10G ports using LACP. Considering that you are running many VMs (hosting), your parallel performance is more important, I think, which can be improved by using the right...
  7. Proxmox CEPH performance

    I know that feeling. :rolleyes: :D Check out this PDF: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/ There is a comparison of a handful of SSDs, including a Samsung EVO. It's ridiculous; the Intel 3510 are not the newest drives by any...
  8. Proxmox CEPH performance

    Other users and Proxmox staff could provide a more educated answer on this, I think. But you can read this in many threads here, and it has also been my personal experience that there is a very noticeable difference between QLC and TLC/MLC drives. For use cases like ZFS and Ceph there is also that...
  9. PBS scaling out storage

    Ah, it looks like I misremembered. I've found the commit I was thinking about: https://git.proxmox.com/?p=proxmox-backup-qemu.git;a=commit;h=6eade0ebb76acfa262069575d7d1eb122f8fc2e2 But that is about backup restores, not verifies. Overall, I don't see any magical performance "fix" coming...
  10. Proxmox CEPH performance

    The v300 is MLC NAND while the QVO are QLC. MLC is way more durable and provides better write performance than QLC NAND in general.
  11. PBS scaling out storage

    Ah, I understand. Well, building multiple smaller servers that provide NFS shares may be a more appropriate solution then. Or, as long as it's feasible, upgrading the existing HDDs to bigger ones. Why? ZFS can handle dozens of disks properly, and with new features like dRAID even rebuilds can be...
  12. PBS scaling out storage

    You can buy and deploy those JBODs on demand, no need to keep empty cabinets around. If you are fine with multiple data sources (one NFS share per datastore), you have lots of additional possibilities anyway. It sounded like you really want to have one single machine serve all your datastores, at...
  13. PBS scaling out storage

    Growing a ZFS pool sounds like a good solution to me, for quite some time. One can add a hell of a lot of disks using some JBODs. And nowadays there are pretty large HDDs, too. If that doesn't fit, building a Ceph cluster could work for further scaling, but if you expect to reach 20TB in like 2-3...
  14. Considering Proxmox for hosting - What are your thoughts

    I'd suggest going with 5 nodes rather than 4, as you can only lose a single node before things get dangerous; having 5 nodes allows you to lose 2 nodes while still having a stable cluster. Your NICs are only capable of 10G; considering your powerful NVMe drives, you'd want a more powerful...
  15. Type of HDD for near zero budget implementation

    The Toshiba N300 has never let me down for such tasks. It is built for 24/7 use and quite a well-performing HDD in general. As your storage will be "always on", ZFS and PBS will do regular health checks on the data, so even in the long term (whatever timeframe that may be?) the data should be safe. Having ECC RAM...
  16. [SOLVED] Recommendation small Ceph setup

    Better to use 2 of the SAS HDDs for the OS instead of the consumer SSDs, as those would wear out pretty quickly. I'd use 2 Ceph pools, one using the SM883s as those are proper datacenter drives, and another pool using the EVOs. Just be aware that the EVOs may not live very long, depending on your write...
  17. Proxmox cluster with Ceph: correct procedure in case of a cluster failure

    2 of 3 machines failing at the same time - i.e. an actual hardware defect - is unlikely. What is more likely to happen: you have one node offline for maintenance / updates and possibly not in an operational state because of that, and during this maintenance work another node suddenly fails due to a hardware defect...
  18. [SOLVED] All nodes with VMs crash during backup task to Proxmox Backup Server

    Had this issue twice in the past on a standalone node, but not in the last few PVE versions, so updating your nodes may help. I don't remember the exact versions where this happened, though.
  19. Does the graphical installer support advanced ZFS setups for VDEVs and SPAREs?

    The installer is pretty basic compared to what ZFS can offer. Most of the time you have 2 separate boot drives and choose ZFS RAID1 in the installer. Settings like compression and ashift can be selected right away. More complex setups must be done via the CLI, like adding spares (see the zpool sketch after this list). You can add a...
  20. zfs 2.1 roadmap

    *sad ZFS noises* :D If you have the infrastructure to run Ceph in a serious manner, that's a great alternative though! @t.lamprecht Just booted into 5.13 on my workstation, running PVE & PBS together. So far, nothing explodes. :D ZFS also allows me to upgrade my pools to use the new 2.1 dRAID (a dRAID creation sketch follows after this list):
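
Sketches for some of the results above

For result 1, a minimal sketch of a PBS datastore dataset using the settings mentioned in that post; the pool and dataset names (tank, tank/datastore) are assumptions, only recordsize=4M and zstd compression come from the post.

    # Hypothetical pool/dataset names; recordsize=4M and zstd are from the post.
    zfs create tank/datastore
    zfs set recordsize=4M compression=zstd tank/datastore
    # Note: a recordsize above 1M may require raising the zfs_max_recordsize
    # module parameter, depending on the OpenZFS version in use.
    zfs get compressratio tank/datastore

The large recordsize mainly fits PBS because its chunk files are fairly large, mostly sequentially written objects.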
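
For result 2, a sketch of setting the controller and disk options via the PVE CLI; the VM ID 100, the storage name and the volume name are placeholders.

    # VM ID 100 and the volume name are placeholders; adjust to your setup.
    qm set 100 --scsihw virtio-scsi-single
    qm set 100 --scsi0 rbd-pool:vm-100-disk-0,iothread=1,cache=writeback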
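
For result 3, a sketch of the quota idea; 3200G is a placeholder for roughly 80% of a 4T pool, following the post's "20% buffer" for ZFS & OS.

    # Cap the datastore dataset so ~20% of the pool stays free for ZFS & OS.
    zfs set quota=3200G rpool/datastore
    zfs get quota,used,available rpool/datastore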
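
For result 4, a generic fio run as a starting point; this is not the exact command from the linked Proxmox benchmark paper, and the test file path, size and runtime are placeholders.

    # 4k random-write test with direct, synchronous IO; adjust path/size/runtime.
    fio --name=randwrite --ioengine=libaio --direct=1 --sync=1 \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --size=4G --runtime=60 --time_based --filename=/root/fio-test.file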
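
For result 19, a sketch of adding a hot spare after installation; rpool is the installer's default pool name, the disk path is a placeholder.

    # Attach a hot spare to the pool created by the installer.
    zpool add rpool spare /dev/disk/by-id/ata-EXAMPLEDISK_SERIAL
    zpool status rpool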
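
For result 20, a sketch of creating a dRAID vdev (OpenZFS 2.1 or newer); the pool name and disks are placeholders, and the layout (single parity, 2 data disks per group, 7 children, 1 distributed spare) is only an example.

    # draid1:2d:7c:1s = 1 parity, 2 data per group, 7 children, 1 distributed spare.
    zpool create tank draid1:2d:7c:1s \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
    zpool status tank

The distributed spare is what gives dRAID its fast rebuilds, since all remaining disks participate in the resilver.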