Search results for query: ZFS QLC

  1. I

    Homeserver Rebuild

    Nope. Disabled SWAP or ZRAM is better, sure, but still far from a reason for an SSD to be killed. What we actually have here is a strange reversal of the burden of proof. You claim, without justification, that ZFS causes wear-out. It's not me who has to justify to you why that's a myth; you have to...
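
    Whether a given SSD is actually wearing out can be checked directly instead of argued about; a minimal sketch using smartctl (from smartmontools), with /dev/sda and /dev/nvme0 as placeholder devices:

    ```bash
    # SATA SSD: wear and total-written attributes (names vary by vendor)
    smartctl -A /dev/sda | grep -Ei 'wear|total.*written'

    # NVMe SSD: the health log reports "Percentage Used" and "Data Units Written"
    smartctl -a /dev/nvme0 | grep -Ei 'percentage used|data units written'
    ```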
  2. B

    IO Backpressure, my mistake

    The BX500 are really awful: bottom of the barrel for Proxmox, together with every QLC SSD without a DRAM cache. If you don't want to switch to enterprise SSDs, you could at least use better consumer SSDs. As a rule, "you get what you pay for" applies here, so the cheaper...
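
    The SLC-cache cliff on DRAM-less QLC drives is easy to demonstrate; a hedged sketch with fio, assuming a scratch file on the drive under test and a size well beyond the cache:

    ```bash
    # Sustained sequential write; watch the bandwidth collapse once the SLC cache is full
    fio --name=sustained --filename=/mnt/test/fio-file --rw=write --bs=1M \
        --size=100G --direct=1 --ioengine=libaio --eta-newline=10s
    ```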
  3. leesteken

    ZFS Storage Inactive Since Reboot

    With SSDs without PLP, it's common for data to be lost on an unexpected power loss, as they shuffle data around (like trim and flushing the SLC cache to TLC/QLC flash) all the time. This is not specific to ZFS but can happen with all filesystems. I don't know why it would happen with an HDD, but maybe...
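
    The point about PLP shows up most clearly under sync writes, which ZFS issues frequently; a sketch with fio (the test path is illustrative), where drives without PLP typically manage only a few hundred IOPS:

    ```bash
    # 4k random writes with an fsync after every write, similar to ZFS sync-write behavior
    fio --name=syncwrite --filename=/tank/fio-sync --rw=randwrite --bs=4k \
        --size=1G --fsync=1 --direct=1 --runtime=60 --time_based
    ```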
  4. G

    Server-Disk I/O delay 100% during cloning and backup

    If these are Samsung QVO, they are not suitable at all. QLC drives write very slowly once their SLC cache is full (which is different from the DRAM cache). There are plenty of topics on the forum about it, mainly with ZFS, because it exposes the slowness faster.
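
    The post-cache slowdown can be watched live while a clone or backup runs; a minimal sketch, with rpool as a placeholder pool name:

    ```bash
    # Per-vdev throughput and average latency, refreshed every second
    zpool iostat -v -l rpool 1
    ```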
  5. L

    First backup of a single vm, to PBS runs VERY VERY slowly

    I'll try to be brief: I scratched all storage in my PVE and re-installed with a dedicated 500GB SATA SSD as an XFS disk, as you suggested (thanks). Then I defined the 2 TB NVMe drive as local-xfs, used as VM and LXC storage. The second 500GB SATA SSD is empty and awaiting use as future needs arise...
  6. Max Carrara

    First backup of a single vm, to PBS runs VERY VERY slowly

    You're welcome! Yeah these things can be a bit (actually, quite) annoying—especially in the case of SSDs. Sometimes these issues show up even when you think you've ruled every problem out through extensive testing and benchmarking. SSDs with a fast SLC cell cache and slow QLC cells otherwise are...
  7. G

    Hardware advice (or "questionable Proxmox performance on nice box")

    Rack server CPUs are SLOW. https://www.cpu-monkey.com/en/compare_cpu-intel_core_i7_4790k-vs-intel_xeon_gold_6140 This is what you will get. Basically, what counts is the speed per single core. Your CPU may run a lot of VMs and it will NOT SLOW DOWN. But it won't be fast. Another issue I've seen...
  8. K

    Hardware advice (or "questionable Proxmox performance on nice box")

    > If I started all over again (which I think your solution implies) and did separate zfs pools, mightn't that just spare the users in groups 2 and 3 from the disruption if I did a restore of something in group 1? That would be another net benefit, yes. Isolating different groups of users to...
  9. leesteken

    ZFS device fault

    It's most likely SMR if it's an HDD (though it might be CMR), and it might be TLC or QLC if it's an SSD. Either way, it's cheap and probably not suitable for ZFS or any other CoW filesystem. Please show the output of zpool status before and/or after a scrub. It's not just you, but posts like these make...
  10. leesteken

    ZFS device fault

    What kind of fault? What does zpool status actually report (in CODE-tags)? It really depends on read, write, or cksum. If you are using QLC or SMR drives, then please search the forum for the issues they cause with ZFS. Even brand new SSDs can be terrible for use with ZFS and can also be broken (or...
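
    For reference, the commands being asked for, with tank as a placeholder pool name:

    ```bash
    zpool status -v tank   # per-device read/write/cksum error counters
    zpool scrub tank       # start a scrub, then re-run the status command
    ```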
  11. B

    ZFS SSD High IO Wait

    The theory was double redundancy for hardware failure, but the IO performance is such a punishment that I would be better off moving to 2 separate mirrors. Noted; I started with code tags, then second-guessed myself and changed them to quotes. Back to the drawing board to figure out the best way forward...
  12. leesteken

    ZFS SSD High IO Wait

    RAIDz2 is not at all like hardware RAID6 (with BBU). Since you only have 4 drives, why not change it to a stripe of mirrors (which is like RAID10) and improve the IOPS a lot? However, BX500 drives are a poor choice with ZFS due to the QLC flash and might give you write errors due to...
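
    The suggested stripe of mirrors would be created roughly like this; a sketch with placeholder device names, and note it requires destroying and rebuilding the existing pool from backup:

    ```bash
    # Four drives as two mirrored pairs, striped (RAID10-like): far better IOPS than RAIDz2
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    ```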
  13. news

    Proxmox Homeserver, ZFS, AMD Mainboard B550, AMD Ryzen 5000 and 3000 CPU, SSD, HDD, NVMe 4.0 x4

    Without a GPU the mainboard won't run, and with a regular Ryzen APU you can't use ECC DDR4 RAM; for that you need the Ryzen 5000 PRO variants. Hence also my point that you could use 5x NVMe PCIe 4.0, if it weren't for the waste heat of the drives...
  14. ugf

    Proxmox Homeserver, ZFS, AMD Mainboard B550, AMD Ryzen 5000 and 3000 CPU, SSD, HDD, NVMe 4.0 x4

    What do you need the GPU for? You won't really get far with the iGPU (AI), and AFAIK iGPU passthrough was difficult, or practically impossible, anyway (though my knowledge is a few years old by now). A 5800X/5900X can already be found on sale for under 300 EUR, if power consumption doesn't matter. The CPU plays...
  15. I

    NUC + Crucial P5 Plus NVMe overheating → need advice for reliable 4TB SATA SSD for Proxmox

    Hi everyone, I’ve built a Proxmox server on an Intel NUC 12 (i7-12650H, 10 cores / 16 threads) with 64 GB DDR4 Crucial 3200 MHz RAM. For storage I currently use: an NVMe Crucial P5 Plus 2 TB (ZFS, small/light VMs), a SATA Samsung 860 QVO 1 TB (main VMs), and a USB 1 TB drive (used for occasional backups)...
  16. E

    Poor NFS Performance To TrueNAS Share

    Well damn, I had sort of thought storage devices might be the cause and not so much the network, but I wanted to collect some additional intel. Would the Samsung 970 EVOs be appropriate replacements? I'm not looking for the most top-end of SSDs (I have a budget to consider), but I also would like...
  17. G

    Poor NFS Performance To TrueNAS Share

    :eek: Crucial BX500 are the slowest disks you can find; they write data slower than an HDD beyond the first few GBs, because they use QLC flash. Running ZFS on top of these is the best way to test the worst case. EDIT: I don't see a network problem, as iPerf shows 9.3 Gbit/s for your 10Gb network, as expected.
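
    The iPerf result referred to above comes from a test like the following; a sketch assuming iperf3 and a placeholder server address:

    ```bash
    iperf3 -s                      # on the TrueNAS host
    iperf3 -c 192.168.1.10 -t 30   # on the PVE host; ~9.3 Gbit/s clears the network
    ```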
  18. A

    Proxmox VMs freezing up with high IO delay at exactly the same time every day for exactly 5 mins

    The ZFS pool originally had 5 x WD Blue SSDs (WDS100T3B0A-00AXR0). I replaced 2 of them with 2 x WD Red SSDs (WDS100T1R0A). I've googled them and I don't see anything about them having QLC flash memory, but I can't tell for sure; it's too confusing. I guess I could try putting in 2 x Kingston...
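
    Listing the exact drive models makes it easier to look up their flash type in vendor spec sheets; a minimal sketch (column output varies by distro):

    ```bash
    lsblk -d -o NAME,MODEL,SIZE,ROTA   # ROTA=0 means non-rotational (SSD)
    smartctl -i /dev/sda               # prints the full model and firmware strings
    ```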
  19. leesteken

    Proxmox VMs freezing up with high IO delay at exactly the same time every day for exactly 5 mins

    What drives exactly? ZFS with QLC flash memory drives will slow down to speeds below old rotating HDDs, and people refuse to believe it until they experience it themselves; even then it takes some convincing (which is no fun for either party). If this is the case, then search for QLC on this...
  20. leesteken

    No "raid1" option in ZFS?

    Search the forum for QLC and you will find that ZFS does not work well with those kinds of drives. Other filesystems also have issues on sustained writes, but ZFS with sync writes and write amplification always runs into trouble. Everybody who buys QLC drives creates a...
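
    Write amplification can be estimated on drives that expose both host and NAND write counters; a hedged sketch with smartctl, noting that the attribute names are vendor-specific and not all drives report them:

    ```bash
    # The ratio of NAND writes to host writes approximates the write amplification factor
    smartctl -A /dev/sda | grep -Ei 'host.*writes|nand.*writes|total_lbas_written'
    ```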