Search results for query: wearout consumer ssd disk

  1. G

    Deciding between Proxmox and VMWare

    It's a bad idea to use (especially in production) a non-datacenter SSD (i.e., not a datacenter or vendor datacenter SSD) behind a HW controller. The disk cache is disabled by the HW controller; this allows hotplug and forces the usage of the HW controller cache. The cache of a consumer SSD is mandatory to avoid wearing out too...
  2. J

    Proxmox storage architecture

    Good evening, Thank you for your answers. I agree that the storage is a bit limited now in relation to the rest of the system. This is because I started out with the idea to just host a "simple" hobby homelab. But along the way I kept thinking of other usages, such as Jellyfin, Nextcloud (whish...
  3. G

    Help with vm Win srv 2019 too slow

    The P440 has its own battery-backed write cache (if not present, performance is bad), so better set the VM disk cache to writeback. BBWC is enabled by default on the array/disk, but the HW controller disables the disk's own write cache, so consumer SSDs are slow in a HW RAID and wearout will be too fast. IMO, a consumer SSD like...
  4. W

    SSD wearout and rrdcache/pmxcfs commit interval

    Hello, a few weeks ago I installed Proxmox VE 7.3 (just updated to 7.4) on an HP ProDesk 600 G3 mini PC for my homelab (testing PVE because in the near future I intend to propose it to a business I collaborate with, obviously on serious hardware). Actually, for the homelab I installed Debian...
  5. B

    Bluestore SSD wear

    I don't fully understand the concept of "4-6 OSDs per WAL/DB disk". You mean that on a single SSD I should partition/assign only 4-6 OSDs? For this setup, I put 35 OSDs per WAL/DB SSD :) For other clusters, I put 6 OSDs per NVMe. I use two Intel Datacenter 960 GB SSDs that I had...
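    The disagreement above is about how many OSDs share one WAL/DB device. A quick sketch of what an even split implies for per-OSD DB space (the 894 GiB usable size for a 960 GB SSD is an illustrative assumption):

    ```python
    def db_partition_gib(ssd_gib: float, osds: int) -> float:
        """Even split of a shared WAL/DB SSD across its OSDs."""
        return ssd_gib / osds

    # A 960 GB (~894 GiB) DB SSD shared by 6 vs. 35 OSDs:
    print(round(db_partition_gib(894, 6), 1))   # 149.0 GiB per OSD
    print(round(db_partition_gib(894, 35), 1))  # 25.5 GiB per OSD
    ```

    The 35-OSD split leaves far less DB headroom per OSD, which is why the 4-6 guideline exists; it also concentrates many OSDs on a single failure domain.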
  6. Dunuin

    Minimizing SSD wearout question ?

    While disabling sync writes decreases the SSD wear, I highly recommend you set it back to the default. Disabling it might cause data loss, and in the worst case you will lose your entire pool (for example on the next power outage or kernel crash). Similar problem with increasing the transaction group...
  7. T

    VM's very slow

    This sounds like it could be one of the VMs, but you'd have to pin it down. Also, those Samsung QLC SSDs are likely to wear out, as consumer disks do. If you go to Datacentre > Node > Disks, you will see the wearout percentage for each SSD, provided the SSDs are not obscured behind a RAID controller. If...
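    The wearout percentage shown there reflects how much of the drive's rated endurance has been consumed. A minimal sketch of the arithmetic, assuming made-up example figures for bytes written and the drive's TBW rating:

    ```python
    def wearout_percent(bytes_written: int, rated_tbw_bytes: int) -> float:
        """Rough wearout estimate: share of the drive's rated endurance
        (Total Bytes Written) already consumed, capped at 100%."""
        return min(100.0, 100.0 * bytes_written / rated_tbw_bytes)

    # Hypothetical example: 150 TB written to a drive rated for 600 TBW.
    TB = 10**12
    print(round(wearout_percent(150 * TB, 600 * TB), 1))  # 25.0
    ```

    The actual GUI value comes from the drive's SMART data, so how it is reported varies by vendor; this is only the underlying idea.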
  8. D

    RAID with NVME and SATA SSD?

    I agree with the part that ZFS RAID over consumer disks is a bad idea. I tried it and the wearout is massive. So I went back to my "old" install of Proxmox over a Debian RAID 1 with mdadm, with an NVMe disk and a SATA one. The cool stuff with it is the "write-mostly" option ...
  9. LnxBil

    VM storage on ZFS pool vs ZFS dataset

    Unfortunately, especially for PVE itself, this poses a problem. PVE writes a lot, so you have to monitor your wearout. The Samsung Pro are better than any other consumer SSD, but still not as good as an enterprise SSD. You will also throw away 240 GB of space (PVE itself is very small) per...
  10. H

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Hi, I am Hans. I have been using Proxmox for quite some time now and have often found valuable help reading this community. Thanks a lot for so much valuable information! Today I have some questions with which I could not help myself, so I am posting my first post :-) I recently inherited a 3-node...
  11. Dunuin

    High SSD wear after a few days

    To quote the PVE ZFS Benchmark paper FAQ, page 8, again: ZFS has a high overhead, especially if you have DBs that do a lot of small sync writes. Then the virtualization, nested filesystems, etc. add overhead too, and this resulting write amplification is multiplying, not adding up, so you get...
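    The multiplicative point is worth making concrete. A back-of-envelope sketch, where the per-layer amplification factors are purely illustrative, not measured values:

    ```python
    from math import prod

    def effective_writes(logical_bytes: float, layer_factors: list[float]) -> float:
        """Write amplification across stacked layers multiplies:
        each layer rewrites the already-amplified output of the layer above it."""
        return logical_bytes * prod(layer_factors)

    # Illustrative factors: guest filesystem 2x, ZFS/zvol 3x, SSD internals 2x.
    gb = 1.0  # one logical gigabyte written by the database
    print(effective_writes(gb, [2.0, 3.0, 2.0]))  # 12.0 GB reach the NAND
    ```

    If the factors merely added, 1 GB would become 1 + 2 + 3 + 2 = 8 GB; because they multiply, it becomes 12 GB, and the gap widens quickly with more layers.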
  12. mira

    Nach Neustart Bootloader und VM defekt. NVMe defekt?

    What do the SMART values look like if you go in the GUI to Host -> Disks -> Show S.M.A.R.T. Values? What does the `Wearout` of the disk look like? In general, consumer SSDs wear out quite quickly, so depending on how much has been written and on the quality of the SSD, it can...
  13. Dunuin

    Deskmini X300 Proxmox Server Configuration

    Swappiness only controls swapping the read cache out from RAM to disk. RAM is volatile storage, so you will always lose everything in it on a power outage or kernel crash. Async writes will use the RAM as a write cache, but you will also lose everything that's cached in it. So async writes are always...
  14. D

    Given my modest hardware, what is the best HA configuration?

    Wow, thank you very much for sharing your story. Indeed it is easy to go down the rabbit hole. I thought my mechanical keyboard addiction was expensive... until I discovered home labs. Damn it, each hobby is more expensive than the previous one. My main limitation is that I want to keep everything on...
  15. I

    Proxmox VE management GUI doing a fair amount of writing to disk

    We know that. The problem is that there would not be any syncing every 5 seconds unless there were data that needed to be written out to disk every 5 seconds. We just find it unusual that, if it were just logging, some of the syncing, sometimes, did not happen after, say, 10 or 15 seconds. But...
  16. fabian

    Proxmox VE management GUI doing a fair amount of writing to disk

    That's probably just ZFS syncing out the async writes (the default maximum time for a ZFS transaction group is exactly 5 seconds). It is possible to adjust this (at the risk of losing more of that not-yet-written-out data in case of a crash/power loss/...). The thing is, while you are...
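    The trade-off behind raising the transaction group timeout can be sketched in one line of arithmetic: everything buffered since the last commit is at risk on a crash. The 5 s default is the one mentioned above; the write rate is a hypothetical figure:

    ```python
    def data_at_risk_mb(write_rate_mb_s: float, txg_timeout_s: float) -> float:
        """Upper bound on async data lost in a crash: everything buffered
        in RAM since the last transaction-group commit."""
        return write_rate_mb_s * txg_timeout_s

    # Default timeout of 5 s vs. a raised 30 s, at a hypothetical 20 MB/s:
    print(data_at_risk_mb(20.0, 5.0))   # 100.0 MB at risk
    print(data_at_risk_mb(20.0, 30.0))  # 600.0 MB at risk
    ```

    Fewer, larger commits reduce SSD wear, but the loss window grows proportionally, which is why the default is conservative.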
  17. I

    Help expanding root ZFS rpool partition (self.Proxmox)

    Thanks for your reply. And good point. I'd really like to prevent having to reinstall, since that caused a lot headaches the first time (i.e. a Sandybridge node causing the PVE installer to crash because of bugs in the Linux Intel graphics driver which will never be fixed). That's why I went...
  18. LnxBil

    [SOLVED] Slow ZFS performance

    No, your SSD is worn out, so that is a problem and it may fail soon. Monitor the SMART values and act accordingly: replace SSDs, or make sure that you do not have two identically worn-out disks that may fail in short order, destroying your data. You can go with consumer-grade SSDs, but for...
  19. N

    [SOLVED] Slow ZFS performance

    Hi @LnxBil, I am experiencing slow performance on a ZFS RAID1 pool: # zpool status pool: rpool state: ONLINE scan: scrub repaired 0B in 0 days 00:19:18 with 0 errors on Sun Jun 14 00:43:19 2020 config: NAME STATE READ WRITE CKSUM...
  20. N

    OS disk HDD or SSD?

    Thanks for the reply. Yes, I think I will simply buy some cheap spinners off eBay for the rpool since they are dirt cheap. Thanks for the tips regarding ZFS. I will be using a UPS and Intel SSDs with capacitors for the VM storage (not related to the rpool). But regarding the question: like...