Search results for query: consumer grade ssd

  1. T

    [SOLVED] Tips on Upgrading SSDs for my Proxmox lab

    If you are going to make this statement, please make it plain that it only applies if you are using ZFS. Consumer SSDs are fine and fast enough for multiple VMs in a small setup if you use ext4.
  2. W

    [SOLVED] Tips on Upgrading SSDs for my Proxmox lab

    Cheers, I will look into possible options. I see someone selling a brand new INTEL D3 S4610, 1.92TB for $180. Seems too good to be true, I suppose? The Samsung 970 EVO Plus 2TB is rated for 1200 TBW. Isn't that plenty for a home lab? I also just found a new INTEL D3 S4610, 1.92TB for $180, but it...
  3. S

    Greenhorn Storage Setup Question

    New to Proxmox and would appreciate advice on setting up storage. The system is for a homelab, non-production. The initial filesystem is set up as ZFS RAID1 consisting of 128 GB portions of 2 x SN850X 2TB SSDs. This will only be used for Proxmox and other bare metal assets. Goal is to keep it tight to...
  4. D

    Question about file system

    Thanks - I’ll look into Zram. I actually have two drives in the PC. I wanted to set it up as ZFS RAID1, but since they’re different sizes, PVE wouldn’t let me set it up as RAID1. I know other systems (a NAS, for example) will mirror but use the smaller of the two drives for the overall size...
  5. wahmed

    Question about file system

    ...by going from ZFS to ext4. Other than that, ZFS is not going to cause any more wear and tear than ext4. The faster wear of consumer grade SSDs is certainly a valid concern, but that can be easily mitigated. 1. It is writes that cause the most wear. Proxmox or the OS in general writes log...
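    The point above about writes being the main source of wear is easy to quantify on your own node. Below is a minimal sketch (mine, not from the thread) that samples the NVMe "Data Units Written" counter twice via smartctl and extrapolates a daily write rate; the device path, the one-hour window, and the requirement for smartctl 7+ with JSON output are assumptions.

    ```python
    #!/usr/bin/env python3
    # Sketch: estimate how much the host writes to an NVMe SSD per day.
    # Assumes smartctl >= 7 (JSON output) and a hypothetical device path.
    import json
    import subprocess
    import time

    DEVICE = "/dev/nvme0"          # hypothetical, adjust to your host
    BYTES_PER_DATA_UNIT = 512_000  # NVMe "data units" are 1000 * 512 bytes

    def data_units_written(dev: str) -> int:
        out = subprocess.run(
            ["smartctl", "-A", "--json", dev],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)["nvme_smart_health_information_log"]["data_units_written"]

    # Sample twice, one hour apart, then extrapolate to 24 hours.
    first = data_units_written(DEVICE)
    time.sleep(3600)
    second = data_units_written(DEVICE)

    gib_per_day = (second - first) * BYTES_PER_DATA_UNIT * 24 / 2**30
    print(f"Roughly {gib_per_day:.1f} GiB written per day on {DEVICE}")
    ```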
  6. D

    Question about file system

    ...At the time, I thought that was a good idea. After doing some more research over time, I'm reading a lot about the wear/tear on consumer grade SSD's, which is exactly what I have. I am concerned about the risk of failure of my current SSD and wondering if I should consider moving back to...
  7. wahmed

    cluster performance degradation

    ...as a journal drive you can mitigate that issue easily. Yes, you can use enterprise grade SSDs. But there is nothing wrong with using good consumer grade SSDs to add performance. Especially if you add 2 SSDs in a mirror to hold the DB/WAL, you can add performance without breaking the bank. Lexar NS100...
  8. UdoB

    Service "pmxcfs" is said to wear out the SSD through heavy write activity

    This issue, however, only affects cheap "consumer grade" SSDs, which are not suitable for serious use anyway. The explicit recommendation is "enterprise class" SSDs with "PLP", power-loss protection. And then two of those in a mirror :)
  9. L

    SSD Trim maintenance best practice for ext4/LVM and ZFS disks

    This is a home lab, nothing production critical. I have a small 3 node cluster, and I am using two consumer grade SSDs on each node: a Boot/OS disk with ext4/LVM and a VM/CT disk using ZFS, because I use Replication/HA for some critical VMs/CTs. Planning something like a weekly SSD TRIM maintenance cron...
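    For the weekly TRIM job being planned above, a minimal sketch (my own, not from the thread) could cover both layouts mentioned: `fstrim` for the ext4/LVM boot disk and `zpool trim` for the ZFS pool. The pool discovery via `zpool list` is an assumption, and `zpool set autotrim=on <pool>` is an alternative worth weighing.

    ```python
    #!/usr/bin/env python3
    # Sketch of a weekly TRIM job for an ext4/LVM boot disk plus a ZFS
    # VM/CT pool, intended to be run from a weekly cron entry.
    import subprocess

    # Trim every mounted filesystem that supports it (covers ext4/LVM).
    subprocess.run(["fstrim", "-av"], check=True)

    # Trim every imported ZFS pool (covers the VM/CT disk).
    pools = subprocess.run(
        ["zpool", "list", "-H", "-o", "name"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for pool in pools:
        subprocess.run(["zpool", "trim", pool], check=True)
    ```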
  10. R

    Please help me solve high IO delay

    Hi, I searched for the technology used in these drives; it's not that clear, but it seems to be QLC (http://www.madshrimps.be/articles/article/1001277/Patriot-P210-2TB-2.5-SSD-Review/2#axzz8ry0ZAeNQ) WITHOUT any DRAM. If you look at the performance test on that site...
  11. W

    Please help me solve high IO delay

    download = writing, AND THEN copying it elsewhere to a remote target = reading. Is the ZFS ARC the problem on consumer-grade SSDs, or the writing to the remote target ...
  12. D

    Please help me solve high IO delay

    Hello everyone, I’ve been facing a persistent issue for some time now, and I would greatly appreciate advice from more experienced members on how to resolve it. I’ve tried Googling and experimenting with different solutions, but so far, nothing has worked. The problem is the high I/O delay I...
  13. M

    Ceph and TBW with consumer grade SSD

    I have a 3-node Proxmox/Ceph cluster with consumer grade NVMe SSDs and it works fine; I use a dozen or so different VMs. I just checked my 1TB Intel 660p and Crucial P1 that I started using in 2019: one of them has 108 TB written, the other 126 TB. Basically that is less than 2/3 of their...
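    The endurance math behind the post above is worth making explicit. A small sketch follows; the 200 TBW rating assumed for a 1 TB Intel 660p / Crucial P1 and the roughly six years of service are my assumptions, since the post only says the writes are "less than 2/3" of the rating.

    ```python
    # Back-of-the-envelope SSD endurance math for the numbers quoted above.
    RATED_TBW = 200        # assumed endurance rating of a 1 TB Intel 660p, in TB
    written_tb = 126       # worst of the two drives in the post, TB written
    years_in_service = 6   # assumption: roughly 2019 to the time of the post

    used_fraction = written_tb / RATED_TBW
    tb_per_year = written_tb / years_in_service
    years_remaining = (RATED_TBW - written_tb) / tb_per_year

    print(f"Endurance used: {used_fraction:.0%}")            # ~63%
    print(f"Average write rate: {tb_per_year:.0f} TB/year")  # ~21 TB/year
    print(f"About {years_remaining:.1f} more years at that rate to hit the rating")
    ```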
  14. L

    Ceph and TBW with consumer grade SSD

    Looking for feedback from experienced Proxmox/Ceph users. I have a 3 node cluster; each node has a dedicated SSD (Samsung 870 EVO 500GB SATA) for Ceph, and there is a separate SSD for the Proxmox Boot/OS. The Ceph SSD is rated for 300 TBW. This is a brand new home-lab type of cluster; at this point, I...
  15. I

    Backup and Storage advice

    I would advise against using ZFS on consumer grade SSDs, as it will destroy them in no time! Proxmox writes an insane amount of logging data to the disk (10 to 15 GB per day), which apparently can't be reduced and kills your SSD quickly... Search for threads about SSD TBW data. Good luck!
  16. O

    SN850X + rsync

    ...and the data/images on a 4TB WD SN850X. The configuration and the images/snapshots are saved to a separate NAS (using a RAID) each day via rsync. The WD SN850X is a consumer grade SSD with 2GB of DRAM and a 2400 TBW rating. Do you know of any configuration options that extend the lifetime of the SSD? ty...
  17. J

    SMART/Health failure on Ceph install

    ...three 1TB Samsung SSD Pro NVMe drives were in degraded mode because the percentage used exceeded 100%. By now I know that running consumer grade SSDs is not advised for Ceph and that lifespan can be affected negatively. However, I would expect wearout to be fairly equally balanced over the...
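    To check whether wearout really is balanced across a node's OSD drives, a quick sketch like the following could compare the NVMe wear counters; the device list is hypothetical and smartctl 7+ with JSON output is assumed.

    ```python
    #!/usr/bin/env python3
    # Sketch: compare NVMe wearout across the drives of one Ceph node.
    import json
    import subprocess

    DEVICES = ["/dev/nvme0", "/dev/nvme1", "/dev/nvme2"]  # hypothetical paths

    for dev in DEVICES:
        out = subprocess.run(
            ["smartctl", "-A", "--json", dev],
            capture_output=True, text=True, check=True,
        ).stdout
        health = json.loads(out)["nvme_smart_health_information_log"]
        tb_written = health["data_units_written"] * 512_000 / 1e12
        print(f"{dev}: {health['percentage_used']}% endurance used, "
              f"{tb_written:.1f} TB written")
    ```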
  18. mattlach

    Replace Mirrored ZFS Boot Pool (rpool) with Smaller Devices

    Now I know what you are all thinking. IT CAN'T BE DONE. ZFS only allows growing pools, not shrinking them. But bear with me here. Background: I boot this server off of two mirrored NVMe drives. I got a fantastic deal on a set of 500GB Samsung 980 Pro drives when I was rebuilding this...
  19. E

    RAID5 with LVM

    This typically holds true for the old pieces with no DRAM (no HMB) and very slow NAND. It would work just fine on Gen4 NVMe with high endurance. ZFS will have worse performance than other filesystems because of its "features", on any storage. It's a vacuum cleaner.
  20. M

    RAID5 with LVM

    ZFS isn't recommended with consumer grade SSDs. I experimented and got high IO delay with them, as described in the documentation. It's for an at-home setup. The Dell R6x0 doesn't have a good WAF (Wife Acceptance Factor): it's costly, large and loud...