Search results for query: wearout consumer ssd disk

  1. J

    Disk constellation for DXP4800

I agree, especially since TrueNAS not only supports the Apps from the app store but also allows setting up containers with docker-compose files. Most applications in the self-hosted sphere already provide ones for copy/paste. Or, like you said: using Debian or another Linux distribution directly...
  2. J

    Disk constellation for DXP4800

Is the internal flash an SD (MMC) card? Then you won't be able to install Proxmox VE with the regular ISO image, see https://forum.proxmox.com/threads/unable-to-get-device-for-partition-1-on-device-dev-mmcblk0.42348/ In theory you could first install Debian stable and afterwards Proxmox VE (see...
  3. P

    New installation, hardware recommendation

Thanks a ton for such an amount of information. I'll go into it and post what I find. My interest in ZFS comes from threads like the ones I've read around here, but it is true that I have to find out how to make it work.
  4. J

    New installation, hardware recommendation

I would rethink this part. Unlike older ESX versions or pfSense/OPNsense, Proxmox VE is NOT designed to be run from a flash drive in a kind of read-only mode. The operating system writes a lot to the operating system drive (logging data, the configuration database of the Proxmox cluster file system...
  5. J

    LVM vs ZFS

Yes, if only for things like snapshots. Those are possible with LVM too, but there they cost performance if you keep them for too long. Plus the things Udo mentioned in his linked post (compression, scalability, etc.)...
  6. J

    Constant writing on my proxmox main drive

The logfiles will be rotated (meaning that old files will be removed) after some time, so your disk won't normally run out of space. This doesn't help with SSD wearout, though; for that you can go with these options: get used enterprise SSDs with power-loss protection. Rotating HDDs shouldn't...
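Wearout of the kind discussed above is easiest to judge if you record it over time. A minimal sketch that parses the "Percentage Used" field from saved NVMe `smartctl -a` output; the sample text below is invented for illustration:

```python
import re

# Invented sample of NVMe smartctl output; in practice you would read
# the real output of `smartctl -a /dev/nvme0` instead.
sample = """\
SMART/Health Information (NVMe Log 0x02)
Percentage Used:                    3%
Data Units Written:                 12,345,678
"""

def percentage_used(smart_text: str) -> int:
    """Extract the NVMe 'Percentage Used' wearout value from smartctl text."""
    m = re.search(r"Percentage Used:\s*(\d+)%", smart_text)
    if m is None:
        raise ValueError("'Percentage Used' field not found")
    return int(m.group(1))

print(percentage_used(sample))  # 3
```

Logging this value weekly makes it easy to see whether wearout is actually trending at a worrying rate or just crept up once after the initial installation.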
  7. G

    ZFS pool with mixed speed & size vdevs

What is the point? Install PVE on the HDD because no speed is required for PVE itself; rpool will be the slow zpool, with a second zpool for the whole SSD. Balancing data between the two disks is done by moving vDisks between the two storages. (Don't forget: ZFS eats / quickly wears out consumer-grade SSDs...)
  8. M

Cluster node failed for no reason, VMs got moved, but got no IPv4. How to debug?

Would love to, but there's only one NIC on the mainboard. It's a homelab setup using consumer hardware. There is also nothing indicating the cause of the outage; journalctl on both nodes, ernie & bert, looks okay. ernie (the remaining one): the failed connect on bert after reboot was...
  9. S

    Proxmox with Second Hand Enterprise SSDs

I had many problems related to VERY HIGH iowait on my Proxmox VE systems using a Crucial MX500, which is a consumer SSD (and one with very little DRAM, so once that fills up, performance drops like a rock). Now I got 2 x Intel DC S3610 1.6 TB SSDs which should be very good for VM storage or also...
  10. J

    How to configure homelab storage

Please note that there are several reports here that ZFS RAIDZ isn't good at providing performance. For performance with ZFS it's best to set up the devices as mirrors. For improving performance on HDDs it might be a good idea to set up two SSD partitions as a special device to improve the...
  11. Max Carrara

    [SOLVED] Performance comparison between ZFS and LVM

    That's an interesting way to test it, but keep the following things in mind: If there are any other IOPS going on on your system, they might affect how both caches behave. The best way to ensure that the ARC is definitely cleared is to export your pool first and then unload the ZFS kernel...
  12. E

    Moving from version 7 to 8 - what should I do differently?

    This is - essentially - an intentional anti-feature, but it's much worse in a cluster scenario: https://forum.proxmox.com/threads/proxmox-and-ssds.153914/#post-700255 ZFS is a filesystem that was never designed for SSDs, any copy-on-write filesystem will do poorly. I would use XFS on mdadm...
  13. leesteken

    Moving from version 7 to 8 - what should I do differently?

Proxmox can wear out consumer SSDs quickly (also depending on your VMs), but at 1% per month the drive will still last you eight years. Wear tends to increase more at the beginning because the pool is empty and you are writing new VMs to it. Give it some time before drawing conclusions. Yes, it has high write...
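The "1% per month still lasts eight years" estimate above is simple arithmetic; as a sketch, assuming wear stays roughly constant (which the post itself notes it usually does not early on):

```python
def months_until_worn(percent_per_month: float, already_worn: float = 0.0) -> float:
    """Months until 100% SMART wearout at a constant wear rate.
    A simplification: real wear is rarely perfectly linear."""
    return (100.0 - already_worn) / percent_per_month

months = months_until_worn(1.0)  # 1% per month -> 100 months
years = months / 12
print(f"{months:.0f} months ≈ {years:.1f} years")  # 100 months ≈ 8.3 years
```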
  14. J

    Moving from version 7 to 8 - what should I do differently?

Interesting side note - I have a new Proxmox box that I configured with a ZFS RAID1 mirror using new SSDs - consumer-class Samsung 870 EVOs - and, after maybe 1 month of use, SMART reports 1% wearout. That seems soon for any wearout. I've read that ZFS writes a lot to disk - would use of Debian...
  15. W

    Migrating to a smaller SSD

Hello, I currently have Proxmox running my home servers but have been doing a bit of storage maintenance. I upgraded my VM (ZFS mirrored pool) storage to 2 x consumer SSDs last year due to some crazy IO delays I was facing at the time, and for a while it worked well enough, but recently they...
  16. G

    Bad disk I/O performance with LSI SAS2008 controller

because they use the HW cache (that's why the disks' cache is disabled), plus many spinning disks and/or SSDs provided/recommended by the vendor, which are datacenter SSD drives. Consumer SSD drives like your Samsung 860 aren't designed for any HW RAID; even if its cache is enabled, missing the TRIM...
  17. E

    Consumer grade SSD's

    You might want to have a look at this thread here: https://forum.proxmox.com/threads/2-node-cluster-with-the-the-least-amount-of-clusterization-how.140434/#post-628788 Or just use cache vdev. If you have enough RAM on that system. See above. Have a look at 2024 consumer NVMes, find a...
  18. K

High data units written / SSD wearout in Proxmox

Hi everyone, happy new year :) I have begun to see a disturbing trend on both of my Proxmox VE nodes: the M.2 disks are wearing out rather fast. Both nodes are identical in terms of hardware and configuration. 6.2.16-12-pve, 2 x Samsung SSD 980 Pro 2TB (only one in use on each node for...
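To judge whether an NVMe drive like the Samsung 980 Pro above is really wearing fast, the raw SMART "Data Units Written" counter can be converted to terabytes. Per the NVMe specification, one data unit is 1000 sectors of 512 bytes, i.e. 512,000 bytes; the counter value below is a made-up example, not from the post:

```python
def data_units_to_tb(units: int) -> float:
    """Convert an NVMe 'Data Units Written' counter to decimal terabytes.
    One NVMe data unit = 1000 * 512 bytes = 512,000 bytes."""
    return units * 512_000 / 1e12

written_tb = data_units_to_tb(50_000_000)  # hypothetical counter value
print(f"{written_tb:.1f} TB written")       # 25.6 TB written
```

Comparing that figure against the drive's endurance rating (TBW) gives a more meaningful picture than the percentage alone.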
  19. Daniel-Doggy

    [SOLVED] Advice on configuration for Proxmox VE

I wouldn't say exorbitant, but given that it is normal for Proxmox VE to write 30 GB/day when idle (as I have read on the forum, and also the reason why consumer-grade disks are not recommended), and possibly having just one SSD for Proxmox with no redundancy, it is best to minimize the wear on...
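The 30 GB/day figure cited above can be put against a drive's endurance rating. As a sketch: the 600 TBW rating below is an assumed example (typical for a 1 TB consumer SSD), not a number from the post:

```python
def years_of_endurance(tbw_rating: float, gb_per_day: float) -> float:
    """Years until a drive's TBW rating is exhausted at a constant daily
    write volume (decimal units: 1 TB = 1000 GB)."""
    return tbw_rating * 1000 / gb_per_day / 365

# 600 TBW rating (assumed), 30 GB/day idle writes (figure from the post)
print(f"{years_of_endurance(600, 30):.1f} years")  # 54.8 years
```

Idle writes alone are therefore rarely the whole story; VM workloads and write amplification on top of them are what actually shorten consumer SSD lifetimes.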
  20. E

    Homelab Konzeptfragen

Why not? None of my PVE machines boots from an NVMe. All of them boot from "just" a SATA SSD (with the defaults PVE sets up during installation). Only my old HP DL 360 has a mirrored HW RAID of two 15k SAS disks as boot (they simply came with the used box). On that...