Search results for query: consumer grade ssd

  1. E

    RAID5 with LVM

    They literally go out of their way not to support it, ever: https://forum.proxmox.com/threads/mdraid-o_direct.156036/#post-713584 There is literally no sensible selection of 2280 "enterprise" SSDs anyway to use in consumer hardware.
  2. LnxBil

    RAID5 with LVM

    There are also SATA enterprise SSDs and they are not as costly as the SAS counterpart. There is a software raid, it is called ZFS ;)
  3. M

    RAID5 with LVM

    Yes, I've seen this. I need to install Debian, and then Proxmox on top of it. It's for home use: Yunohost (blog, cloud), OPNsense/pfSense, HomeAssistant. I think it could be an option to add to the installer: Software RAID (with a disclaimer). At home, it's reasonable to experiment, but...
  4. LnxBil

    RAID5 with LVM

    Also note that you cannot use software RAID with mdadm in the PVE installer. You need to use the Debian installer and install PVE on top if you want to use it for the OS. Sure, but we're talking about three-vs-two, not anything-else-vs-one. Consumer SSDs in any enterprise environment are slow...
  5. M

    RAID5 with LVM

    mdadm, noted. Software RAID with three consumer-grade SSDs, isn't that better than only one consumer-grade SSD?
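The mdadm route discussed above can be sketched as follows. This is a minimal, hedged example, not a recommendation (note the warnings elsewhere in the thread about single-parity sets on consumer drives); the device names are hypothetical and must be adjusted to the actual hardware.

```shell
# Create a 3-disk RAID5 array with mdadm (hypothetical devices /dev/sdb..sdd).
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync, then persist the array definition so it
# assembles at boot.
cat /proc/mdstat
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

As noted in the thread, this has to be done from a Debian install (with PVE added on top); the PVE installer itself offers no mdadm option.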
  6. E

    Help with ZFS Mirror

    I actually advise people mostly NOT to use ZFS for most (home) use cases. It is certainly true you can probably get better bang for the buck than you did for the P3, but the requirements the Proxmox official docs would impose on SSDs are absurd for home users, e.g. PLP with high TBW. In practical...
  7. Z

    Datastore synced with Rclone broken?

    Oh yea I failed to mention I ran it afterwards without that flag and everything was marked as corrupt. It's definitely been synced successfully, at least according to RClone. I wouldn't have been surprised by corrupted chunks, since synced data is stored on a single consumer-grade SSD (albeit...
  8. Z

    Test Results for Building VM on Target Storage NFS, ISCSI, SMB, LOCAL SSD, SLOG and NO SLOG.

    **Installing Ubuntu Server 22.04 VM: Performance Test Results**

    ### **Proxmox Host Specifications**
    - **Proxmox Host**: Dell R220
    - **Memory**: 32GB DDR3 RAM
    - **Proxmox Storage**: 480GB consumer SSD (LVM-thin, boot, and LVM-thin all on the same drive)
    - **Network**: 10G NIC with LC fiber...
  9. G

    Why Does ZFS Hate my Server

    ...if you care about the data. If you're using SSDs, this is an older server and the Dell SSDs in that generation were not very fast; consumer-grade SSDs are also hit and miss. Again, EXT4 may not be syncing your data to disk right away and you're really testing the throughput of the SATA bus...
  10. A

    RAID5 with LVM

    Yes. Not quite. You need to use mdadm to make the underlying RAID. Don't use a single-parity volume set, ESPECIALLY with consumer-grade drives. You're better off making a single mirror and using the third drive for other purposes.
  11. M

    RAID5 with LVM

    ...at some point in the configuration process: "lvcreate -n grappe1 --type raid5 -L 10G -i 4 vg1". ZFS can do RAID5-type setups, but I'm using consumer-grade SSDs, and this is not recommended (first of all a performance issue, with big "IO delay"). The PC is an HP 400 G1, with an i7 4790, 16GB of RAM, and three 250GB SSDs...
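For reference, the quoted `lvcreate` call can be reconstructed as a full LVM raid5 setup. A detail worth flagging: `-i` counts data stripes only, so with three disks a raid5 LV takes `-i 2` (2 data + 1 parity), not `-i 4`. Device names below are hypothetical; `grappe1` and `vg1` come from the quoted command.

```shell
# Sketch: LVM raid5 logical volume across three PVs (hypothetical devices).
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg1 /dev/sdb /dev/sdc /dev/sdd

# 3 disks => 2 data stripes + 1 parity, hence -i 2.
lvcreate --type raid5 -i 2 -L 10G -n grappe1 vg1

# Verify the segment type and sync status.
lvs -a -o name,segtype,sync_percent vg1
```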
  12. A

    RAID 1 and CEPH

    Remote hands are quite adept at following instructions, but that can only be as successful as your hardware is at being identifiable. Most enterprise-level servers have ways for you to identify and turn on ID lights. Unless you're using a PC with onboard SATA, your SAS HBA has means to blink...
  13. S

    Install Ceph on dell PowerEdge 720 with perc

    For those considering using consumer-grade disks in the hope that the H730 Perc RAID buffer will save them: Don't. Here is our story: We built our Ceph cluster on top of about 40 QVO 860 drives (we had them already and were not aware of the PLP feature). Initially we went with JBOD and got awful performance...
  14. S

    Storage Configuration Advice

    Hello! I am planning on moving my existing TrueNAS Scale server to Proxmox (possibly virtualizing TrueNAS via passing in HBA, haven't decided) to get a better experience virtualizing workloads. My needs are currently pretty classic homelab stuff; Plex, *arr stack, Home Assistant, Frigate...
  15. L

    SSD Drives

    FWIW, I have been running three proxmox nodes for over a year, on consumer grade SSDs (Team group mp 33 and mp 44 drives). I don't run a cluster, and the drives are mirrored, using ZFS and running VMs. I have the corosync, pve-ha-crm, and pve-ha-lrm services disabled. My drive wear out on two...
  16. E

    SSD wearout at 99%

    I believe it's a WD RED for NAS [1] with 600 TBW. That PVE handles flushing its "virtual cluster filesystem" terribly when it comes to flushing it onto SSD has nothing to do with being a "hypervisor". If PVE were not "desktop class" grade, it would be using something else than smartmontools in...
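Several snippets above discuss TBW budgets and wearout percentages. A hedged sketch of how to read those numbers with smartctl (from smartmontools, which PVE itself relies on); device paths are hypothetical, and SATA attribute names vary by vendor.

```shell
# SATA SSD: vendor-specific attributes; common names for wear and
# total writes (hypothetical device /dev/sda).
smartctl -A /dev/sda | grep -Ei 'wear|percent|total_lbas_written'

# NVMe SSD: the standard health log reports "Percentage Used" and
# "Data Units Written" (one unit = 512,000 bytes).
smartctl -a /dev/nvme0 | grep -Ei 'percentage used|data units written'
```

Comparing "Data Units Written" against the drive's rated TBW gives a rough sense of how much of the endurance budget PVE's constant small writes have consumed.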
  17. G

    Proxmox VE 8.2.2 - High IO delay

    Indeed, storing vDisks on ZFS requires real datacenter flash disks, because ZFS has massive write amplification and because ZFS is slow on consumer flash drives, as they can't guarantee fast writes. It's not a few posts about these facts, it's daily posts reminding people of the facts. Burned/failed/replaced...
  18. F

    Hetzner installimage proxmox + 4 diff drives

    Hi, yes, I meant subscription, the community 1-CPU one, to support Proxmox with a little pocket change, so to speak. The 2x512GB are NVMe/SSD, the 8TB are HDD/mechanical (Hetzner EX62 server). I also got 2 different IPv4 addresses that I can use in some good way
  19. W

    Ceph: Unusable random read/write performance

    Currently running a 3 node cluster with Ceph, suffering from unusably bad random read/write performance. Each node is running 2x 22 core Xeon chips with 256GB RAM. Ceph network is 10G, MTU 9000 - I have verified this is being used correctly. Currently running 2 SATA SSD OSDs per node...
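For reproducing random read/write complaints like the one above, a common approach is a short fio run against the Ceph-backed virtual disk from inside a guest. This is a sketch under assumptions: fio must be installed, and the test file path is hypothetical.

```shell
# 4k random-write benchmark with direct I/O, bypassing the page cache
# (hypothetical test file; delete it afterwards).
fio --name=randwrite --filename=/tmp/fio.test --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based --group_reporting
```

On consumer SATA SSD OSDs without PLP, sync-heavy 4k workloads like this are typically where the numbers collapse, which matches the pattern reported across these threads.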
  20. Maximiliano

    new server build; hardware compability

    This is actually one setup that I would recommend, two SSDs mirrored with ZFS. Enterprise disks have better performance and lower chances of failing. Eventually, they will stop working like any storage solution, the main question is when, and hence we recommend redundancy on multiple levels. I...
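The recommended setup above (two SSDs mirrored with ZFS) can be sketched in one command. Pool and device names are hypothetical; stable `/dev/disk/by-id` paths are generally preferred over `/dev/sdX` so the pool survives device renumbering.

```shell
# Two-disk ZFS mirror (hypothetical by-id paths; ashift=12 for 4K sectors).
zpool create -o ashift=12 rpool mirror \
    /dev/disk/by-id/ata-SSD_SERIAL_A \
    /dev/disk/by-id/ata-SSD_SERIAL_B

# Confirm both sides of the mirror are ONLINE.
zpool status rpool
```

With a mirror, either disk can fail without data loss, which is the "redundancy on multiple levels" the post is pointing at.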