Recent content by rungekutta

  1. SSD wear

    Yes. But the above still applies - it's pointless to apply it twice.
  2. When proxmox is managed by ups...

    I can’t remember the details and don’t have it in front of me, but I thought NUT could be configured to continue with shutdown of the UPS (and therefore cut power to the server) once beyond a point of “no return”, even if power then comes back again - in order to avoid exactly this problem...
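
The mechanism being half-remembered here is worth sketching: upsmon triggers the OS shutdown on low battery, and a flag file tells the halt scripts to order the UPS to cut its own output power, so even if mains returns mid-shutdown the UPS power-cycles the load and the server reboots cleanly. A minimal upsmon.conf sketch (the UPS name, user, and password are placeholders, not from the post):

```
# /etc/nut/upsmon.conf (sketch; "myups" and the credentials are placeholders)
MONITOR myups@localhost 1 upsmon_user secretpass master

# Command run when the UPS reaches low battery while on mains failure
SHUTDOWNCMD "/sbin/shutdown -h +0"

# If this flag file exists at halt time, the shutdown scripts call
# `upsdrvctl shutdown`, telling the UPS to kill its own output power
POWERDOWNFLAG /etc/killpower
```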
  3. TrueNAS Core (ZFS) as Backing Storage for Proxmox: How to Set Up SSD Pool for Best Performance?

    It's possible to do shared storage over iSCSI - i.e. multiple Proxmox nodes on top of one big iSCSI LUN on your NAS - but it requires faffing around with LVM on top of iSCSI, and you lose snapshots and other features. NFS is much more straightforward; I would recommend this instead. Do you have...
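
For anyone taking the NFS route, the storage can be added from the GUI or with pvesm; a quick configuration sketch (the storage ID, server IP, and export path below are made up for illustration):

```
# Add a TrueNAS NFS export as Proxmox storage (all values are placeholders)
pvesm add nfs truenas-nfs \
    --server 192.168.1.50 \
    --export /mnt/tank/proxmox \
    --content images,rootdir

# Check that the new storage is listed and active
pvesm status
```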
  4. TrueNAS Core (ZFS) as Backing Storage for Proxmox: How to Set Up SSD Pool for Best Performance?

    Agreed, iSCSI provides the best results (vs NFS) in my own experience, and I think that’s also common wisdom here and on other forums. Probably for a multitude of reasons, including that NFS invariably leads to sync writes, which then require a very fast SLOG to approach async performance...
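
For anyone wanting to see the sync-write behaviour on their own pool, standard ZFS tooling shows it (the pool name "tank" is an assumption):

```
# sync=standard honours O_SYNC requests, =always forces every write
# sync, =disabled treats all writes as async
zfs get sync tank

# Per-vdev I/O stats every 5 seconds; heavy writes landing on a
# dedicated log vdev confirm the workload really is sync-bound
zpool iostat -v tank 5
```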
  5. SSD wear

    As mentioned already, as awesome as ZFS is, those features come at a cost, including write amplification and hardware requirements if you’re going to make it fly. And some of those features, for example checksumming and compression, are even pointless if applied twice on top of each other, while...
  6. SSD wear

    PS. I used to run OPNsense but don't any more - check whether you're logging stats with NetFlow (heavy writes). You can also configure OPNsense to keep /var in RAM, which of course means log files won't survive a reboot, but it should reduce writes significantly, particularly if something is...
  7. SSD wear

    Ok, good that you found the smoking gun. Just need to keep narrowing it down.
  8. SSD wear

    You want ashift=12 (4096-byte sectors):

        root@pve01:~# zdb -C | grep ashift
                    ashift: 12

    Recordsize:

        root@pve01:~# zfs get recordsize rpool
        NAME   PROPERTY    VALUE  SOURCE
        rpool  recordsize  128K   default

    Volblocksize: Proxmox GUI: Datacenter -> Storage -> [your pool] -> Block...
  9. SSD wear

    I can add another data point. Am running a 2 node cluster and with all the above-mentioned services enabled, with default settings. ~7TB written in approx 2.5 years 24/7. That’s ZFS on single disk (enterprise SSD). All VMs on separate storage, so provides some clues therefore on roughly what to...
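
Put in per-day terms (rough decimal-unit arithmetic, not figures from the post itself):

```python
# Convert ~7 TB written over ~2.5 years of 24/7 uptime into a daily rate
tb_written = 7
days = 2.5 * 365
gb_per_day = tb_written * 1000 / days  # decimal TB -> GB

print(f"{gb_per_day:.1f} GB/day")
```

That is, under 8 GB/day of housekeeping writes on this cluster.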
  10. SSD wear

    Interesting, and doing some simple maths - 774TB in 16 months becomes approx 1.6TB per day, nearly 70GB per hour, roughly 1GB per minute, nearly 20MB/s… IF it is evenly distributed like that, you should be able to run some diagnostics and try to figure out where the write pressure comes from...
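
The chain of unit conversions above can be double-checked in a few lines (16 months taken as roughly 487 days):

```python
# Sanity-check the write-rate arithmetic: 774 TB over 16 months
tb_total = 774
days = 16 * 30.44          # ~487 days
tb_per_day = tb_total / days
gb_per_hour = tb_per_day * 1000 / 24
gb_per_min = gb_per_hour / 60
mb_per_sec = tb_total * 1e6 / (days * 86400)

print(f"{tb_per_day:.1f} TB/day, {gb_per_hour:.0f} GB/h, "
      f"{gb_per_min:.2f} GB/min, {mb_per_sec:.1f} MB/s")
```

The figures in the post check out: about 1.6 TB/day, ~66 GB/h, ~1.1 GB/min, ~18 MB/s sustained.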
  11. Tips for Setting Up a Home Server with Proxmox - Guidance on Motherboard, Processor, and SATA Connectivity

    LSI 9300-8i cards (based on the SAS3008) are popular in the TrueNAS community. You could buy one new or cheap second-hand on eBay. Make sure it has the most recent "IT" firmware; tools are available on the Broadcom website.
  12. Tips for Setting Up a Home Server with Proxmox - Guidance on Motherboard, Processor, and SATA Connectivity

    My recommendation would be to start with the motherboard, and look for a server board if possible. SuperMicro and ASRock Rack are both highly reputable brands, Gigabyte does server boards too but I have no experience of them myself. Server grade motherboards are designed to run 24/7, typically...
  13. Hardware recommendations for a Proxmox server

    ASRock Rack X570D4U is a server motherboard with remote IPMI management and ECC support, but takes a regular Ryzen AM4 CPU. Good compromise. They’ve also more recently released B650D4U for latest gen AM5 Ryzen CPUs. Edit: to add, if you go down that route, check recommended RAM on ASRock Racks...
  14. Managing ipv6 prefix on Proxmox cluster

    Thanks for your suggestions. Any recommendation where I would start looking in order to get started with such a setup? Haven’t used Ansible or Puppet before.
  15. Managing ipv6 prefix on Proxmox cluster

    Hi @BobhWasatch and thanks for your answer! You misunderstand my question though. Maybe I wasn't clear. I have no problems at the gateway side of things. All devices on my network have a link-local address (generated by the device itself) in the fe80::/10 block, a ULA (unique local address) in...
