Search results

  1. SSD wear

    You want ashift 12 (4096 byte sectors):
    root@pve01:~# zdb -C | grep ashift
    ashift: 12
    Recordsize:
    root@pve01:~# zfs get recordsize rpool
    NAME   PROPERTY    VALUE  SOURCE
    rpool  recordsize  128K   default
    Volblocksize: Proxmox GUI: Datacenter -> Storage -> [your pool] -> Block...
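
    The snippet cuts off at volblocksize; for completeness, a hedged sketch of checking it from the CLI as well (the zvol name below is a made-up example):

    ```sh
    # Volblocksize is a per-zvol (VM disk) property, not a pool-wide one;
    # the zvol path is hypothetical:
    zfs get volblocksize rpool/data/vm-100-disk-0

    # Or list it for every zvol on the pool:
    zfs get -r -t volume volblocksize rpool
    ```
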
  2. SSD wear

    I can add another data point. I'm running a 2-node cluster with all the above-mentioned services enabled, at default settings: ~7TB written in approx 2.5 years of 24/7 operation. That's ZFS on a single disk (enterprise SSD). All VMs are on separate storage, so that provides some clues on roughly what to...
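
    To compare your own drive against data points like this, the lifetime-writes counters can be read with smartctl (device paths are examples; attribute names vary by vendor):

    ```sh
    apt install smartmontools

    # NVMe drives report "Data Units Written" (1 unit = 512,000 bytes):
    smartctl -a /dev/nvme0 | grep -i "data units written"

    # SATA SSDs usually expose Total_LBAs_Written or a vendor wear attribute:
    smartctl -a /dev/sda | grep -iE "lbas_written|wear"
    ```
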
  3. SSD wear

    Interesting, and doing some simple maths - 774TB in 16 months becomes approx 1.6TB per day, nearly 70GB per hour, roughly 1GB per minute, nearly 20MB/s… IF it is evenly distributed like that, you should be able to run some diagnostics and try to figure out where the write pressure comes from...
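
    A sketch of the kind of diagnostics meant here, to narrow down which pool, vdev, or process generates the writes (pool name rpool is an example):

    ```sh
    # Per-vdev write throughput, sampled every 5 seconds:
    zpool iostat -v rpool 5

    # Cumulative per-process I/O, showing only active writers:
    apt install iotop
    iotop -aoP
    ```
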
  4. Tips for Setting Up a Home Server with Proxmox - Guidance on Motherboard, Processor, and SATA Connectivity

    LSI 9300-8i cards (based on the SAS3008) are popular in the TrueNAS community. You could buy new or cheap second-hand on eBay. Make sure it has the most recent "IT" firmware; tools are available on the Broadcom website.
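
    For reference, a sketch of verifying the firmware with Broadcom's sas3flash utility (downloaded from their site; controller index 0 is an example). The firmware product string should read "IT" rather than "IR":

    ```sh
    # List all detected SAS3 controllers with firmware and BIOS versions:
    ./sas3flash -list

    # Or query one controller by index:
    ./sas3flash -c 0 -list
    ```
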
  5. Tips for Setting Up a Home Server with Proxmox - Guidance on Motherboard, Processor, and SATA Connectivity

    My recommendation would be to start with the motherboard, and look for a server board if possible. SuperMicro and ASRock Rack are both highly reputable brands; Gigabyte does server boards too, but I have no experience with them myself. Server-grade motherboards are designed to run 24/7, typically...
  6. Hardware recommendations for a Proxmox server

    ASRock Rack X570D4U is a server motherboard with remote IPMI management and ECC support, but takes a regular Ryzen AM4 CPU. Good compromise. They’ve also more recently released B650D4U for latest gen AM5 Ryzen CPUs. Edit: to add, if you go down that route, check recommended RAM on ASRock Racks...
  7. Managing ipv6 prefix on Proxmox cluster

    Thanks for your suggestions. Any recommendation where I would start looking in order to get started with such a setup? Haven’t used Ansible or Puppet before.
  8. Managing ipv6 prefix on Proxmox cluster

    Hi @BobhWasatch and thanks for your answer! You misunderstand my question though. Maybe I wasn't clear. I have no problems at the gateway side of things. All devices on my network have a link-local address (generated by the device itself) in the fe80::/10 block, a ULA (unique local address) in...
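
    For illustration, the address types described here can be listed on any Linux host (interface name eth0 is an example):

    ```sh
    ip -6 addr show dev eth0
    # fe80::/10 -> link-local (scope link), generated by the device itself
    # fc00::/7  -> ULA, stable inside the LAN, never routed to the internet
    # 2000::/3  -> GUA derived from the ISP-delegated prefix
    ```
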
  9. Managing ipv6 prefix on Proxmox cluster

    PS. note to mods - apologies if posted in the wrong forum, feel free to move to networking if that's where it belongs!
  10. Managing ipv6 prefix on Proxmox cluster

    Hi, I'm running Proxmox 8.0.4 on a two-node cluster plus a qdevice (RPi). Works well. My ISP recently enabled ipv6 with a dynamic /56 prefix, which also works well. The WAN gateway acquires the prefix from the ISP using WIDE-DHCPv6, and uses dnsmasq as the DHCPv4 server as well as SLAAC for ipv6. Dual-stack...
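
    A small sketch of verifying the delegated prefix and the SLAAC announcements in a setup like this (interface name vmbr0 is an example; rdisc6 ships in the ndisc6 package):

    ```sh
    # Show the global address(es) assigned out of the delegated /56:
    ip -6 addr show dev vmbr0 scope global

    # Solicit a router advertisement to see the announced prefix and lifetimes:
    apt install ndisc6
    rdisc6 vmbr0
    ```
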
  11. ZFS slow writes on Samsung PM893

    Interesting. I'm speculating here, but thinking it could boil down to the classic scenario of latency vs throughput. If your write operations are sync and therefore inherently dependent on the SSD completing the write, this always comes with a certain overhead irrespective of the amount of data...
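
    To make the latency-vs-throughput point concrete, a hedged example of measuring small sync writes with fio (target path and sizes are made up):

    ```sh
    # Every 4K write is followed by an fsync, so the SSD's sync-write
    # latency, not its sequential bandwidth, dominates the result:
    fio --name=sync4k --filename=/rpool/test/fiofile \
        --size=1G --bs=4k --rw=write --fsync=1 --ioengine=psync
    ```
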
  12. NFS share to VM

    1) how does your VM connect to the network? Generally TrueNAS doesn't know or care whether the client is a VM in a hypervisor or something running on bare metal. Check permissions in TrueNAS and ensure you've installed all the NFS client libraries in your VM. 2) connect directly from the VM i.e...
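
    A minimal sketch of point 1, assuming a Debian-based VM (server name and export path are hypothetical):

    ```sh
    # Install the NFS client libraries inside the VM:
    apt install nfs-common

    # Check which exports the TrueNAS box offers:
    showmount -e truenas.example.lan

    # Mount one of them (paths are examples):
    mount -t nfs truenas.example.lan:/mnt/tank/share /mnt/share
    ```
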
  13. Looking for advise: zfs should belong to Proxmox or Truenas?

    Minor correction. I don't think Emby will re-encode the file if the client is able to support it in its original format. In that case the file will just be streamed as-is. In other cases, the server will re-encode the file into something that the client can support (and this can be a killer...
  14. Looking for advise: zfs should belong to Proxmox or Truenas?

    If you are going to use TrueNAS, it wants low-level access to the disks, so you should pass them through, or ideally the whole controller. Some people do that by running the disks on a PCIe HBA which is then passed through to TrueNAS running in a VM. You may be able to pass through the...
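
    For reference, a sketch of passing a whole HBA through to a TrueNAS VM on Proxmox (PCI address and VM ID are examples; IOMMU must be enabled in BIOS and on the kernel command line):

    ```sh
    # Find the HBA's PCI address:
    lspci -nn | grep -i sas

    # Attach it to VM 100 (address is an example):
    qm set 100 -hostpci0 0000:01:00.0

    # Confirm it landed in the VM config:
    qm config 100 | grep hostpci
    ```
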
  15. Generic questions

    The 5900X has 12 physical cores with hyper-threading, which means it has 24 logical cores. It's a way to squeeze more efficiency out of the chip, but it won't perform like 24 real physical cores; it lands somewhere in between. You can assign as many cores in VMs as you'd...
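
    As an illustration, the CLI equivalent of assigning cores in the GUI (VM ID 100 is an example):

    ```sh
    # Give the VM 8 vCPUs; Proxmox schedules them onto the host's 24
    # logical cores, and moderate overcommit across VMs is normal:
    qm set 100 -sockets 1 -cores 8
    ```
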
  16. HomeLab Suggestions!!

    How about keeping the two machines you have, but setting them up in a cluster and spreading the VM load between them?
  17. From single-node install to cluster?

    PS I won't be using Ceph - I'll continue to use the same NAS and NFS even though it's a single point of failure.
  18. From single-node install to cluster?

    Ah, that’s great (and logical). So just to be very clear - I can keep all my current config including VMs/LXCs when I turn my current install into the first cluster node (by creating the cluster) - only subsequent nodes that join the cluster will get their config overwritten?
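
    A short sketch of the commands behind this (cluster name and IP are examples). Creating the cluster on the existing node preserves its guests; joining replaces the joining node's config, which is why new nodes must be empty:

    ```sh
    # On the existing node (VMs/LXCs are kept):
    pvecm create homelab

    # On each new, empty node (its /etc/pve config gets replaced):
    pvecm add 192.168.1.10    # IP of an existing cluster node

    # Verify quorum and membership:
    pvecm status
    ```
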
  19. From single-node install to cluster?

    Hi, first time poster, and new Proxmox user. So I just migrated from ESXi and in the process created/moved 14 VMs and LXCs while at the same time moving to new hardware (Ryzen 5950x / 128GB RAM). My data store remains as before on a separate TrueNAS (bare metal), accessed by Proxmox via NFS...