Recent content by rungekutta

  1. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    Pretty cool! But given TrueNAS's history of disregard for forward and backward compatibility, breaking changes, and lack of consistency across versions, I wonder if this is more trouble than it's worth? It would likely stop working again soon, or at best be a heavy maintenance burden to keep...
  2. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    Ah. Well. A 2-node cluster can work fine too; add a Raspberry Pi QDevice and the cluster keeps quorum even with one node down. Ceph is obviously not an option then, so it's either VM storage on a NAS with a single point of failure, or local storage with replication across nodes.
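
    A minimal sketch of the QDevice setup, assuming the Pi runs a Debian-based OS (the IP below is a placeholder):

        # On the Raspberry Pi (the external vote arbiter):
        apt install corosync-qnetd

        # On every Proxmox node:
        apt install corosync-qdevice

        # On one node, register the Pi as a QDevice:
        pvecm qdevice setup 192.168.1.50

        # Confirm the extra vote:
        pvecm status
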
  3. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    Yes, but less fun... ;-) It also makes Proxmox/OS updates scarier and leaves you more vulnerable to hardware problems, with all your eggs in one basket.
  4. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    Ceph is amazing, but it needs at least 3 (preferably 4) nodes and 10 or 25 GbE networking. One typical homelab pattern is to run all your nodes in a Proxmox/Ceph cluster and let that also be your VM storage, then layer one or several NAS VMs on top, each with its own attached additional storage, to...
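
    For a rough idea of the scale of the job, bootstrapping Ceph on such a cluster is only a handful of commands; a sketch with placeholder network and device names (details vary a bit by Proxmox version):

        # On each node: install the Ceph packages
        pveceph install

        # On the first node: initialise Ceph with a dedicated cluster network
        pveceph init --network 10.10.10.0/24

        # On each node: create a monitor and turn local disks into OSDs
        pveceph mon create
        pveceph osd create /dev/nvme1n1
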
  5. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    As you say, you could do both. Flash and ZFS on Proxmox for ultra-fast and resilient local VM storage; then a NAS VM which provides slow(er) storage over the network, and this could run TrueNAS if you like. Preferably with its own passed-through hardware (SAS/SATA controller or NVMe). And then look...
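
    Passing a whole controller through is a one-liner once IOMMU is enabled; a sketch, with the VM ID and PCI address as placeholders (pcie=1 assumes a q35 machine type):

        # Find the controller's PCI address:
        lspci -nn | grep -i sata

        # Hand the whole device at 0000:03:00 to VM 101:
        qm set 101 -hostpci0 0000:03:00,pcie=1
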
  6. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    Right, Proxmox itself doesn't provide a way to export iSCSI over the network. You could set it up relatively easily yourself, as the underlying Debian has built-in support for this (via targetcli), but you'll be bastardising your Proxmox install somewhat, which is not to everyone's taste. TrueNAS...
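
    If you did want to go that route, a rough targetcli sketch (the zvol path and IQN are made up for illustration; ACLs/auth omitted for brevity):

        apt install targetcli-fb

        # Expose a ZFS zvol as an iSCSI block device:
        targetcli /backstores/block create name=vmstore dev=/dev/zvol/rpool/vmstore
        targetcli /iscsi create iqn.2024-01.lan.example:vmstore
        targetcli /iscsi/iqn.2024-01.lan.example:vmstore/tpg1/luns create /backstores/block/vmstore
        targetcli saveconfig
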
  7. ZFS Storage for VMs and Proxmox vs TrueNAS Management

    TrueNAS makes its ZFS storage available for VM storage via network protocols such as NFS, iSCSI, etc., whether virtualised or not. If TrueNAS runs in a VM with paravirtualised network devices, it actually uses the network stack of the host, and will communicate with other VMs on the same host using...
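
    On the Proxmox side, consuming such a share is a single command; a sketch with placeholder storage name, IP and export path:

        # Add an NFS export from the TrueNAS VM as VM-image storage:
        pvesm add nfs truenas-vmstore --server 192.168.1.40 --export /mnt/tank/vmstore --content images
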
  8. SR-IOV success stories?

    Thanks. Following on from my first note, I got a little further. According to Intel's own documentation, the MTU must be the same on the physical function and all virtual functions; anything else leads to undefined behaviour (which is what I saw). Previously I have run MTU 9000 on all physical ports and then either...
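
    In practice that means something like the following, with placeholder interface names (PF on the host, VF interface inside the guest):

        # On the host: jumbo frames on the physical function
        ip link set enp3s0f0 mtu 9000

        # Inside each guest: match it on the VF's interface
        ip link set ens18 mtu 9000
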
  9. SR-IOV success stories?

    Hi, very happy Proxmox user here. However... what are the real success stories with SR-IOV networking that could maybe be shared here? I recently attempted this, based on the steps in the documentation complemented by other, more detailed guides online (as to how to set VLANs on VFs etc). However...
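
    For reference, the basic steps I followed were roughly these (interface name, VF count, VLAN and VM ID are placeholders):

        # Create 4 virtual functions on the physical NIC:
        echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

        # Tag VF 0 with VLAN 100 on the host side:
        ip link set enp3s0f0 vf 0 vlan 100

        # Pass the VF through to a VM:
        qm set 101 -hostpci0 0000:03:02.0
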
  10. Removing/Adding Clusternode

    What I'm saying is: use the normal instructions for removing and then adding a node, and you should be fine, irrespective of whether the new node happens to have the same name, same IP, or both as the removed one. Remove the dead node first, though, before you add the new reincarnated one.
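
    i.e. roughly this sequence (node name and cluster IP are placeholders):

        # On a remaining cluster member: remove the dead node first
        pvecm delnode node3

        # On the freshly installed node (same name/IP is fine): join the cluster
        pvecm add 192.168.1.10
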
  11. Migrate ZFS to Ceph?

    Yes. Unless you have enough hardware to run both Ceph and ZFS in parallel and migrate from one to the other.
  12. Removing/Adding Clusternode

    I did a similar thing except my new node had the same name and IP as the one I had just removed. Worked as expected and without problems. I followed the removal guidelines very carefully.
  13. Migrate ZFS to Ceph?

    Your goal would be to reuse the same hardware, including disks, and migrate from ZFS to Ceph live on a production setup without disturbing anything? *Maybe* theoretically possible with some serious planning (and risk), but my recommendation would be to first read up on Ceph, understand how you would...
  14. SSD wear

    Yes. But the above still applies: pointless to apply it twice.
  15. When proxmox is managed by ups...

    I can't remember the details and don't have it in front of me, but I thought NUT could be configured to continue with the shutdown of the UPS (and therefore cut power to the server) once past a point of “no return”, even if the power then comes back again? In order to avoid exactly this problem...
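
    If memory serves, the relevant pieces are the killpower flag in upsmon.conf plus a late shutdown hook; a sketch using the usual defaults (treat paths and names as placeholders):

        # /etc/nut/upsmon.conf
        POWERDOWNFLAG /etc/killpower          # upsmon creates this flag on a final shutdown
        SHUTDOWNCMD "/sbin/shutdown -h +0"

        # Late in the OS halt sequence (normally a distro-provided hook):
        #   [ -f /etc/killpower ] && upsdrvctl shutdown
        # upsdrvctl shutdown tells the UPS itself to cut (and later restore) output power,
        # which most units honour even if mains has come back in the meantime.
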