Search results

  1. containers too slow

    Check dmesg on the host for OOM messages.
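    A quick way to check (assuming a root shell on the PVE host; the grep pattern just matches the usual kernel OOM messages):

        # look for OOM-killer events in the kernel log
        dmesg -T | grep -iE 'out of memory|oom-killer'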
  2. Bind9 Installation fails using Proxmox 9.0

    There may be an issue with bind and trixie, BUT you really don't want to run bind on your hypervisor anyway. Put it on a VM or container.
  3. vm offline migration from cluster to cluster using Netapp Storage

    This is due to the stats collector being in a hung state. Make sure there are no VMs still referencing the missing datastore; if the question marks are still there: 1. check pvesm status (there should be no unknown datastores) 2. systemctl restart pvestatd 3. systemctl restart pveproxy (may not...
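    The recovery steps from that snippet, as shell commands (run as root on the affected node):

        pvesm status                  # every datastore should be listed, none as "unknown"
        systemctl restart pvestatd    # restart the stats collector
        systemctl restart pveproxy    # then the web/API proxy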
  4. Recommended amount of swap?

    The PVE install defaults are designed for "homelab"; you can and should ignore them if you're using it for production. You can change this at any time, and should. No. You have 256GB of RAM; swap can only slow down your VMs if it's ever actually used, which it probably isn't. It depends on how your...
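    If you want to see whether swap is actually being touched, a quick check (the swappiness value mentioned below is only an example):

        swapon --show            # configured swap devices and current usage
        free -h                  # used vs. free RAM and swap
        sysctl vm.swappiness     # lower it (e.g. vm.swappiness=10) to discourage swapping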
  5. CEPH Erasure Coded Configuration: Review/Confirmation

    The placement logic is per node, not per drive. The only sane EC config possible with 3 nodes is 2+1. But bear in mind that while you CAN do this, it's not really supportable; with a node down you're operating with no redundancy at all, and under normal circumstances the pool would go read-only in that...
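    A rough sketch of what a 2+1 profile with host-level placement looks like (profile and pool names here are made up):

        # k=2 data chunks + m=1 parity chunk, one chunk per host
        ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
        ceph osd pool create ecpool erasure ec-2-1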
  6. Please help with Proxmox VE 9 Cluster and Alletra B10000 Via iSCSI

    AFAIK multipathing is the preferred method for iSCSI rather than bonded interfaces, as it is completely agnostic to the network configuration and features, but bonding can work as well. If you DO want to use bonding, make sure both ends (host and target) are bonded in an equivalent fashion. BTW...
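    If you go the multipath route, a quick sanity check on the host (assumes open-iscsi and multipath-tools are installed and the LUN is already mapped):

        iscsiadm -m session      # one session per path should be logged in
        multipath -ll            # the LUN should appear once, with multiple active paths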
  7. PVE Replication with ext4

    I think this is the part of the conversation where we investigate the WHY. If your intention is to live migrate without shared storage, any CoW-backed storage (zfs, qcow, even lvm with snapshot) would work and doesn't require any additional steps (cloning manually). If the reason is something...
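    For the live-migration-without-shared-storage case, a minimal sketch (VM ID 100 and target node pve2 are examples):

        # live migrate a VM together with its local disks to another node
        qm migrate 100 pve2 --online --with-local-disks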
  8. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    You understand that you can't EXCEED the speed of your public interface to a single host...
  9. Deleted

    In that case, the only difference is in load calculation. Since Zen 4 and Rocket Lake have similar IPC, it's a simple matter of: relative performance, Xeon E-2388G (3.2GHz base x 8 cores) = 25.6; relative performance, AMD EPYC 4244P (3.8GHz base x 6 cores) = 22.8. On paper, sure looks like...
  10. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    It actually can't. Keep in mind that Ceph uses the network for both client and replication traffic, which means you need to double the bandwidth relative to the filesystem throughput you want: every client write gets forwarded on to the replica OSDs, so replication traffic alone at least matches the client write rate.
  11. Deleted

    "perform better" doesnt mean anything. do your application require specific feature? do they benefit from scaling up or sideways (do your apps need clock speed?) Next, what are your IO needs? the Xeon part has 40 lanes of gen4 PCIe; the Epyc has 28 lanes of gen5. Next- power considerations-...
  12. PVE Replication with ext4

    Your root filesystem doesn't really matter for the purposes of this discussion; only the VM storage does. Assuming you intend to use the same filesystem for your OS and payload, you can't use ZFS replication, but that doesn't mean you can't accomplish this anyway, just maybe not the way you think...
  13. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    I've only ever used mesh in a lab scenario, and that was some time ago. I used broadcast bonds and it worked well enough. I'm fairly certain that no logical topology will result in any meaningful difference in performance, but as for stability (and to extend your cluster to more than 3 nodes, if...
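    For reference, the broadcast-bond approach boils down to a stanza like this in /etc/network/interfaces on each node (interface names and the mesh address are examples):

        auto bond0
        iface bond0 inet static
                address 10.15.15.1/24
                bond-slaves enp1s0 enp2s0
                bond-mode broadcast
                bond-miimon 100
        # dedicated ceph/cluster network carried over the two direct mesh links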
  14. Proxmox not reaching the Idrac R750 to get to the web GUI

    Generally speaking, your iDRAC interface (physical or shared) will NOT be visible to the host operating system. Treat it like a separate computer. Now that we've got that out of the way, what are you actually trying to do?
  15. Physical Server Migration

    This isn't really a PVE question. Also, I would strongly advise taking this opportunity to migrate your mail server to a currently supportable environment; mail is one of the most obvious places for external attack, after all. Luckily for you, both can be achieved at the same time. Below is the...
  16. Proxmox user base seems rather thin?

    For you, sure. For me, I don't have this hardware or this problem, so it's not useful to me, nor am I able to participate in the troubleshooting. Please be sure to post any solution you uncover; that is, as you pointed out, the point and nature of the community :)
  17. Write amplification for OS only drives, zfs vs btrfs

    Me? I'm not really aware of any ;) Careful with jumping to conclusions. In all seriousness, technology isn't static. A lot of the issues present in earlier/older flash chips and controllers have been mitigated over the years, and wouldn't apply to your stated use case in the first place.
  18. Proxmox user base seems rather thin?

    You already found the answer. The fact that you're moving the goalposts isn't helping you. I'd advise getting rid of your "wants"; the newer kernel is probably providing you with no utility at all. Given that the issues with your NIC are known and easily remedied with a pinned earlier kernel should...
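    Pinning is a one-liner (the kernel version below is only an example):

        proxmox-boot-tool kernel list               # show installed kernels
        proxmox-boot-tool kernel pin 6.8.12-4-pve   # boot this one until unpinned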
  19. Proxmox user base seems rather thin?

    Out of curiosity, what was the vexing question you asked that had no results on the internet?
  20. Write amplification for OS only drives, zfs vs btrfs

    Just to put things in perspective, I have nodes running on consumer-level OS SSDs for OVER 10 YEARS, and that's without local log prevention. As long as you're not commingling payload and OS, even crappy old drives don't get enough writes for it to matter. With current drives (even the cheapest...
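    If you want real numbers for your own OS drive, the SMART write counters are a rough but easy check (device names are examples; needs smartmontools):

        smartctl -A /dev/nvme0    # NVMe: "Data Units Written", 1 unit = 512,000 bytes
        smartctl -A /dev/sda      # SATA: Total_LBAs_Written on many drives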