Search results

  1. My RDS-2025 VM running extremely slow.

    That should be your first option ;) PVE8 is not EOL yet, and even when it is (this August) you can keep running it for some time after. It might be the better option, especially if you don't need anything PVE9-specific. This will give you time to figure out the issue on a PVE9 testbed before...
  2. Proxmox Cluster Random Reboot

    So here's what I'd suggest: don't use 10.13.30.x for corosync at all. Assign arbitrary addresses to bond0 and bond1 (edit: ON DIFFERENT SUBNETS). Ideally, they should be on separate VLANs too. Use those addresses as ring0 and ring1.
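    A sketch of what that ring layout could look like in /etc/pve/corosync.conf. The node names (pve1, pve2) and addresses (192.0.2.x on bond0, 198.51.100.x on bond1) are hypothetical, purely for illustration:

    ```
    # corosync.conf fragment -- two rings on different subnets
    nodelist {
      node {
        name: pve1
        nodeid: 1
        ring0_addr: 192.0.2.11      # bond0, subnet/VLAN 1
        ring1_addr: 198.51.100.11   # bond1, subnet/VLAN 2
      }
      node {
        name: pve2
        nodeid: 2
        ring0_addr: 192.0.2.12
        ring1_addr: 198.51.100.12
      }
    }
    ```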
  3. [SOLVED] Unable to update Proxmox: "Network is unrechable" despite the web gui still being accessible

    @UdoB correct. I started typing "add blah blah blah" but then changed it to an echo. Hilarity ensued.
  4. My RDS-2025 VM running extremely slow.

    This might help: https://forum.proxmox.com/threads/yawgpp-yet-another-windows-guest-performance-post.181030/
  5. [SOLVED] Unable to update Proxmox: "Network is unrechable" despite the web gui still being accessible

    sudo apt-get -o Acquire::ForceIPv4=true update. To make it permanent: echo 'Acquire::ForceIPv4 "true";' >> /etc/apt/apt.conf.d/99force-ipv4
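    Assembled into a runnable sketch (the drop-in path and option are from the post; writing to a local demo directory is my assumption here, so the sketch can be tried without root):

    ```shell
    # One-off run forcing IPv4 (on a real system):
    #   sudo apt-get -o Acquire::ForceIPv4=true update

    # Permanent: drop the option into apt's conf.d directory.
    # On a real system conf_dir would be /etc/apt/apt.conf.d.
    conf_dir="demo-apt.conf.d"
    mkdir -p "$conf_dir"
    printf 'Acquire::ForceIPv4 "true";\n' >> "$conf_dir/99force-ipv4"
    cat "$conf_dir/99force-ipv4"
    ```

    Note the apt.conf syntax: the value is quoted and the line ends with a semicolon.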
  6. Watchdog Reboots

    Then you need to use QoS and/or limit the bandwidth of your backups, or this will happen again.
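    One way to apply that limit (a sketch; the 50000 KiB/s value is only an example, not from the post) is vzdump's bwlimit setting:

    ```
    # /etc/vzdump.conf -- applies to all scheduled backups
    bwlimit: 50000   # KiB/s

    # or per run:
    #   vzdump 100 --bwlimit 50000
    ```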
  7. Watchdog Reboots

    Are you sharing interfaces between PBS and corosync? If so, break them up.
  8. Watchdog Reboots

    Assuming the logs provided run up to the time of death, this is the likely cause. Are you backing up to an NFS target?
  9. Proxmox Cluster Random Reboot

    Post the content of your /etc/network/interfaces files, and note which interface is being used for what purpose.
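    An example of the kind of annotation asked for. Every interface name, address, and role below is hypothetical:

    ```
    # /etc/network/interfaces -- annotated example
    auto eno1
    iface eno1 inet static
        address 192.0.2.11/24     # corosync ring0 (dedicated, no other traffic)

    auto vmbr0
    iface vmbr0 inet static
        address 203.0.113.11/24
        gateway 203.0.113.1
        bridge-ports eno2         # VM traffic + management GUI
        bridge-stp off
        bridge-fd 0
    ```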
  10. HA configuration for node migration/restart only

    Hmm. This is where a feature request for "last state" is probably your next step :)
  11. HA configuration for node migration/restart only

    Yes. For your test VMs (and other non-critical ones) just set the HA request state to ignored. You can still live-migrate them just fine.
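    On the CLI that would be something like the following (a sketch; vm:100 is a hypothetical service ID):

    ```
    # Tell the HA stack to leave a (hypothetical) VM 100 alone:
    ha-manager set vm:100 --state ignored

    # Verify:
    ha-manager status
    ```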
  12. HA configuration for node migration/restart only

    You realize that an HA request state of started means, you know, that it should be started. If you don't want that, set a different request state.
  13. Storage for small clusters, any good solutions?

    This implies that the operator has the skill, experience, and wherewithal (not having a dozen other responsibilities) to understand the docs and apply them. Ceph is not attractive at the low end precisely because it requires engineer-level admin, which is neither common nor cost...
  14. Best RAID for ZFS in Small Cluster?

    If by redundancy you mean disk fault tolerance, the higher the value after "raidz", the higher the fault tolerance. In practice, use raidz2+ (never use single-parity raidz unless prepared to lose the pool at any time); for performance, use striped mirrors. Full stop. If you wish to sacrifice some performance...
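    The two layouts from the post as zpool commands (a sketch; the pool name tank and disk names sda..sdd are hypothetical):

    ```
    # Striped mirrors: two 2-way mirror vdevs -- best random I/O,
    # 50% usable capacity, survives one failure per mirror pair
    zpool create tank mirror sda sdb mirror sdc sdd

    # raidz2 over the same four disks -- any two disks can fail,
    # more usable space, lower performance than the mirrors above
    zpool create tank raidz2 sda sdb sdc sdd
    ```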
  15. Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    Good plan. I hope you understand that this experiment will not ever be worth any money you put into it. You already have a working solution, and if the "politics" of the money prevent you from putting together a sane configuration, you're just throwing money away for no good reason.
  16. Snapshot causes VM to become unresponsive.

    I'm sure there would be readers grateful for any such information. Any reason you're not just posting it?
  17. Request: SAS HBA LUN Sharing Between Proxmox Cluster Hosts (Like VMware)

    @RodolfoRibeiro If you want more direct assistance, post the output of the following (from both hosts): lsblk and multipath -ll -v2. If you have system logs available from the point in time when your VM became corrupted, it would be good to look at what happened.
  18. Request: SAS HBA LUN Sharing Between Proxmox Cluster Hosts (Like VMware)

    For the generations of hardware where iSCSI and SAS were offered as available SKUs there was no meaningful performance difference; 16G FC simply had more headroom to fill cache. When 25Gb iSCSI products started shipping, THOSE were faster (even vs FC16). It would theoretically be SASg4 host...
  19. Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    K=6, M=2 results in 6 data stripes per 8 total: 6/8 = 0.75. In replication you have 1 data stripe per 3 total: 1/3 = 0.33. It's not exactly the "same" availability, because survivability in a replication group is much higher; you need one living OSD per PG to recover, whereas with an EC 6+2 you need 6...
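    The capacity arithmetic above as a quick runnable check (k=6, m=2, and 3x replication are the values from the post):

    ```shell
    # Usable fraction of raw capacity = data chunks / total chunks.
    k=6; m=2; replicas=3
    awk -v k="$k" -v m="$m" -v r="$replicas" 'BEGIN {
        printf "EC %d+%d efficiency: %.2f\n", k, m, k / (k + m)
        printf "%dx replication efficiency: %.2f\n", r, 1 / r
    }'
    ```

    Running it prints 0.75 for EC 6+2 and 0.33 for 3x replication, matching the numbers in the post.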
  20. Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    "Lower" and "higher" are subjective. Ceph achieves HA using raw capacity. Suit yourself; this is not a recommended deployment. You are far better served by just having two SEPARATE VMs each serving all those functions without any Ceph at all: you'll have better fault tolerance, better...