Search results

  1. Cluster suggestion

    I think so. My guess is we will probably need 2 pseudo nodes, where these 2 will act just as corosync nodes and monitors. But I think these 2 nodes will also need 3 separate NICs to be able to talk to the other 3 real nodes.
  2. Cluster suggestion

    I see. Thank you for the answer. So adding 2 extra qdevices would not increase the cluster's ability to sustain 2 failed nodes.
  3. Cluster suggestion

    That's right, thus my plan to introduce 2 extra qdevices to make it into 5 pseudo nodes.
  4. Cluster suggestion

    Hello Everyone. I am upgrading the hardware of my cluster of 3 nodes. The specs are 3 identical nodes of: Xeon i5 2487w, 256GB RAM, 500GB NVMe x 1, 500GB SSD x 2, 1TB SSD x 1, 2-port 10GbE fiber NIC running DAC cables (1 on a VLAN strictly for the Ceph private network, 1 on a separate VLAN for Ceph public and...
  5. Windows 11 VM causing slow OSD heartbeats

    Hi All, I have a 3-node cluster with Ceph. Each node is identical with 1 NVMe, 1 SSD, and 1 HDD. We have 3 Linux server VMs and 1 Windows Server VM. A weird thing is happening: I created a Win11 VM, and every time I start the VM, it causes slow OSD heartbeats on back and front (if I...
  6. Issues with NFS share not reconnecting if I have to reboot the NFS server.

    Run umount -l -f /mnt/pve/<your share name> from a shell on each node. I just remounted the drive every time it failed. I wish Proxmox did this automatically.
  7. Restore Single Disk from PBS

    This should be in the official documentation. I detached the drives that I wanted to keep, restored the whole VM, and reattached the drives. Then I found this instruction.
  8. [SOLVED] possible bug proxmox 7, active-backup bond with VLAN aware bridge causes no internet

    What I did was just remove the auto eno1, auto eno2, etc. lines, basically removing the auto stanza for each hardware interface. I did not even add the hwaddress option for the bond. There is a thread on this problem...
  9. Proxmox 7 ceph pacific fresh install, can I downgrade Ceph to Octopus?

    I have a cluster with 3 nodes with Ceph. I updated node 3 to Proxmox 7, and it lost network connectivity due to bonded LACP network settings (solved by this thread: https://forum.proxmox.com/threads/upgrade-to-proxmox-7-bond-lacp-interface-not-working-anymore.92060/). Before I found out about...
  10. Proxmox VE 6.4 Installation Aborted

    Sorry, I'm not familiar with the physical form of your servers, but all I did was use a dedicated GPU, and the problem was gone. Your Xorg also terminated similarly to mine.
  11. Proxmox VE 6.4 Installation Aborted

    You can use the dedicated GPU for the installation process only. Once it's installed, you can unplug the GPU, use the onboard one again, and reattach your NIC.
  12. Proxmox VE 6.4 Installation Aborted

    What I did was attach a dedicated GPU/graphics card instead of using the onboard one. Try attaching any spare dedicated GPU.
  13. Proxmox VE 6.4 Installation Aborted

    Turns out Proxmox for some reason does not boot with the onboard GPU on the Intel S1200BTS. I had to install a spare Nvidia GPU and it was smooth sailing.
  14. Proxmox VE 6.4 Installation Aborted

    Hi All, I am trying to install Proxmox VE 6.4 on this particular server, but the installation was aborted and I am not really clear on what's wrong with it. The board is an Intel S1200BTS with 8GB RAM; the CPU is a Xeon E3 1220 v2. Attached is the error screen. Please help.
  15. Proxmox + Ceph drive configuration

    Right. That's what I was confused about. That means if I start to manually adjust weights, I won't get full capacity, so I'm stuck with either two different pools, or one pool with the NVMes bottlenecked. I thought that with the introduction of device classes, Ceph had the ability to fill up faster drive...
  16. Proxmox + Ceph drive configuration

    One more question. If I increase the NVMe weight, would that mean the NVMe drives will reach near-full (or full) ratios, thus causing the whole pool to get stuck even though the SSDs are still at, for example, 50% capacity?
  17. Proxmox + Ceph drive configuration

    Thanks. I will play around with adjusting the weight of the NVMes, and also the LAGG. Thanks again for the advice.
  18. Proxmox + Ceph drive configuration

    Thanks for your explanation. Does that mean if I combine 6 OSDs into one pool, the performance of the NVMes won't be bottlenecked by the SSDs? Can Ceph automatically optimize the OSDs based on their class? I am aware of the 1Gbps limitation. I plan to try it first and see if the performance is...
  19. Proxmox + Ceph drive configuration

    Hi everyone. I'm a newbie in both Proxmox and Ceph. I'm building a home lab out of some old hardware: 3 identical nodes, each an HP Z420 with an E5-2630L, 32GB RAM, 1 x 500GB NVMe (standard WD Blue), 1 x 500GB SATA SSD (Samsung EVO 870), and 1 x 120GB cheap SATA SSD boot disk. I'm planning to implement HA on...
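The active-backup bond workaround in result 8 (dropping the auto lines for the physical interfaces) would look roughly like this in /etc/network/interfaces. This is a sketch only: the interface names eno1/eno2, the bond0/vmbr0 names, and the addresses are assumptions for illustration, not taken from the thread.

```
# /etc/network/interfaces (sketch; eno1/eno2/bond0/vmbr0 and the
# addresses are assumed, not from the thread)
# Note: no "auto eno1" / "auto eno2" lines for the physical NICs --
# removing those was the workaround described above.
iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```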
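The qdevice discussion in results 1-3 comes down to quorum arithmetic: a partition stays quorate only while it holds a strict majority of the total votes, so 3 votes tolerate 1 failure while 5 votes tolerate 2. A minimal sketch of that vote counting (plain arithmetic, not the corosync API; note that standard corosync-qdevice allows only one qdevice per cluster, which is why the thread turns to extra full members instead):

```python
# Quorum arithmetic for a corosync-style cluster: a partition is quorate
# only while it holds a strict majority of the total votes.

def quorum(total_votes: int) -> int:
    """Minimum votes needed for a strict majority."""
    return total_votes // 2 + 1

def tolerated_failures(total_votes: int) -> int:
    """How many voters can fail while the remainder still holds quorum."""
    return total_votes - quorum(total_votes)

# 3 real nodes, 1 vote each: quorum is 2, so only 1 node may fail.
print(tolerated_failures(3))   # -> 1

# 3 real nodes + 2 extra voting members = 5 votes: quorum is 3,
# so 2 failures are survivable.
print(tolerated_failures(5))   # -> 2
```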
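The weight question in results 15-16 is essentially proportional placement: CRUSH distributes data roughly in proportion to weight, so raising the NVMe weights pulls a larger share of every write onto the NVMes, and the fullest OSD is what trips the full ratio for the pool. A toy model of that proportionality (deliberately not the real CRUSH algorithm):

```python
# Toy model of weight-proportional placement (not the real CRUSH
# algorithm): each OSD receives a share of the data proportional to its
# weight, and the fullest OSD is what stalls the pool at the full ratio.

def utilization(data_tb, osds):
    """osds: list of (weight, capacity_tb) tuples.
    Returns per-OSD utilization (used / capacity)."""
    total_weight = sum(w for w, _ in osds)
    return [data_tb * w / total_weight / cap for w, cap in osds]

# Two 0.5TB OSDs with equal weight: 0.6TB of data fills both evenly.
even = utilization(0.6, [(0.5, 0.5), (0.5, 0.5)])    # roughly [0.6, 0.6]

# Triple the first (NVMe) weight: it takes 3/4 of the data and nears
# full while the SSD sits at a quarter of that utilization.
skewed = utilization(0.6, [(1.5, 0.5), (0.5, 0.5)])  # roughly [0.9, 0.3]
print(even, skewed)
```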