Search results

  1. Restore Single Disk from PBS

    This should be in the official documentation. I detached the drives I wanted to keep, restored the whole VM, and reattached the drives. Then I found this instruction.
  2. [SOLVED] possible bug proxmox 7, active-backup bond with VLAN aware bridge causes no internet

    What I did was just remove the "auto eno1", "auto eno2", etc. lines; basically, remove the "auto" directive for each hardware interface. I did not even add the "hwaddress" directive for the bond. There is a thread on this problem...
  3. Proxmox 7 ceph pacific fresh install, can I downgrade Ceph to Octopus?

    I have a cluster with 3 nodes running Ceph. I updated node 3 to Proxmox 7, and it lost network connectivity due to bonded LACP network settings (solved by this thread: https://forum.proxmox.com/threads/upgrade-to-proxmox-7-bond-lacp-interface-not-working-anymore.92060/). Before I found out about...
  4. Proxmox VE 6.4 Installation Aborted

    Sorry, I'm not familiar with the physical form of your servers, but what I did was simply use the dedicated GPU, and the problem was gone. Your Xorg also terminated similarly to mine.
  5. Proxmox VE 6.4 Installation Aborted

    You can use the dedicated GPU for the installation process only. Once it's installed, you can unplug the GPU, use the onboard one again, and reattach your NIC.
  6. Proxmox VE 6.4 Installation Aborted

    What I did was attach a dedicated GPU/graphics card instead of using the onboard one. Try attaching any spare dedicated GPU.
  7. Proxmox VE 6.4 Installation Aborted

    Turns out Proxmox, for some reason, does not boot with the onboard GPU on the Intel S1200BTS. I had to install a spare Nvidia GPU and it was smooth sailing.
  8. Proxmox VE 6.4 Installation Aborted

    Hi all, I am trying to install Proxmox VE 6.4 on this particular server, but the installation was aborted and I am not really clear on what's wrong with it. The board is an Intel S1200BTS with 8 GB RAM; the CPU is a Xeon E3-1220 v2. Attached is the error screen. Please help.
  9. Proxmox + Ceph drive configuration

    Right. That's what I was confused about. It means that if I start to manually adjust weights, I won't get full capacity. So I'm stuck with either two different pools, or one pool with the NVMEs bottlenecked. I thought that with the introduction of device classes, Ceph had the ability to fill up faster drive...
  10. Proxmox + Ceph drive configuration

    One more question. If I increase the NVME weight, would that mean the NVME drives will reach near-full (or full) ratios, thus causing the whole pool to get stuck even though the SSDs are still at, for example, 50% capacity?
  11. Proxmox + Ceph drive configuration

    Thanks. Will play around with adjusting the weight of the NVMEs. Also the LAGG. Thanks again for the advice.
  12. Proxmox + Ceph drive configuration

    Thanks for your explanation. Does that mean that if I combine 6 OSDs into one pool, the performance of the NVMEs won't be bottlenecked by the SSDs? Can Ceph automatically optimize the OSDs based on their class? I am aware of the 1 Gbps limitation. I plan to try it first and see if the performance is...
  13. Proxmox + Ceph drive configuration

    Hi everyone. I'm a newbie to both Proxmox and Ceph. I'm building a home lab out of 3 identical nodes of old hardware: HP Z420, E5-2630L, 32 GB RAM, 1x 500 GB NVMe (standard WD Blue), 1x 500 GB SATA SSD (Samsung EVO 870), 1x 120 GB SATA SSD (cheap boot disk). I'm planning to implement HA on...
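The workaround in result 2 (dropping the "auto" lines for the bond's slave interfaces) can be sketched as an /etc/network/interfaces fragment. This is an illustrative example, not the poster's actual config: the interface names (eno1, eno2), bridge name, and addresses are placeholders to adapt to your hardware. The key point from the thread is that the physical NICs get plain "iface ... inet manual" stanzas with no "auto" directive, so they are only brought up as part of the bond:

```
# /etc/network/interfaces -- sketch of an active-backup bond under a
# VLAN-aware bridge. Note: no "auto eno1" / "auto eno2" lines.

iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

After editing, the change can be applied with ifreload -a (ifupdown2) or a reboot.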

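The full-ratio concern raised in results 9-12 follows from CRUSH placing data roughly in proportion to OSD weight. A minimal sketch of that arithmetic, using hypothetical numbers (two 500 GB OSDs per class, a 600 GB pool) rather than anything from the thread:

```python
# Sketch of how CRUSH-style weight changes shift data placement.
# CRUSH distributes PGs roughly in proportion to weight, so raising the
# NVME weight pushes a larger share of the pool's data onto the same capacity.

def utilization(pool_data_gb, weights, capacities_gb):
    """Return per-OSD fill fraction under proportional placement."""
    total_w = sum(weights.values())
    return {
        osd: (pool_data_gb * w / total_w) / capacities_gb[osd]
        for osd, w in weights.items()
    }

capacities = {"nvme": 500, "ssd": 500}

# Equal weights: the data splits evenly.
even = utilization(600, {"nvme": 0.5, "ssd": 0.5}, capacities)

# NVME weight tripled: it absorbs 3/4 of the data and approaches the
# default 0.95 full_ratio while the SSD sits at 30% -- the stall the
# thread warns about, since a single full OSD blocks writes to the pool.
skewed = utilization(600, {"nvme": 1.5, "ssd": 0.5}, capacities)

print(even)    # both OSDs at 0.6
print(skewed)  # nvme at 0.9, ssd at 0.3
```

In practice the weight would be changed with "ceph osd crush reweight osd.<id> <weight>"; the cleaner alternative discussed in the thread is separating the device classes into distinct CRUSH rules and pools.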
About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
