This should be in official documentation.
I detached the drives that I wanted to keep, restored the whole VM, and then reattached the drives. Then I found this instruction.
What I did was simply remove the `auto eno1`, `auto eno2`, etc. lines — basically, remove the `auto` stanza for each physical interface. I did not even have to add the `hwaddress` option for the bond.
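For reference, a minimal sketch of what the relevant part of `/etc/network/interfaces` can look like after that change (the bond name `bond0` and the LACP options here are assumptions for a typical 802.3ad setup, not my exact config; `eno1`/`eno2` are the physical NICs):

```
# No "auto eno1" / "auto eno2" lines here -- the slaves are only
# brought up by the bond itself, not independently by ifupdown.
iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
```

After editing, `ifreload -a` (or a reboot) applies the new configuration.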
There is a thread on this problem...
I have a cluster of 3 nodes with Ceph. I upgraded node 3 to Proxmox 7, at which point it lost network connectivity due to the bonded LACP network settings (solved via this thread: https://forum.proxmox.com/threads/upgrade-to-proxmox-7-bond-lacp-interface-not-working-anymore.92060/). Before I found out about...
Sorry, I'm not familiar with the physical layout of your servers, but what I did was use only the dedicated GPU, and the problem was gone.
Your Xorg also terminated, similar to mine.
You can use the dedicated GPU for the installation process only. Once Proxmox is installed, you can unplug the GPU, use the onboard one again, and reattach your NIC.
Hi All,
I am trying to install Proxmox VE 6.4 on this particular server, but the installation aborted and I am not really clear on what's wrong with it.
The board is an Intel S1200BTS with 8 GB RAM; the CPU is a Xeon E3-1220 v2.
Attached is the error screen.
Please help.
Right, that's what I was confused about. It means that if I start manually adjusting weights, I won't get full capacity. So I'm stuck with either two different pools, or one pool with the NVMes bottlenecked.
I thought that with the introduction of device classes, Ceph had the ability to fill up the faster drives...
One more question: if I increase the NVMe weight, would that mean the NVMe drives will reach the near-full (or full) ratio, thus causing the whole pool to get stuck even if the SSDs are still at, for example, 50% capacity?
Thanks for your explanation. Does that mean that if I combine the 6 OSDs into one pool, the performance of the NVMes won't be bottlenecked by the SSDs? Can Ceph automatically optimize OSD usage based on the device class?
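In case it helps, this is roughly how pools can be pinned to a device class with per-class CRUSH rules, so NVMe-backed pools never place data on the SSDs (the rule and pool names `fast-nvme`, `slow-ssd`, `vm-fast`, `vm-slow` are made up for illustration; the commands themselves are standard Ceph CLI):

```shell
# Show OSDs with their auto-detected device classes (nvme, ssd, hdd)
ceph osd crush tree --show-shadow

# Create one replicated CRUSH rule per device class
ceph osd crush rule create-replicated fast-nvme default host nvme
ceph osd crush rule create-replicated slow-ssd  default host ssd

# Assign each pool to its rule
ceph osd pool set vm-fast crush_rule fast-nvme
ceph osd pool set vm-slow crush_rule slow-ssd
```

With separate rules like this you effectively have two pools again, though; a single mixed pool without class-aware rules will spread PGs across all OSDs and run at the speed of the slowest ones.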
I am aware of the 1 Gbps limitation. I plan to try it first and see if the performance is...
Hi everyone. I'm a newbie both in Proxmox and Ceph.
I'm building a home lab out of some old hardware: 3 identical nodes.
HP Z420
E5-2630L
32 GB RAM
1x NVMe 500 GB (standard WD Blue)
1x SATA SSD 500 GB (Samsung 870 EVO)
1x SATA SSD 120 GB (cheap boot disk)
I'm planning to implement HA on...