Hi!
Oh, my bad, didn't notice that.
I used the default kernel, 5.13; I thought 5.15 was only needed for your particular NIC.
Today I will try to update the kernel, and then I will try a BIOS boot.
But I'm still not sure how the OS is supposed to get the IP configuration for iSCSI to work after the boot.
Do I need...
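From what I've read so far, the handover is supposed to happen via the iSCSI Boot Firmware Table (iBFT): the firmware leaves its network and target settings in memory, and the kernel exposes them under /sys/firmware/ibft. A rough sketch of what I plan to check (the ethernet0/target0 entry names are the usual layout, not values from my box):

# Load the iBFT driver if it is not built into the kernel:
modprobe iscsi_ibft
# If the firmware handed anything over, these entries should exist:
ls /sys/firmware/ibft/
cat /sys/firmware/ibft/ethernet0/ip-addr
cat /sys/firmware/ibft/target0/target-name
# open-iscsi can then log in straight from the iBFT data (e.g. inside the initramfs):
iscsistart -b

If the table is there, the initramfs should be able to reuse the firmware's settings instead of needing its own IP configuration.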
Nice post! The one I was looking for!
But I could not get it to work. Are you doing a BIOS or UEFI setup?
I ended up booting from the iSCSI target, but it seems no IP config was transferred from UEFI to the system.
So I get:
Cannot process volume group pve
Volume group "pve" not found
(initramfs) _
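In case it helps someone: the boot can usually be continued by hand from the (initramfs) shell. A rough sketch, assuming open-iscsi's iscsistart is included in the initramfs; the interface name, addresses, and IQNs below are placeholders, not my real values:

# Bring the NIC up and give it an address by hand (placeholders):
ip link set eth0 up
ip addr add 192.0.2.20/24 dev eth0
# Log in to the target (placeholder IQNs, portal 192.0.2.10, portal group tag 1):
iscsistart -i iqn.1993-08.org.debian:01:abcdef -t iqn.2021-01.example:pve-boot -g 1 -a 192.0.2.10
# Activate the volume group the initramfs could not find, then resume booting:
lvm vgchange -ay pve
exit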
Then, all of a sudden, at 12:45 everything became good again.
A strange thing I noticed in the mon logs: mon1 forms quorum with mon2 and mon3,
and mon3 forms quorum with mon2. mon2 never becomes the leader.
Is something terribly wrong in my cluster? I tried to leave most settings at their defaults.
I've...
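For what it's worth, my understanding is that Ceph elects the in-quorum monitor with the lowest rank as leader, so mon2 never leading might be normal rather than a fault. The current state can be checked with:

# Who is in quorum and who is the leader right now:
ceph quorum_status --format json-pretty    # see quorum_leader_name / quorum_names
# Compact summary of monitor ranks and quorum membership:
ceph mon stat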
It is Saturday again...
And Ceph has become unresponsive.
And the monitor logs are flooded with this:
2021-10-02 11:57:50.439 7f468f2bc700 1 mon.hds01-pipcephn2@2(electing) e11 handle_auth_request failed to assign global_id
2021-10-02 11:57:50.463 7f468f2bc700 1 mon.hds01-pipcephn2@2(electing) e11...
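While it is wedged like this, each monitor can still be asked for its view locally over the admin socket, which does not require quorum (the socket path below is the default):

# Ask this node's monitor directly, bypassing the stuck cluster:
ceph daemon mon.hds01-pipcephn2 mon_status
# The same thing via the admin socket path:
ceph --admin-daemon /var/run/ceph/ceph-mon.hds01-pipcephn2.asok mon_status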
Hi!
No, there are no backups configured. Almost all VMs are terminal farm nodes for a work-from-home setup.
The network is built on LACP bonds and Open vSwitch.
The *ceph* nodes have 1 bond, for Ceph.
The *pve* nodes have 2 bonds: one for Ceph and one for VMs.
The VLANs are isolated; only cluster nodes are in them...
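To make the layout concrete, here is a sketch of how one bond plus an isolated Ceph VLAN looks in /etc/network/interfaces with Open vSwitch; the NIC names, VLAN tag, and address are placeholders, not my exact config:

auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds enp1s0f0 enp1s0f1
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 ceph0

# Internal port carrying the isolated Ceph VLAN:
auto ceph0
iface ceph0 inet static
    address 10.10.10.11/24
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=100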
Hi everyone!
I'm having this strange issue with Ceph on my Proxmox cluster.
The *ceph* nodes have an OSD, mon, mds, and mgr on each of them.
The *pve* nodes have only OSDs on them.
Recently, every Saturday at exactly 06:00 (± a couple of minutes), Ceph goes into a Warning state.
Logs show that one of the three...
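Since it lands on such a regular schedule, it also seems worth checking what runs on the nodes around Saturday 06:00; generic commands, nothing specific to my setup:

# What exactly is Ceph unhappy about?
ceph health detail
# The fixed Saturday 06:00 pattern suggests a scheduled job somewhere:
systemctl list-timers --all
cat /etc/crontab /etc/cron.d/* 2>/dev/null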