Afternoon,
I posted this in the proxmox 8 thread but it's pushed way down now and I can't figure it out.
For some reason the new kernel won't install.
# apt install proxmox-ve
# dpkg-reconfigure pve-kernel-6.2.16-4-pve
/usr/sbin/dpkg-reconfigure: pve-kernel-6.2.16-4-pve is broken or...
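In case anyone lands here with the same half-configured package state, the usual first-aid sequence is below. This is a generic dpkg/apt recovery sketch, not a fix specific to whatever broke this kernel package:

```shell
# Finish any package configuration that was interrupted mid-install
dpkg --configure -a

# Pull in missing dependencies and retry any broken installs
apt --fix-broken install

# Then retry configuring the kernel package itself
dpkg-reconfigure pve-kernel-6.2.16-4-pve
```

If that still fails, the actual error from `dpkg --configure -a` is usually more informative than the dpkg-reconfigure wrapper message.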
Neobin,
I can get myself into trouble with Linux but not always out of it.
How can I fix this problem that I'm in? Would it be better to scrap this idea and stick with 7? I've only done 1 of 4 nodes, so I'm assuming this issue will happen with the other 3. If I can get this fixed and running on...
Ran into an issue. I haven't rebooted yet as this seems fairly major...you know, broken mismatched kernel stuff:
Preparing to unpack .../00-dkms_3.0.10-8_all.deb ...
Unpacking dkms (3.0.10-8) over (2.8.4-3) ...
dpkg: warning: unable to delete old directory...
Looking at the bug tracker there hasn't been any update on this for 22 days.
Is there a walkthrough on how to create new pools and move data to the new pool in a production environment? I was hoping there would be a Proxmox update to fix this, but it seems some manual intervention is...
Ahhhh, lightbulb moment - so Ceph is trying to do Ceph stuff on the same network the nodes are using to serve the VMs. Never noticed that in the config and never thought about it either.
How hard is that to change? Get rid of the active/backup network, configure the 2nd nic on...
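For reference, splitting Ceph traffic off the VM-facing network is a ceph.conf change. A sketch, assuming the second NIC sits on a hypothetical dedicated 10.10.10.0/24 subnet (your actual subnets will differ):

```ini
[global]
    # Front-side traffic: clients, monitors, MGR
    public_network = 192.168.1.0/24
    # Back-side OSD replication/heartbeat traffic, moved to the dedicated NIC
    cluster_network = 10.10.10.0/24
```

On Proxmox this lives in /etc/pve/ceph.conf, and the OSDs need to be restarted (one node at a time, waiting for HEALTH_OK between nodes) before they pick up the new cluster network.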
Boooo, I thought I fixed that. We have a WHMCS plugin for Proxmox that, for some reason, puts the VM's hostname that the client enters in the panel into the /etc/hosts file, and it fricks things up on reboot...
Need to talk to their support about that...
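Until their support sorts it out, worth a sanity check: the cluster stack expects the node's own hostname to resolve to its real cluster IP. A minimal /etc/hosts sketch (the IP and names here are placeholders for my node, not anything the plugin writes):

```
127.0.0.1       localhost
# The node's hostname must resolve to its cluster address,
# not to whatever hostname a client entered in the panel
192.168.1.11    pve1-cpu1.example.com pve1-cpu1
```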
Thanks!
@alexsky
I really appreciate this back and forth. Diving into lots of good stuff here :-D
I'm confused about your public/private network point.
The two 40Gb InfiniBand ports (going to two different switches that are cross connected) are in an active/backup config with the primary NIC being NIC 1...
Sorry, I should have specified: they are connected, but I could never get these NICs to talk faster than 20Gb:
Both ports are:
root@pve1-cpu1:~# ethtool ibp5s0
Settings for ibp5s0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No...
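One thing that bit me reading up on IPoIB: a ~20Gb ceiling is often the interface running in the default datagram mode with a small MTU. Connected mode with a large MTU usually gets much closer to line rate. A hedged sketch, using the interface name from the ethtool output above (check that your HCA driver supports connected mode before trying this):

```shell
# Switch the IPoIB interface from datagram to connected mode
echo connected > /sys/class/net/ibp5s0/mode

# Connected mode allows a much larger MTU, e.g. 65520
ip link set ibp5s0 mtu 65520
```

Whatever MTU you settle on needs to match on both nodes and the switches, or you end up with exactly the kind of PMTUD shrinkage corosync is logging below.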
Feb 28 11:36:57 pve1-cpu1 corosync[1891]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
Feb 28 11:36:57 pve1-cpu1 corosync[1891]: [KNET ] pmtud: Global data MTU changed to: 1397
Feb 28 11:36:58 pve1-cpu1 corosync[1891]: [KNET ] rx: host: 4 link: 0 is up
Feb 28...
So I restarted the LRM service, HA came online and migrated no problem. Did system updates, rebooted the node, and now it's dead again. Based on these logs, any idea?
Feb 28 11:36:44 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:36:50 pve1-cpu1 pmxcfs[1740]...
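For anyone debugging the same thing, these are the standard commands I'd start with (stock Proxmox services, nothing exotic):

```shell
# Check the HA managers, cluster filesystem, and corosync in one go
systemctl status pve-ha-lrm pve-ha-crm pve-cluster corosync

# Restart the local resource manager if it's stuck
systemctl restart pve-ha-lrm

# Confirm the node actually has quorum and sees its peers
pvecm status
```

If pmxcfs (pve-cluster) never finishes starting, the LRM can't come up either, so quorum and corosync health are worth checking before blaming HA itself.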