Recent content by MoreDakka

  1. [SOLVED] pve7to8 broken kernel

    @Chris You are my hero for the day. Got the system booting properly again :-D Thanks!!!!
  2. [SOLVED] pve7to8 broken kernel

    Do I need to move these files?

      root@pve1-cpu1:/var/lib/dpkg/info# ls -l | grep pve-kernel-6.2.
      -rw-r--r-- 1 root root 483862 Jul 21 13:22 pve-kernel-6.2.16-4-pve.list
      -rw-r--r-- 1 root root 626356 Jul 14 11:53 pve-kernel-6.2.16-4-pve.md5sums
      -rwxr-xr-x 1 root root 590 Jul 14 11:53...
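
    (A minimal repair sketch, assuming the dpkg metadata for that kernel package is what's corrupt; moving the files aside and reinstalling is a generic dpkg recovery technique, not necessarily the exact fix from this thread.)

      # back up the suspect dpkg metadata, then reinstall the kernel package
      mkdir -p /root/dpkg-info-backup
      mv /var/lib/dpkg/info/pve-kernel-6.2.16-4-pve.* /root/dpkg-info-backup/
      apt install --reinstall pve-kernel-6.2.16-4-pve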
  3. [SOLVED] pve7to8 broken kernel

    Afternoon, I posted this in the Proxmox 8 thread, but it's been pushed way down now and I can't figure it out. For some reason the new kernel won't install:

      # apt install proxmox-ve
      # dpkg-reconfigure pve-kernel-6.2.16-4-pve
      /usr/sbin/dpkg-reconfigure: pve-kernel-6.2.16-4-pve is broken or...
  4. Proxmox VE 8.0 released!

    Neobin, I can get myself into trouble with Linux but not always out of it. How can I fix this problem that I'm in? Would it be better to scrap this idea and stick with 7? I've only done 1 of 4 nodes, so I'm assuming this issue will happen with the other 3. If I can get this fixed and running on...
  5. Proxmox VE 8.0 released!

    Ran into an issue. I haven't rebooted yet as this seems fairly major... you know, broken mismatched kernel stuff:

      Preparing to unpack .../00-dkms_3.0.10-8_all.deb ...
      Unpacking dkms (3.0.10-8) over (2.8.4-3) ...
      dpkg: warning: unable to delete old directory...
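
    (If it helps anyone else: the quickest sanity check after an upgrade like this is whether DKMS rebuilt its modules for the new kernel. dkms status and dkms autoinstall are standard dkms subcommands; which modules exist on this box is unknown.)

      # list DKMS modules and which kernels they are built for
      dkms status
      # rebuild any missing modules against the running kernel
      dkms autoinstall -k $(uname -r)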
  6. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    Looking at the bug tracker, there hasn't been any update on this for 22 days. Is there a walkthrough on how to create new pools and move data to the new pool in a production environment? I was hoping there would be an update to Proxmox to have this fixed, but it seems some manual intervention is...
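
    (Not an official walkthrough, but the usual building blocks look roughly like this; the pool name, storage name, VM ID, and disk below are made up for illustration.)

      # create a new RBD pool and register it as a PVE storage in one step
      pveceph pool create newpool --add_storages
      # move one VM disk onto the new storage, dropping the old copy afterwards
      qm move-disk 100 scsi0 newpool --delete 1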
  7. Yet another CEPH tuning question (comparing to dell san)

    @alexskysilk Thanks for all your help with this project; there's a bunch to take away from this.
  8. Yet another CEPH tuning question (comparing to dell san)

    Ahhhh, lightbulb moment: so Ceph is trying to do Ceph traffic on the same network the nodes are using to serve the VMs. Never noticed that in the config and never thought about it either. How hard is that to change? Get rid of the active/backup network, configure the 2nd NIC on...
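
    (A sketch of what the change usually amounts to, assuming a spare subnet is available; the 10.10.10.0/24 network below is hypothetical, while cluster_network and public_network are standard Ceph options.)

      # /etc/pve/ceph.conf: split replication traffic off the VM-facing network
      [global]
          cluster_network = 10.10.10.0/24
          public_network  = 192.168.1.0/24

      # then restart OSDs one node at a time so they rebind to the new network
      systemctl restart ceph-osd.target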
  9. Yet another CEPH tuning question (comparing to dell san)

    This is all pretty much the defaults that Proxmox/Ceph creates:

      # cat /etc/pve/ceph.conf
      [global]
          auth_client_required = cephx
          auth_cluster_required = cephx
          auth_service_required = cephx
          cluster_network = 192.168.1.81/24
          fsid =...
  10. lrm node1 (old timestamp - dead?)

    Boooo, I thought I fixed that. We have a WHMCS plugin for Proxmox that, for some reason, writes the VM's hostname (whatever the client puts in the panel) into the /etc/hosts file, and it fricks things up on reboot... Need to talk to their support about that... Thanks!
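
    (For reference, a sane /etc/hosts on a PVE node keeps the node's own name pointing at its real address; the IP and domain below are guesses, only the hostname is from this thread.)

      127.0.0.1       localhost
      192.168.1.81    pve1-cpu1.example.local pve1-cpu1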
  11. Yet another CEPH tuning question (comparing to dell san)

    @alexskysilk I really appreciate this back and forth; diving into lots of good stuff here :-D I'm confused on your public/private network point. The two 40Gb InfiniBand ports (going to two different switches that are cross-connected) are in an active/backup config with the primary NIC being NIC 1...
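
    (For context, an active/backup bond like that usually looks something like this in /etc/network/interfaces; the address and the second port name are guesses, and active-backup is generally the only bond mode that works for IPoIB.)

      auto bond0
      iface bond0 inet static
          address 192.168.1.81/24
          bond-slaves ibp5s0 ibp5s0d1
          bond-mode active-backup
          bond-primary ibp5s0
          bond-miimon 100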
  12. Yet another CEPH tuning question (comparing to dell san)

    Sorry, I should have specified: they are connected, but I could never get these NICs to talk faster than 20Gb. Both ports are:

      root@pve1-cpu1:~# ethtool ibp5s0
      Settings for ibp5s0:
              Supported ports: [ ]
              Supported link modes:   Not reported
              Supported pause frame use: No...
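
    (ethtool often reports almost nothing for IPoIB interfaces; ibstat, from the infiniband-diags package, reads the port's negotiated rate and width directly. If it shows Rate: 20 the link has negotiated DDR rather than QDR, which would match the 20Gb ceiling; even a clean 40Gb QDR link tops out around 32Gb/s of data because of 8b/10b encoding.)

      # show the active link rate/width for each InfiniBand port
      ibstat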
  13. lrm node1 (old timestamp - dead?)

      Feb 28 11:36:57 pve1-cpu1 corosync[1891]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
      Feb 28 11:36:57 pve1-cpu1 corosync[1891]: [KNET ] pmtud: Global data MTU changed to: 1397
      Feb 28 11:36:58 pve1-cpu1 corosync[1891]: [KNET ] rx: host: 4 link: 0 is up
      Feb 28...
  14. lrm node1 (old timestamp - dead?)

    So I restarted the LRM service, HA came online, and it migrated no problem. Did system updates, rebooted the node, and now it's dead again. Based on these logs, any idea?

      Feb 28 11:36:44 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
      Feb 28 11:36:50 pve1-cpu1 pmxcfs[1740]...
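
    (For the record, the restart itself is just the standard unit; pve-ha-lrm is the real service name on PVE nodes.)

      # restart the HA local resource manager and check it came back
      systemctl restart pve-ha-lrm
      systemctl status pve-ha-lrm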
  15. lrm node1 (old timestamp - dead?)

    I restarted the LRM service and the migrations started flowing again, no problem. Would those logs mainly be under /var/log/syslog? Thanks,
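
    (On current PVE installs the journal is usually quicker than digging through /var/log/syslog; the units below are the standard HA and cluster services.)

      # HA manager logs since the last boot
      journalctl -b -u pve-ha-lrm -u pve-ha-crm
      # corosync / cluster filesystem messages
      journalctl -b -u corosync -u pve-cluster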
