Search results

  1.

    [SOLVED] Odd RBD Feature

    My whole cluster is on the following: root@cephmon:~# ceph -v ceph version 16.2.9 (a569859f5e07da0c4c39da81d5fb5675cd95da49) pacific (stable). This entire cluster started on Nautilus, so I would think I should be OK.
  2.

    [SOLVED] Odd RBD Feature

    We have a Ceph cluster with roughly 600 RBDs. Two of the 600 randomly have a new feature which is breaking our backups. root@cephmon:~# rbd info Cloud-Ceph1/vm-134-disk-0 rbd image 'vm-134-disk-0': size 1000 GiB in 256000 objects order 22 (4 MiB objects)...
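    Not from the thread itself, but a hedged sketch of how one might scan for the stray feature: `rbd info` prints a `features:` line that can be extracted and compared across images. The sample output below is illustrative (modelled on the snippet above), not captured from the cluster in question.

    ```shell
    # Illustrative `rbd info` output; on a real cluster you would pipe the
    # actual command instead:  rbd info Cloud-Ceph1/vm-134-disk-0
    info="rbd image 'vm-134-disk-0':
      size 1000 GiB in 256000 objects
      order 22 (4 MiB objects)
      features: layering, exclusive-lock, object-map, fast-diff, deep-flatten"

    # Pull out just the feature list so it can be diffed across all ~600 images.
    features="$(printf '%s\n' "$info" | sed -n 's/^[[:space:]]*features: //p')"
    echo "$features"
    ```

    If the unexpected flag turns out to be something removable like `journaling`, Ceph's `rbd feature disable <pool>/<image> <feature>` can usually drop it again; verify against the rbd man page for your release first.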
  3.

    Skip External VM's on Backup

    I have a small 2-node cluster for some random VMs that had a storage issue last night. I was hoping to utilize my VM backups, but found some VMs are being skipped during the backup job with the following. What would make these VMs external? This was all working about 2 months ago, then it...
  4.

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    I just found out that it's actually happening in all situations with live migration. I agree, start/stop is great, but we have a cluster with almost 600 VMs and depend on live migration heavily for uptime. We have a mix of Intel 2nd/3rd Gen Xeons. I can reproduce the issue going...
  5.

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    Wanted to report back. Did some more testing. Here is what I pinned down. - Slowness only happens on VMs which are migrated - A VM is fine if it's freshly started on a host running the newer 6.2.x kernel - I was in the process of upgrading packages and moving over to 6.2.x when I hit this bug...
  6.

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    Looks like the latest 6.2 kernel still has some major performance issues in our environment, just like 6.1. We are seeing a load increase of 30-50% on pretty much all VMs running on hosts with the 6.2.x kernel. If we go back to 5.15.x all is well. Check out the screenshot, our load has...
  7.

    5.15.102-1-pve fails to boot on HP DL 560

    The new kernel is working as expected. Appreciate it, guys!
  8.

    Intel E810 NIC's

    This ended up being a cable issue. However, it does look like these NICs don't allow Proxmox to boot with the latest 5.15 and 6.x kernels.
  9.

    5.15.102-1-pve fails to boot on HP DL 560

    It just so happens that this server also has Intel E810-based NICs (both 25G and 100G). They are working great with the ice drivers. Moving data as we speak over the E810 NICs on the 5.15.74-1-pve kernel... root@gppctestprox:~# lspci | grep Ethernet 11:00.0 Ethernet controller: Intel...
  10.

    5.15.102-1-pve fails to boot on HP DL 560

    It's a Gen10 with 4x Intel Gold 6254s. The BIOS is a little older but not too bad (2022); might be one new revision behind. After lots of testing I have narrowed it down to the NIC below. 13:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)...
  11.

    5.15.102-1-pve fails to boot on HP DL 560

    That's it. It was a brand-new install as well. It would sit for 5-10 minutes, then it would print those hung-task messages and that was it. I let it sit for 10+ minutes, but there was nothing else after that. I reinstalled a 2nd time and hit the same issue. If there is anything else I can provide...
  12.

    5.15.102-1-pve fails to boot on HP DL 560

    The 5.15.102-1-pve kernel fails to boot on HP DL 560s. I haven't had a chance to test other hardware, but it definitely has issues on the 560. Hoping this kernel doesn't get released to the enterprise repos, as we have a LOT of 560s in production. Attached a screenshot of what happens...
  13.

    Intel E810 NIC's

    Anyone else using any of these E810-based NICs (25G & 100G)? They are using the ice driver. They show up in Proxmox and look OK, but I can't for the life of me get them to light up. Nothing in dmesg or any of the logs about transceiver mismatches. Figured I'd see if anyone else is...
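    A hedged diagnostic sketch, not from the thread: `ip -br link` gives a quick per-port carrier summary, and ports stuck DOWN are the ones worth probing further with `ethtool` and `dmesg | grep -i ice`. The interface names and MAC addresses below are invented for illustration.

    ```shell
    # Illustrative `ip -br link` output (names/MACs invented); on the host,
    # run the real command:  ip -br link
    links="lo               UNKNOWN  00:00:00:00:00:00
    enp17s0f0np0     DOWN     b4:96:91:aa:bb:cc
    enp17s0f1np1     UP       b4:96:91:aa:bb:cd"

    # Ports with no carrier; follow up on each with `ethtool <iface>` and
    # `dmesg | grep -i ice` for transceiver/link-negotiation clues.
    down="$(printf '%s\n' "$links" | awk '$2 == "DOWN" { print $1 }')"
    echo "$down"
    ```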
  14.

    New tool: pmmaint

    A lot of this can be done with groups as-is.
  15.

    5.15.x Kernel and Issues

    These are Gen2 Intels, which from what I understand are not affected.
  16.

    5.15.x Kernel and Issues

    Ended up running into a significant CPU performance issue on the 6.1 kernel. Looks like we will be forced back to 5.13.x at this point. What a mess.
  17.

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Per t.lamprecht: those are in 5.15 too, since the end of July, i.e. pve-kernel-5.15.39-2.
  18.

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    They are Gen2s: Intel(R) Xeon(R) Gold 6254 CPU @ 3.10GHz.
  19.

    Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

    Anyone else notice some major performance changes going from 5.15.x -> 6.1.x? Upgraded one of my heavy-hitter front ends (quad-socket DL 560 Gen10) and now we are seeing a major load increase. CPU load has almost doubled. Going back to 5.15.x has corrected the issue, but with live...
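    Not mentioned in the thread, but for staying on a known-good kernel across reboots, Proxmox VE 7 ships `proxmox-boot-tool`, which can pin a specific version. This is only a sketch; the version string below is an example, not one confirmed by the poster.

    ```shell
    # Sketch only: keep the host on the known-good 5.15 series after rolling
    # back from 6.1.x. Requires a Proxmox VE 7 host.
    proxmox-boot-tool kernel list                # show installed kernels
    proxmox-boot-tool kernel pin 5.15.83-1-pve   # example version string
    ```

    Once a fixed kernel lands, `proxmox-boot-tool kernel unpin` restores the default boot order.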
  20.

    5.15.x Kernel and Issues

    Just wanted to mention: while getting some front ends moved to 6.1.x, I hit VM lockups on every VM going from a DL 560 Gen10 on 5.15.83 to a DL 560 Gen10 on 6.1.2-1. Both servers are identical. Once both were on 6.1.2-1, all was well. Appears to be more than just a generation issue...
