Search results

  1. KSM Memory sharing not working as expected on 6.2.x kernel

    @fiona @aaron Any luck on the new dual socket server for testing?
  2. Proxmox VE 8.0 released!

    Anyone else having issues with the 6.2.x kernel and new Intel 4th gen servers? https://forum.proxmox.com/threads/6-2-x-kernel-issues-on-new-hardware.132890/
  3. KSM Memory sharing not working as expected on 6.2.x kernel

    Back to 5.13.x I go. Between 5.15.x live migration issues and 6.2.x KSM issues, this is rough for production.
  4. KSM Memory sharing not working as expected on 6.2.x kernel

    We do have the pdpe1gb feature enabled for all of our guests. Ours are VMs as well. Definitely something major changed in the kernel for these environments.
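
    A minimal sketch of how that flag is typically enabled per guest on Proxmox VE, assuming a hypothetical VM 100 using the host CPU type (the VM ID is illustrative):

      qm set 100 --cpu host,flags=+pdpe1gb

    The flag only takes effect after a full stop/start of the guest.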
  5. [SOLVED] Disk Moves GUI vs CLI differences

    Does that mean I shouldn't use io_uring for Async IO at all? The setting in the Proxmox GUI itself? It seems my only real options are:
    - Change all VMs' Async IO from the default io_uring to Native or Threads
    - Shut down and start all VMs
    - Then I can disk-mirror all the disks to the new...
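
    A minimal sketch of that sequence, assuming a hypothetical VM 100 whose scsi0 disk lives on a storage named local-lvm (VM ID, volume, and storage names are illustrative):

      # keep the drive's other options from 'qm config 100' and change only aio=
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native
      qm shutdown 100 && qm start 100
      qm move_disk 100 scsi0 <target-storage>

    Note that aio=native expects a direct-I/O cache mode (none/directsync), while aio=threads works with any cache mode, so threads may be the safer interim choice.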
  6. KSM Memory sharing not working as expected on 6.2.x kernel

    That same front end I posted would only get roughly 20G of KSM on 6.2.x.
  7. KSM Memory sharing not working as expected on 6.2.x kernel

    Went back to the 5.13.19-6 kernel and KSM is like night and day. Not even up an hour and already at 100G+ of KSM sharing.
  8. [SOLVED] Disk Moves GUI vs CLI differences

    Is my only option to set all 600 VMs to native or threads, move the disks, then change them all back to io_uring? This would require two sets of reboots for 600 VMs as well.
  9. [SOLVED] Disk Moves GUI vs CLI differences

    Just so I understand the issue better @aaron: is io_uring OK on top of iSCSI+LVM type storage? Or is it the disk mirror process, running between the two storages, that can cause the bug? I noticed that if the VM is powered down I can do the disk move and it still uses io_uring.
  10. [SOLVED] Disk Moves GUI vs CLI differences

    Can confirm that restarting pveproxy and pvedaemon made the GUI behave like the API/CLI. Should I be looking to no longer use io_uring with iSCSI-based disks?
  11. [SOLVED] Disk Moves GUI vs CLI differences

    Looks like the same issue. root@ccscloud3:~# pvesh create nodes/ccscloud3/qemu/173/move_disk --disk scsi2 --storage Cloud-Udarchive1 create full clone of drive scsi2 (Cloud-Ceph1:vm-173-disk-0) storage migration failed: target storage is known to cause issues with aio=io_uring (used by current...
  12. [SOLVED] Disk Moves GUI vs CLI differences

    Hey all, I am in the process of moving disks residing on Ceph over to iSCSI-based storage. If I use the GUI to move the disk, all is well and it's successful. If I use the CLI, it fails with the following: root@ccscloud3:~# qm move_disk 119 scsi3 Cloud-Udarchive1 create full clone of drive...
  13. 6.2.x Kernel Issues on New Hardware

    We just got one of these bad boys in. https://www.supermicro.com/en/products/system/mp/2u/sys-241e-tnrttp Has 4x Intel(R) Xeon(R) Gold 6448H CPUs. Latest BIOS and firmware. When using 6.2.16-4-bpo11-pve the server boots up with the following messages in dmesg: [Tue Aug 29 05:37:56...
  14. KSM Memory sharing not working as expected on 6.2.x kernel

    Here it is: Unpacking pve-qemu-kvm (7.2.0-8) over (7.1.0-4). I did not have a chance to reboot the front end back to 5.13.x this week; hoping to get to it early next week.
  15. KSM Memory sharing not working as expected on 6.2.x kernel

    I actually have updated all my front ends, just haven't rebooted some of them for the changes to take effect. Is there an easy way to figure out what version they are running right now versus what was previously installed?
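
    One quick way to check, assuming the standard Proxmox tooling: compare the booted kernel with what is installed, e.g.

      uname -r
      pveversion -v | grep -E 'running kernel|pve-kernel'

    A front end that was updated but not yet rebooted will list a newer pve-kernel package than the kernel reported as running.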
  16. KSM Memory sharing not working as expected on 6.2.x kernel

    I did do some package updates when I moved to the newer kernel. It's worth me testing going back to 5.13.x to see how it runs. I was able to get it to do a little better by bumping the following settings: KSM_NPAGES_BOOST=500000000 KSM_NPAGES_MIN=2000000000 KSM_NPAGES_MAX=3000000000 I...
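
    For reference, those knobs live in /etc/ksmtuned.conf on a stock Proxmox VE install; a minimal sketch using the values quoted above (quoted from the post, not a recommendation):

      # /etc/ksmtuned.conf
      KSM_NPAGES_BOOST=500000000
      KSM_NPAGES_MIN=2000000000
      KSM_NPAGES_MAX=3000000000

      systemctl restart ksmtuned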
  17. KSM Memory sharing not working as expected on 6.2.x kernel

    Here is the system where KSM is broken. root@ccscloud4:~# for i in /sys/kernel/mm/ksm/*; do echo "$i:"; cat $i; done /sys/kernel/mm/ksm/full_scans: 7635 /sys/kernel/mm/ksm/max_page_sharing: 256 /sys/kernel/mm/ksm/merge_across_nodes: 1 /sys/kernel/mm/ksm/pages_shared: 447156...
  18. KSM Memory sharing not working as expected on 6.2.x kernel

    Got it. root@ccscloud4:~# pveversion -v proxmox-ve: 7.4-1 (running kernel: 6.2.16-4-bpo11-pve) pve-manager: 7.4-16 (running version: 7.4-16/0f39f621) pve-kernel-6.2: 7.4-4 pve-kernel-5.15: 7.4-4 pve-kernel-5.13: 7.1-9 pve-kernel-6.2.16-4-bpo11-pve: 6.2.16-4~bpo11+1 pve-kernel-6.2.11-2-pve...
  19. KSM Memory sharing not working as expected on 6.2.x kernel

    On the latest packages and kernel. root@ccscloud4:~# pveversion pve-manager/7.4-16/0f39f621 (running kernel: 6.2.16-4-bpo11-pve) ksmtuned.conf was adjusted for when it kicks in (KSM_THRES_COEF=50); other than that it's pretty basic. root@ccscloud4:~# pveversion pve-manager/7.4-16/0f39f621...
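
    For context, KSM_THRES_COEF is also set in /etc/ksmtuned.conf; with the stock ksmtuned, a value of 50 means merging only starts once free memory drops below roughly 50% of total RAM:

      # /etc/ksmtuned.conf
      KSM_THRES_COEF=50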
