Search results

  1. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    Bummer. Tested 6.5 on one of my new SuperMicro front ends with 4x Intel Xeon Gold 6448H. VM locks up under load with CPUs stuck. I do run ZFS on root with 2 Micron 5400 Pros. Server: https://www.supermicro.com/en/products/system/mp/2u/sys-241e-tnrttp VM storage is on HPE Alletra NVMe...
  2. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    Nice work guys, very eager to see if this solves a lot of the issues we have going on with KSM and performance. I will be reporting back!
  3. KSM Memory sharing not working as expected on 6.2.x kernel

    IMO most of us running large real production clusters are having too many issues on any of the 5.15.x and 6.2.x kernels. It's been a mess. KSM is and always has been a solid tool; I would have known far sooner about KSM if I didn't move to 5.15.x and realize that live migration was completely...
  4. KSM Memory sharing not working as expected on 6.2.x kernel

    What can be done to prevent this in the future? KSM is critical to production for so many environments. IMO PVE8 isn't production ready with this issue.
  5. KSM Memory sharing not working as expected on 6.2.x kernel

    @fiona @aaron Any luck on the new dual socket server for testing?
  6. Proxmox VE 8.0 released!

    Anyone else having issues with the 6.2.x kernel and new Intel 4th gen servers? https://forum.proxmox.com/threads/6-2-x-kernel-issues-on-new-hardware.132890/
  7. KSM Memory sharing not working as expected on 6.2.x kernel

    Back to 5.13.x I go. Between 5.15.x live migration issues and 6.2.x KSM issues, this is rough for production.
  8. KSM Memory sharing not working as expected on 6.2.x kernel

    We do have the pdpe1gb feature enabled for all of our guests. Ours are VMs as well. Definitely something major changed in the kernel for these environments.
  9. [SOLVED] Disk Moves GUI vs CLI differences

    Does that mean I shouldn't use io_uring for Async IO at all? The setting in the Proxmox GUI itself? It seems my only real options are: - Change all VMs' Async IO from the default io_uring to Native or Threads - Shut down and start all VMs - Then I can disk-mirror all the disks to the new...
  10. KSM Memory sharing not working as expected on 6.2.x kernel

    That same front end I posted would only get roughly 20G of KSM on 6.2.x.
  11. KSM Memory sharing not working as expected on 6.2.x kernel

    Went back to the 5.13.19-6 kernel and KSM is like night and day. Not even up an hour and already at 100G+ of KSM sharing.
  12. [SOLVED] Disk Moves GUI vs CLI differences

    Is my only option to set all 600 VMs to native or threads, move the disks, then change them all back to io_uring? This would require 2 sets of reboots for 600 VMs as well.
  13. [SOLVED] Disk Moves GUI vs CLI differences

    Just so I understand the issue better @aaron: Is io_uring OK on top of iSCSI+LVM type storage? Or is it the disk-mirror process running between the two storages that can cause the bug? I noticed that if the VM is powered down I can do the disk move and it still uses io_uring.
  14. [SOLVED] Disk Moves GUI vs CLI differences

    Can confirm that restarting pveproxy and pvedaemon made the GUI behave like the API/CLI. Should I no longer use io_uring with iSCSI-based disks?
  15. [SOLVED] Disk Moves GUI vs CLI differences

    Looks like the same issue. root@ccscloud3:~# pvesh create nodes/ccscloud3/qemu/173/move_disk --disk scsi2 --storage Cloud-Udarchive1 create full clone of drive scsi2 (Cloud-Ceph1:vm-173-disk-0) storage migration failed: target storage is known to cause issues with aio=io_uring (used by current...
  16. [SOLVED] Disk Moves GUI vs CLI differences

    Hey all, I am in the process of moving disks residing on Ceph over to iSCSI-based storage. If I use the GUI to move the disk, all is well and it's successful. If I use the CLI, it fails with the following. root@ccscloud3:~# qm move_disk 119 scsi3 Cloud-Udarchive1 create full clone of drive...
  17. 6.2.x Kernel Issues on New Hardware

    We just got one of these bad boys in: https://www.supermicro.com/en/products/system/mp/2u/sys-241e-tnrttp Has 4x Intel(R) Xeon(R) Gold 6448H CPUs. Latest BIOS and firmware. When using 6.2.16-4-bpo11-pve the server boots up with the following messages in dmesg. [Tue Aug 29 05:37:56...
  18. KSM Memory sharing not working as expected on 6.2.x kernel

    Here it is. Unpacking pve-qemu-kvm (7.2.0-8) over (7.1.0-4) I did not have a chance to reboot the front end back to 5.13.x this week, hoping to get to it early next week.
  19. KSM Memory sharing not working as expected on 6.2.x kernel

    I actually have updated all my front ends, just haven't rebooted some of them for the changes to take effect. Is there an easy way to figure out which version they are running right now and which was previously installed?
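The last result asks how to tell which kernel a node is actually running versus what is installed. On a PVE node the booted kernel comes from `uname -r` and the installed kernel packages from `dpkg -l 'pve-kernel-*'`; a minimal standalone sketch of that comparison, using sample version strings since the real values come from the node:

```shell
# Sketch: warn when the booted kernel differs from the newest installed one.
# Real nodes: running=$(uname -r); newest from dpkg -l 'pve-kernel-*'.
running="5.13.19-6-pve"     # sample: kernel currently booted
newest="6.2.16-4-pve"       # sample: newest installed pve-kernel package
if [ "$running" != "$newest" ]; then
  echo "reboot pending: running $running, newest installed $newest"
fi
```

`pveversion -v` on the node also lists the kernel-related package versions in one place.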
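Several results above quote KSM totals ("20G", "100G+"). Those figures derive from KSM's page counters; a minimal sketch of the conversion, assuming 4 KiB pages and a sample counter value (on a live node you would read `/sys/kernel/mm/ksm/pages_sharing` instead):

```shell
# Sketch: KSM pages_sharing counter -> GiB shared, assuming 4 KiB pages.
# On a live PVE node: pages_sharing=$(cat /sys/kernel/mm/ksm/pages_sharing)
pages_sharing=26214400      # sample value for illustration
echo "$(( pages_sharing * 4096 / 1024 / 1024 / 1024 )) GiB shared"
```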
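For the io_uring workaround discussed in the "Disk Moves" thread, the per-disk Async IO mode can be changed with `qm set`. A sketch using the VM ID and volume quoted in result 15 (adjust both for your setup); note that a changed `aio` setting only takes effect once the QEMU process is stopped and started again:

```shell
# Sketch: switch one disk's Async IO mode from io_uring to native.
# VM ID 173 and the volume name are taken from the snippets above.
qm set 173 --scsi2 Cloud-Ceph1:vm-173-disk-0,aio=native
qm config 173 | grep scsi2    # verify the setting on the disk line
```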