Anyone else having issues with the 6.2.x kernel on new servers with 4th gen Intel Xeon CPUs?
https://forum.proxmox.com/threads/6-2-x-kernel-issues-on-new-hardware.132890/
We do have the pdpe1gb CPU flag enabled for all of our guests. Ours are VMs as well.
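For anyone wanting to check their own guests, this is roughly how the flag can be verified or set from the CLI (just a sketch; VMID 100 is an example, and it assumes the host CPU type):

# check whether the flag is already in the guest config (100 is an example VMID)
qm config 100 | grep '^cpu'

# enable it from the CLI (same effect as ticking the pdpe1gb CPU flag in the GUI)
qm set 100 --cpu host,flags=+pdpe1gb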
Definitely something major changed in the kernel for these environments.
Does that mean I shouldn't use io_uring for Async IO at all, i.e. the setting in the Proxmox GUI itself?
It seems my only real options are:
- Change all VMs' Async IO setting from the default (io_uring) to native or threads
- Shut down and start all VMs
- Then I can disk-mirror all the disks to the new...
Is my only option to set all 600 VMs to native or threads, move the disks, then change them all back to io_uring? That would require two rounds of reboots for 600 VMs as well.
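For reference, the per-VM sequence being discussed would look roughly like this (a sketch only; VMID 119, scsi3, the Cloud-Ceph1 volume name and size are example values, and qm set rewrites the whole drive line, so any existing options shown by qm config have to be repeated):

# look at the current drive line so its options can be preserved
qm config 119 | grep '^scsi3'

# switch the drive from the io_uring default to native, re-stating the existing options
qm set 119 --scsi3 Cloud-Ceph1:vm-119-disk-0,size=100G,aio=native

# full stop/start so the new aio setting actually takes effect
qm shutdown 119 && qm start 119

# the live disk move to the iSCSI/LVM storage should now be allowed
qm move_disk 119 scsi3 Cloud-Udarchive1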
Just so I understand the issue better, @aaron:
Is io_uring OK on top of iSCSI+LVM-type storage?
Or is it the disk-mirror process running between the two storages that can trigger the bug?
I noticed that if the VM is powered down I can do the disk move and it still uses io_uring.
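In other words, something like this seems to go through fine (sketch; VM 173 / scsi2 used as example IDs):

# with the guest powered off, the move completes and the drive keeps aio=io_uring
qm shutdown 173
qm move_disk 173 scsi2 Cloud-Udarchive1
qm start 173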
Can confirm that restarting pveproxy and pvedaemon made the GUI behave the same way as the API/CLI.
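For completeness, that was just:

# restart the Proxmox API daemon and web proxy so both code paths behave the same
systemctl restart pvedaemon pveproxy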
Should I be looking at no longer using io_uring with iSCSI-based disks?
Looks like the same issue.
root@ccscloud3:~# pvesh create nodes/ccscloud3/qemu/173/move_disk --disk scsi2 --storage Cloud-Udarchive1
create full clone of drive scsi2 (Cloud-Ceph1:vm-173-disk-0)
storage migration failed: target storage is known to cause issues with aio=io_uring (used by current...
Hey all, I am in the process of moving disks residing on Ceph over to iSCSI-based storage.
If I use the GUI to move the disk, all is well and it's successful.
If I use the CLI, it fails with the following:
root@ccscloud3:~# qm move_disk 119 scsi3 Cloud-Udarchive1
create full clone of drive...
We just got one of these bad boys in.
https://www.supermicro.com/en/products/system/mp/2u/sys-241e-tnrttp
It has 4x Intel(R) Xeon(R) Gold 6448H CPUs.
Latest BIOS and firmware.
When using 6.2.16-4-bpo11-pve, the server boots up with the following messages in dmesg:
[Tue Aug 29 05:37:56...
Here it is.
Unpacking pve-qemu-kvm (7.2.0-8) over (7.1.0-4)
I did not have a chance to reboot the front end back to 5.13.x this week; I'm hoping to get to it early next week.
I have actually updated all my front ends; I just haven't rebooted some of them yet for the changes to take effect.
Is there an easy way to figure out what version they are running right now, or what was previously installed?
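In case it helps, this is roughly what can be checked on each node (a sketch; the grep pattern is just the packages I care about):

# versions currently installed / kernel currently running
pveversion -v

# what was installed before, from the node's dpkg and apt history
grep 'pve-qemu-kvm\|pve-kernel' /var/log/dpkg.log
zgrep 'pve-qemu-kvm\|pve-kernel' /var/log/apt/history.log*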
I did do some package updates when I moved to the newer kernel. It's worth me testing going back to 5.13.x to see how it runs.
I was able to get it to do a little better by bumping the following settings:
KSM_NPAGES_BOOST=500000000
KSM_NPAGES_MIN=2000000000
KSM_NPAGES_MAX=3000000000
I...
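For anyone wanting to try the same, these values go in /etc/ksmtuned.conf on the node (sketch; assumes the stock ksmtuned service, which has to be restarted to pick the changes up):

# /etc/ksmtuned.conf
KSM_NPAGES_BOOST=500000000
KSM_NPAGES_MIN=2000000000
KSM_NPAGES_MAX=3000000000

# apply the change
systemctl restart ksmtuned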
Here is the system where KSM is broken.
root@ccscloud4:~# for i in /sys/kernel/mm/ksm/*; do echo "$i:"; cat $i; done
/sys/kernel/mm/ksm/full_scans:
7635
/sys/kernel/mm/ksm/max_page_sharing:
256
/sys/kernel/mm/ksm/merge_across_nodes:
1
/sys/kernel/mm/ksm/pages_shared:
447156...
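A quick way to sanity-check how much memory KSM is actually deduplicating is to multiply pages_sharing by the page size (rough sketch, assumes the page size reported by getconf):

# approximate memory currently deduplicated by KSM, in MiB
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * $(getconf PAGE_SIZE) / 1024 / 1024 )) MiB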
On the latest packages and kernel.
root@ccscloud4:~# pveversion
pve-manager/7.4-16/0f39f621 (running kernel: 6.2.16-4-bpo11-pve)
ksmtuned.conf was adjusted for when it kicks in (KSM_THRES_COEF=50); other than that it's pretty basic.