Recent content by jic5760

  1. Windows 11 guest nested virtualization not working

    Same problem. Is there any workaround? I tried several combinations of the hv_* flags, but it didn't work.
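
    A rough sketch of the usual prerequisites for nested virtualization on an Intel Proxmox host, assuming the kvm_intel module and a guest with CPU type host (<vmid> is a placeholder); treat it as a starting point, not a confirmed fix for this thread:

    ```
    # On the Proxmox host: check whether nested virtualization is enabled
    cat /sys/module/kvm_intel/parameters/nested     # should print Y or 1

    # Enable it persistently (Intel; use kvm_amd / kvm-amd.conf on AMD hosts)
    echo "options kvm_intel nested=Y" > /etc/modprobe.d/kvm_intel.conf
    # Reload the module after shutting down all VMs
    modprobe -r kvm_intel && modprobe kvm_intel

    # The guest normally needs CPU type "host" so virtualization extensions are passed through
    qm set <vmid> --cpu host
    ```
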
  2. Windows 10 VM So Slow when cpu type is host

    I have already read and followed that article. Both VMs have the same Windows revision and the same VirtIO drivers installed. As you can see in the configuration file, VirtIO SCSI single is used.
  3. Windows 10 VM So Slow when cpu type is host

    The Windows 10 VM is very slow on Proxmox; even dragging a window lags. I have two VMs with similar settings, but the second VM is not slow. The difference is that the slow VM's cpu type is host, while the fast VM's cpu type is qemu64. VM1: agent: 1 balloon: 0 bios: ovmf boot...
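
    A minimal sketch for comparing the two VMs and doing a quick A/B test on the CPU type (the VMIDs 101 and 102 are hypothetical examples):

    ```
    # Dump both VM configs and compare them side by side
    qm config 101 > /tmp/vm1.conf
    qm config 102 > /tmp/vm2.conf
    diff /tmp/vm1.conf /tmp/vm2.conf

    # Temporarily switch the slow VM to qemu64 to confirm the CPU type is the cause
    qm set 101 --cpu qemu64
    ```
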
  4. IO is slow on the VM, but fast on the host.

    Thank you! Using virtio-blk has improved performance! config: ``` virtio1: datastore:vm-12205-disk-0,backup=0,cache=writeback,discard=on,iothread=1,size=120G ``` ``` # dd if=/dev/vda of=/dev/null bs=1M skip=2000 count=6000 iflag=direct,sync status=progress 6090129408 bytes (6.1 GB, 5.7 GiB)...
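
    For reference, a sketch of attaching the same volume as a virtio-blk disk from the CLI, mirroring the config line quoted above (storage name and VMID are taken from that config; the exact option set is illustrative):

    ```
    # Attach the existing volume as a virtio-blk disk with an IO thread
    qm set 12205 --virtio1 datastore:vm-12205-disk-0,cache=writeback,discard=on,iothread=1,backup=0
    ```
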
  5. IO is slow on the VM, but fast on the host.

    I know dd is not a "good benchmark", but I want to make sure that sequential performance is good. ``` agent: 1 balloon: 40960 bios: ovmf boot: order=scsi0;ide2;net0 cores: 16 cpu: host,flags=+aes efidisk0: local-lvm,efitype=4m,pre-enrolled-keys=1,size=4M ide2...
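
    If dd feels too crude, a hedged fio sketch for the same sequential-read check inside the guest; the device name and sizes are assumptions, so point it at the disk you actually want to test:

    ```
    # Sequential 1M reads, direct IO, read-only so the disk contents are not modified
    fio --name=seqread --filename=/dev/vda --rw=read --bs=1M \
        --ioengine=libaio --iodepth=16 --direct=1 --size=6G --readonly
    ```
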
  6. IO is slow on the VM, but fast on the host.

    The physical disks are NVMe and provisioned with LVM-thin. A thin volume assigned to a VM benchmarks very fast on the host, but inside the VM it is quite slow. Please advise on disk IO performance. VM options: ``` -drive...
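
    A minimal sketch for benchmarking the same thin volume directly on the host, assuming the default pve volume group and the disk name used later in this thread; adjust the LV path to your setup:

    ```
    # Find the thin volume backing the VM disk
    lvs | grep vm-12205-disk

    # Read it directly on the host with the same parameters used inside the guest
    dd if=/dev/pve/vm-12205-disk-0 of=/dev/null bs=1M count=6000 iflag=direct status=progress
    ```
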
  7. avx2 cause kernel panic

    After adding xsave it no longer panics. Good!
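
    Based on this thread, the working custom model appears to be the one from the earlier post with xsave added; a sketch of /etc/pve/virtual-guest/cpu-models.conf under that assumption:

    ```
    cpu-model: avx
        # xsave added per this thread; without it the guest kernel panicked at boot
        flags +avx;+avx2;+xsave
        phys-bits host
        hidden 0
        hv-vendor-id proxmox
        reported-model kvm64
    ```
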
  8. avx2 cause kernel panic

    I just configured the CPU as the custom avx model and booted. The OS was Ubuntu 20.04/22.04. It has nothing to do with MongoDB, since the kernel panic happens during the boot process. It works when the cpu is set to host. As leesteken said, it might be solvable with an extra flag; I just don't know...
  9. Feature Suggestion: AVX/AVX2 CPU flags

    Theoretically, if you type it exactly like that, the avx flag is added on top of the existing kvm64 model. But this didn't work for me; a kernel panic occurs. https://forum.proxmox.com/threads/avx2-cause-kernel-panic.115206/
  10. avx2 cause kernel panic

    cpu-models.conf:
    ```
    cpu-model: avx
        flags +avx;+avx2
        phys-bits host
        hidden 0
        hv-vendor-id proxmox
        reported-model kvm64
    ```
    Adding the avx2 option causes a kernel panic. Ubuntu 20.04 and 22.04 behave the same. Is there a bug in QEMU?
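
    For completeness, custom models defined in cpu-models.conf are referenced with a custom- prefix in the VM config; a minimal usage sketch (<vmid> is a placeholder):

    ```
    # Assign the custom model to a VM; Proxmox expects the "custom-" prefix
    qm set <vmid> --cpu custom-avx
    # Resulting line in /etc/pve/qemu-server/<vmid>.conf:
    #   cpu: custom-avx
    ```
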
  11. Feature Suggestion: AVX/AVX2 CPU flags

    We can change the CPU type to host to use AVX, but live migration is difficult in that case. AVX is supported by most server CPUs. Like aes, it would be nice to be able to enable AVX while using kvm64. For example, MongoDB requires AVX from version 5 onward.
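
    A quick check from inside a guest to see whether AVX is actually exposed; this is generic Linux, nothing Proxmox-specific assumed:

    ```
    # Prints "avx" if the guest CPU advertises it, otherwise reports that it is missing
    grep -m1 -ow avx /proc/cpuinfo || echo "AVX not exposed to this guest"
    ```
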
  12. The VM dies with no logs.

    Are there any recent kernel or qemu modifications related to this?
  13. notice: RRDC/RRD update error

    Huh... I shut down pvestatd on the node named in the error for a few hours and the error went away. However, this has happened before, and I expect it will happen again at some point. Note that all servers are synchronized with NTP, usually with an offset of less than 0.0001 seconds.
  14. notice: RRDC/RRD update error

    There is only one pvestatd process running on each node.
  15. notice: RRDC/RRD update error

    This error message keeps filling up the disk. Killing pvestatd on node-01 stops the error, but that is not a solution. Feb 9 13:30:38 pve-node-02112 pmxcfs[1788890]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/node-01/local...
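
    A commonly suggested cleanup for persistent RRD update errors is to stop the stats collector, move the stale RRD data for the affected node aside, and restart; a hedged sketch, with the path taken from the error message above (back up before deleting anything):

    ```
    # On the node writing the errors
    systemctl stop pvestatd
    # Move the stale RRD data for node-01 out of the way; it will be recreated
    mv /var/lib/rrdcached/db/pve2-storage/node-01 /root/rrd-backup-node-01
    systemctl restart rrdcached pvestatd
    ```
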