Search results

  1. Windows 10 VM So Slow when cpu type is host

    I have already read and followed that article. Both VMs have the same Windows revision and the same virtio drivers installed. As you can see in the configuration file, VirtIO SCSI single is used.
  2. Windows 10 VM So Slow when cpu type is host

    Windows 10 VM is too slow on Proxmox; even dragging a window is slow. I have two VMs with similar settings, but the second VM is not slow. The difference is that the slow VM's cpu type is host, while the fast VM's cpu type is qemu64 (see the qm sketch after the results list). VM1:
    agent: 1
    balloon: 0
    bios: ovmf
    boot...
  3. IO is slow on the VM, but fast on the host.

    Thank you! Using virtio-blk has improved performance! Config:
    ```
    virtio1: datastore:vm-12205-disk-0,backup=0,cache=writeback,discard=on,iothread=1,size=120G
    ```
    ```
    # dd if=/dev/vda of=/dev/null bs=1M skip=2000 count=6000 iflag=direct,sync status=progress
    6090129408 bytes (6.1 GB, 5.7 GiB)...
    ```
  4. IO is slow on the VM, but fast on the host.

    I know dd is not a "good benchmark", but I want to make sure that sequential performance is good.
    ```
    agent: 1
    balloon: 40960
    bios: ovmf
    boot: order=scsi0;ide2;net0
    cores: 16
    cpu: host,flags=+aes
    efidisk0: local-lvm,efitype=4m,pre-enrolled-keys=1,size=4M
    ide2...
    ```
  5. IO is slow on the VM, but fast on the host.

    The physical disks are NVMe and provisioned with lvmthin. A thin volume assigned to a VM is very fast when benchmarked on the host, but inside the VM it is quite slow. Please advise on disk IO performance. VM options:
    ```
    -drive...
    ```
  6. avx2 cause kernel panic

    With xsave added, it no longer panics. Good!
  7. avx2 cause kernel panic

    I just configured the CPU with the added avx CPU model and booted. The OS was Ubuntu 20.04/22.04. It has nothing to do with MongoDB, as the kernel panic happens during the boot process. It works when the CPU is set to host. As leesteken said, it might be solvable with an extra flag. I just don't know...
  8. Feature Suggestion: AVX/AVX2 CPU flags

    Theoretically, if you type it exactly like that, the avx flag will be added on top of the existing kvm64. But this didn't work for me; a kernel panic occurs. https://forum.proxmox.com/threads/avx2-cause-kernel-panic.115206/
  9. avx2 cause kernel panic

    cpu-models.conf:
    cpu-model: avx
        flags +avx;+avx2
        phys-bits host
        hidden 0
        hv-vendor-id proxmox
        reported-model kvm64
    Adding the avx2 option causes a kernel panic. Both Ubuntu 20.04 and 22.04 behave the same. Is there a bug in QEMU?
  10. Feature Suggestion: AVX/AVX2 CPU flags

    We can change the CPU type to host to use AVX, but live migration is difficult in that case. AVX is supported by most server CPUs. Like aes, it would be nice to be able to use AVX while staying on kvm64 (see the custom CPU model sketch after the results list). For example, AVX is required by MongoDB version 5 and later.
  11. The VM dies with no logs.

    Are there any recent kernel or qemu modifications related to this?
  12. notice: RRDC/RRD update error

    Huh... I shut down pvestatd for a few hours on the node named in the error, and the error went away. However, this has happened before, and I expect it will happen again at some point. Note that all servers are synchronized with NTP, usually with an offset of less than 0.0001 seconds.
  13. notice: RRDC/RRD update error

    There is only one pvestatd running on every node.
  14. notice: RRDC/RRD update error

    This error message keeps filling up the disk. Killing pvestatd on node-01 stops the error, but that is not a solution (see the pvestatd/rrdcached sketch after the results list).
    Feb 9 13:30:38 pve-node-02112 pmxcfs[1788890]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-storage/node-01/local...
  15. The VM dies with no logs.

    First of all, with the smm=off option added there is no graphics output, and neither ssh nor ping works. There are many important VMs on the node where the problem occurred this time, so I did not check it; I will check later. The E5-2630 v3 is our latest hardware... :_( It's been working fine for...
  16. The VM dies with no logs.

    This is before editing: /usr/bin/kvm -id 41010 -name k8s-stor-node-02063 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/41010.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon...
  17. The VM dies with no logs.

    Following that comment, there is no graphics output, as shown below, and the VM does not start up normally. "Guest has not initialized the display (yet)."
  18. The VM dies with no logs.

    Sorry, found the log. There is plenty of free memory, and there is no OOM log anywhere.
    # cat kern.log.1
    Jan 25 12:01:56 XXX kernel: [1122677.458424] device tap41010i0 entered promiscuous mode
    Jan 25 12:01:56 XXX kernel: [1122677.528142] fwbr41010i0: port 1(tap41010i0) entered blocking state...
  19. The VM dies with no logs.

    There is no sign of a shutdown even from inside the VM, and it had to be force-stopped. Neither syslog nor kern.log has any entries. Is there any way to find out the cause? It will surely happen again.
    version: pve-manager/7.1-7/df5740ad (running kernel: 5.13.19-2-pve)
  20. `/var/lib/ceph/osd/ceph-<ID>/keyring` is gone.

    Solved:
    # /usr/sbin/ceph-volume-systemd lvm-{osd_id}-{lvm_name(?)}
    # systemctl start ceph-osd@{osd_id}
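
Results 1 and 2 above come down to the cpu setting in the VM config: host on the slow VM, qemu64 on the fast one. As a minimal sketch of checking and flipping that setting from the node's shell, assuming a hypothetical VM ID of 101, the following qm commands could be used; the new CPU type only takes effect once the VM has been shut down and started again.

```
# inspect the current CPU type (101 is a placeholder VM ID)
qm config 101 | grep '^cpu'

# switch to the generic model the fast VM uses, for comparison...
qm set 101 --cpu qemu64
# ...or back to host once the slowdown is understood
qm set 101 --cpu host

# shut down and start again so the pending CPU change is applied
qm shutdown 101 && qm start 101
```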
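
Results 6, 8, 9, and 10 together point at exposing AVX/AVX2 without setting the CPU type to host: a custom CPU model layered on kvm64. The fragment below is only a sketch assembled from those snippets; the model name avx and its fields are taken from result 9, and the extra +xsave flag is the fix result 6 reports for the kernel panic. Verify the flag set against your hardware (and every node you plan to live-migrate between) before relying on it.

```
# /etc/pve/virtual-guest/cpu-models.conf (cluster-wide custom CPU models)
cpu-model: avx
    flags +avx;+avx2;+xsave
    phys-bits host
    hidden 0
    hv-vendor-id proxmox
    reported-model kvm64
```

The model is then referenced from a VM config with the custom- prefix, e.g. cpu: custom-avx (or qm set <vmid> --cpu custom-avx).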
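
Results 12-14 silence the RRDC/RRD update errors by stopping pvestatd by hand. A hedged sketch of the same workaround using the systemd units involved (pvestatd and rrdcached), plus a clock-sync check, since these errors are often time-skew related; the RRD path is copied from the log line in result 14.

```
# on the node named in the error message
systemctl restart rrdcached.service pvestatd.service

# the files the error complains about (path taken from result 14)
ls -l /var/lib/rrdcached/db/pve2-storage/node-01/

# confirm the clock really is in sync, as result 12 assumes
timedatectl status
```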
