Search results

  1. QEMU 9.2 available on pvetest and pve-no-subscription as of now

    "Does QEMU 9.2.0 affect the CPU usage of Windows?" if you use the "host" profile -> NO But other Profiles should be updated either -> so 99,999% no either. "Does disabling HPET timer affect the performance of Windows virtual machines" Dunno, some years ago where Intel had broken timers...
  2. QEMU 9.2 available on pvetest and pve-no-subscription as of now

    Sorry, I misread, but I replied to you on the other thread.
  3. Ugreen dxp4800+ iGPU 99% success (Win Server 2025)

    The iGPU needs some firmware injected at boot:
      [drm] Finished loading DMC firmware i915/dg2_dmc_ver2_08.bin
      [drm] GT0: GuC firmware i915/dg2_guc_70.bin version 70.36.0
      [drm] GT0: HuC firmware i915/dg2_huc_gsc.bin version 7.10.16
    That may be the reason you're getting issues. I'm not sure if this happens...
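
    A quick way to check on your box whether those blobs actually loaded (a sketch; the grep pattern simply matches the firmware names above):

      dmesg | grep -i -E "dmc|guc|huc"   # should show the DMC/GuC/HuC load messages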
  4. IPv6 problems (maybe from proxmox!)

    Try disabling multicast snooping on the bridge. I need to do the same, for example, to get OSPF working properly; OSPF works without it too, but not reliably for me on OPNsense, for example.
      auto vmbr0
      iface vmbr0 inet static
          address 172.20.1.6/24
          gateway 172.20.1.1...
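
    A minimal sketch of that stanza with snooping disabled (the bridge-ports NIC name is an assumption; keep your own values, and apply with ifreload -a or a reboot):

      auto vmbr0
      iface vmbr0 inet static
          address 172.20.1.6/24
          gateway 172.20.1.1
          bridge-ports eno1      # assumed uplink NIC, use yours
          bridge-stp off
          bridge-fd 0
          bridge-mcsnoop 0       # disable multicast snooping on the bridge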
  5. QEMU 9.2 available on pvetest and pve-no-subscription as of now

    Oh, just for the sake of completeness: everywhere I use passthrough, I'm using q35. Actually, I'm using q35 for almost everything; I think there may be 2-3 i440fx VMs out of ~40-50. Back when I started passing through devices, it wasn't possible with i440fx; there was at that time some bug on FreeBSD...
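
    For reference, a sketch of switching a VM to the q35 machine type (the VM ID 100 is just a placeholder):

      qm set 100 --machine q35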
  6. Issue with INTEL X520 NIC Adapter

    PS, I forgot: can you change the error correction (FEC) mode on your switch? If yes, try switching between the modes and check; maybe the module comes up then. I had one case where changing the FEC brought a link up again.
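
    You can inspect and force the FEC mode from the Linux side as well; a sketch with ethtool (the interface name is a placeholder, and rs is just one of the possible encodings):

      ethtool --show-fec enp1s0f1np1               # current and supported FEC modes
      ethtool --set-fec enp1s0f1np1 encoding rs    # force Reed-Solomon FEC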
  7. Issue with INTEL X520 NIC Adapter

    ethtool --module-info enp1s0f1np1
      Identifier          : 0x03 (SFP)
      Extended identifier : 0x04 (GBIC/SFP defined by 2-wire interface ID)
      Connector           : 0x07 (LC)
      Transceiver codes ...
  8. SFP+ 10G network card for a new Proxmox setup

    We/I don't have any x520 cards anymore. However, if you want transceiver recommendations, that's hard, because almost everything has worked great here. For the most part, we use flashable transceivers from Flexoptix. Those are amazing, because you can reflash them to any brand you need. If it has to be...
  9. ZFS ARC using 50% of RAM

    If you added your new values to /etc/modprobe.d/zfs.conf (or any other file in modprobe.d), you need to update the initramfs and run update-grub or proxmox-boot-tool refresh (or simply both), then reboot. One of those steps you have clearly missed :-)
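
    A minimal sketch of the whole sequence, assuming you want to cap the ARC at 8 GiB (the size is just an example value, in bytes):

      echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
      update-initramfs -u -k all    # rebuild the initramfs with the new option
      proxmox-boot-tool refresh     # and/or refresh the boot entries
      reboot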
  10. Cannot update Windows 11 to 24H2 - CPU not supported

    Everyone should simply use "host" as the CPU type. The only exception is if you run a cluster and the nodes in the cluster don't all have the identical CPU generation. The funny thing is, I have only a single VM in one of my clusters that is not working with host as the CPU type and freezes during migration. On all...
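
    For reference, a sketch of setting that per VM (the VM ID 100 is just a placeholder):

      qm set 100 --cpu host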
  11. RDP issue after latest Windows Update on Windows 11 VM

    There is no fix. Disabling wallpapers, changing group policies, etc.; nothing will help. It's simply a bug. I have the same issue on all Windows 11 instances (physical and VMs) where you have more than one account on the PC/VM. There is no fix, and everyone has the same issue! BTW, this happens...
  12. HA + performance for cluster and migration

    Yeah, it does, but only in one case: if you migrate multiple VMs/containers at the same time. In real life that's almost never the case (at least I never do that); I usually migrate only one VM. If you shut down one server of a cluster that is in HA, it likewise migrates only one VM at a time...
  13. HA + performance for cluster and migration

    LACP is for redundancy, but not for performance. The only cool thing you can do is disable encryption for the cluster/migration network. That makes migration at least twice as fast, but I would only do it if you put that network in a separate, closed VLAN on the switches (so that it's still safe). I have a...
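
    A sketch of what disabling migration encryption looks like in /etc/pve/datacenter.cfg (the CIDR is a placeholder for your dedicated migration VLAN):

      migration: type=insecure,network=10.10.10.0/24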
  14. QEMU 9.2 available on pvetest and pve-no-subscription as of now

    It's funny; I've been using pvetest since forever and never had any issues or bugs. I've been using QEMU 9.2 for a week:
    - changed all Windows VMs to 9.2 / Linux VMs are on 9.2 automatically anyway
    - have some VMs with passthrough, for NICs and GPU
    And everything runs absolutely perfectly on all 11...
  15. NoVNC console mouse way off from local mouse

    I have the same issue on all 7 Proxmox servers, but only with W11 VMs (I think since the 24H2 update). On all W7/W10/Server 2016/2019/2022 VMs there aren't any issues and everything is perfect.
  16. Poor VM performance

    Can you try to pin all your VM cores to the same socket on your host? Check the topology with numactl --hardware, then pin so that all cores of VM1 run on socket 1, VM2 on socket 2, and so on. Don't forget to shut down/start the VM or LXC container after you change the config. Just for testing, to rule that out.
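
    A minimal sketch of what that pinning could look like (VM IDs and core ranges are placeholders; read your actual layout from numactl --hardware first):

      numactl --hardware           # list NUMA nodes and their CPU ranges
      qm set 101 --affinity 0-7    # pin VM 101's vCPUs to the first socket's cores
      qm set 102 --affinity 8-15   # pin VM 102 to the second socket's cores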
  17. Poor VM performance

    Do you use primarycache=metadata or something else on ZFS?
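
    A quick way to check that per dataset (the pool name rpool is a placeholder):

      zfs get -r primarycache rpool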
  18. DUAL Socket Systems

    Sure, I read that and tried almost everything on that AMD sheet. About Intel, I'm curious too, but as far as I've seen, the new Intel chiplet design uses some sort of silicon base for the interconnect, which is very expensive but very fast. So I suspect that it wouldn't be an issue on...
  19. DUAL Socket Systems

    numastat hits/misses don't tell the whole story, sadly; we've been through this already. I believe that the host kernel (numastat) cannot track anything inside a VM behind the guest kernel, but that's only an assumption so far. I knew I would start this stupid discussion again after my initial reply, but...
  20. DUAL Socket Systems

    This would be true if the kernel knew that the vCPUs (tasks) of a VM belong together. But in our case the kernel is not aware of that, and the vCPU tasks of a single VM start randomly spread across NUMA nodes. For example, if you configure the VM with 4 vCPUs, those tasks on the host won't...
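
    A sketch of how to see that spread for yourself (the VM ID 100 is a placeholder; PSR is the physical CPU each vCPU thread currently runs on):

      ps -T -o tid,psr,comm -p $(cat /var/run/qemu-server/100.pid)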