Search results

  1. M

    PVE 100%CPU on all kvm while vms are idle at 0-5% cpu

    Just let your package manager handle it for you: apt install pve-qemu-kvm=8.1.2-4. I recommend a reboot afterwards (a version-pin sketch follows after these results).
  2. M

    Full system encryption with network unlock?

    This makes literally no sense. You are afraid that somebody will break into your home and steal specifically your NUC because it looks expensive? In the next sentence you have no problem with your server running in a datacenter on hardware you probably don't even own and got your...
  3. M

    PVE 100%CPU on all kvm while vms are idle at 0-5% cpu

    The web interface is not broken; your VM is literally creating that load. I can prove it by the fact that my server under my desk started to scream because of the condition your screenshot shows. qm suspend <vmid> && qm resume <vmid> made it quiet again (a loop over all running VMs is sketched after these results).
  4. M

    Full system encryption with network unlock?

    What's the benefit of encrypting a system that probably runs 24/7? I did it as well until somebody explained to me how useless it is for something running 24/7, because everything is in memory anyway... I also gave up on ZFS this year because the resources required compared to the gain are not a good ratio.
  5. M

    Node crashes on migration of vm - host is PVE 8.0.4

    I give up. I destroyed the cluster and turned off my secondary node. The pvecm delnode command even sent the removed node to its grave, and because of that the removed node still thinks it is a member of the cluster.... It makes no sense to operate such a fragile setup. Normally updates with Proxmox run so...
  6. M

    PVE 100%CPU on all kvm while vms are idle at 0-5% cpu

    I can also confirm that the problem happens with direct_sync/native, so it does not seem to be bound to io_uring. This happened during a backup, but luckily the CPU is only at 50%, not 100%.
  7. M

    Node crashes on migration of vm - host is PVE 8.0.4

    I am really confused. When I use writethrough/threads my node crashes immediately as well. Only direct_sync/native works without crashing. The logs definitely indicate that the node just crashes without any notice.
  8. M

    Node crashes on migration of vm - host is PVE 8.0.4

    I have played a lot of ping pong now, and the node no longer crashes if io_uring is not used / is disabled in the VM. I am open to tips and instructions for debugging this issue.
  9. M

    Node crashes on migration of vm - host is PVE 8.0.4

    At least I no longer have any indication that my network is falling apart. The node is just "gone" and journalctl has nothing to offer. I am conducting some tests right now. Two VMs were reconfigured from no_cache/io_uring to direct_sync/native and I can throw them back and forth with no problems (a qm set example follows after these results). This is...
  10. M

    Node crashes on migration of vm - host is PVE 8.0.4

    Hi Fiona, thanks for your answer; I was able to figure out some things. First of all, I had a design flaw in my networking. One node has two onboard Gbit NICs and the other node has a dual-port Gbit PCIe card. Each NIC is assigned to a dedicated bridge (vmbr11, vmbr12; a sample layout is sketched after these results). Each bridge has a VLAN...
  11. M

    PVE 100%CPU on all kvm while vms are idle at 0-5% cpu

    I can confirm the issue as well, especially when I back up VMs. Since it has been reproduced, I guess a fix is just a matter of time?
  12. M

    Node crashes on migration of vm - host is PVE 8.0.4

    I can't find anything in the logs but have the same problem. Here is the migration log: Header Proxmox Virtual Environment 8.1.3 Virtual Machine 101801 (ip-10-1-80-1) on node 'ip-10-1-131-1' (migrate) No Tags Logs 2023-12-14 22:35:16 starting migration of VM 101801 to node 'ip-10-1-130-1'...
  13. M

    [SOLVED] Disk Error?

    USB is known to cause problems. I had my Proxmox OS disks connected to an internal USB 3.0 Y-port splitter and they lost the connection multiple times, which resulted in very interesting feelings provided by my kernel. Also, I once connected my 6 TB disk via USB to my Proxmox Backup VM and it also...
  14. M

    Looking for advice on how to configure ZFS with my drives and use case

    I can just tell you that my NM790 drives perform with zero issues. I can also hit 6 GB/s since they are connected to PCIe 4.0; even with my heavy encryption and a 16-core chip I reach a good 2 GB/s with ease. Also, you must be aware that Windows is not known to be a performance wonder by default...
  15. M

    VLAN between vm's and host's not working

    Ignore this thread. My issue is my ConnectX-3 network card and its VLAN limitation. Is it possible to enhance the network reload procedure to clarify this limitation for specific network cards?
  16. M

    Looking for advice on how to configure ZFS with my drives and use case

    Define unstable; I have had them running for a few months now and have had no issues yet. They are just one thing: hella fast. Linux localhost 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 GNU/Linux
  17. M

    EVPN SDN issues after Upgrade Proxmox VE from 7 to 8

    I have no time right now to analyze it further. I am sorry. Maybe it is working by now. Just ignore me.
  18. M

    AMD Ryzen 7000 servers ?

    Pardon me, but why do you have to replace your machine for a Proxmox upgrade? Just upgrade the host according to the upgrade documentation and you are fine. If you need advice for your rig, I need more information about your requirements.
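
A minimal sketch of the package pin from result 1. It assumes the fixed build really is 8.1.2-4 in your configured repository; apt selects an exact version with the package=version form (the underscore form is the .deb file-name style and apt will not resolve it).

  # Install a specific pve-qemu-kvm build, then reboot so all guests
  # start on the fixed QEMU binary.
  apt update
  apt install pve-qemu-kvm=8.1.2-4
  systemctl reboot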
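
The suspend/resume workaround from result 3, sketched as a loop over every running guest. This wrapper is my own addition, not from the original post: it assumes a short pause of each VM is acceptable and that the default qm list column layout (VMID, NAME, STATUS, ...) is in place.

  # Suspend and immediately resume each running VM to clear the stuck
  # 100% CPU condition; the filter skips the header and stopped guests.
  for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
      echo "cycling VM ${vmid}"
      qm suspend "${vmid}" && qm resume "${vmid}"
  done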
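
For the reconfiguration in result 9 (away from io_uring towards direct_sync/native), a hedged example. VM ID 100, the scsi0 slot, and the volume name are placeholders; read your real disk line with qm config first and keep its volume specification.

  # Inspect the current disk line, then rewrite it with directsync cache
  # and native AIO instead of the io_uring default.
  qm config 100 | grep '^scsi0'
  qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=directsync,aio=native
  # Stop and start the VM afterwards so QEMU actually picks up the new
  # cache/AIO settings.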
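
Result 10 describes one bridge per NIC with a VLAN on top. Purely as an illustration of that layout (the physical interface name, VLAN ID, and address below are invented), one bridge in /etc/network/interfaces could look like this, with vmbr12 configured the same way on the second NIC:

  auto vmbr11
  iface vmbr11 inet manual
          bridge-ports enp1s0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094

  auto vmbr11.40
  iface vmbr11.40 inet static
          address 10.1.40.2/24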
