This makes literally no sense. You are afraid that somebody breaks into your home and steals specifically your NUC because it looks expensive? In the next sentence you have no problem when your server is running in a datacenter on hardware you probably don't even own and got your...
The web interface is not broken, your VM literally is creating that usage. I can prove it by the fact that my server under my desk started to scream because of the condition your screenshot shows. qm suspend <vmid> && qm resume <vmid> made it quiet again.
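For reference, a minimal sketch of that workaround (VMID 100 is a placeholder for the affected guest):

qm list                              # identify the VMID of the runaway guest
qm suspend 100 && qm resume 100      # pause and immediately resume it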
What's the benefit of encrypting a system that probably runs 24/7? I did it as well until somebody explained to me how useless it is with something running 24/7, because everything is in memory anyway...
I also gave up on ZFS this year because the resources it consumes compared to what you gain is just not a good ratio.
I give up. I destroyed the cluster and turned off my secondary node. The command pvecm delnode even sent the inbound node to its grave, and because of that the inbound node still thinks it is a member of the cluster....
It makes no sense to operate such a fragile setup. Normally, updates with Proxmox run so...
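For anyone landing here, a rough sketch of the node removal sequence as documented, assuming the node to drop is named "node2" (a placeholder) and is already powered off and will never rejoin with the same configuration:

pvecm nodes           # on a remaining node: confirm the current member list
pvecm expected 1      # two-node cluster only: restore quorum with a single vote
pvecm delnode node2   # remove the powered-off node from the cluster configuration
# The removed node keeps its old corosync configuration under /etc/pve, so it
# will keep believing it is a cluster member until it is reinstalled or cleaned up.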
I can also confirm that the problem happens with direct_sync/native; it does not seem to be io_uring bound.
This happened during a backup, but luckily the CPU is only at 50%, not 100%.
I am really confused. When I use writethrough/threads my node crashes immediately as well. Only direct_sync/native works without crashing. The logs definitely indicate that the node just goes down without any notice.
I played a lot of ping-pong now and the node did not crash anymore as long as io_uring is not used / is disabled in the VM.
I am open to tips and instructions for debugging this issue.
At least I have no indication anymore that my network is falling apart. The node is just "gone" and journalctl has nothing to offer.
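In case it helps others, a hedged sketch of two ways to catch output from a host that dies without leaving anything on disk (addresses, ports and the interface name are placeholders):

# 1) Make the journal persistent so messages survive a reboot:
mkdir -p /var/log/journal
systemctl restart systemd-journald
journalctl -b -1 -e                  # after the next crash, read the previous boot

# 2) Stream kernel messages to a second machine via netconsole, so a panic that
#    never reaches the disk can still be captured remotely:
modprobe netconsole netconsole=6665@10.1.1.5/eno1,6666@10.1.1.2/aa:bb:cc:dd:ee:ff
# on the receiving host: nc -u -l 6666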
I am conducting some tests right now. Two VMs were reconfigured from no_cache/io_uring to direct_sync/native and I can move them back and forth with no problems.
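The reconfiguration itself is a one-liner per disk; a minimal sketch, assuming VMID 100 and a disk called local-lvm:vm-100-disk-0 (both placeholders), where cache=directsync and aio=native are the config-file names for the GUI options mentioned above:

qm config 100 | grep scsi0                       # check the current disk line
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=directsync,aio=native
# stop and start the VM afterwards so the new disk options actually take effect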
This is...
Hi Fiona,
thanks for your answer, I could figure out some things.
First of all, I had a design flaw in my networking.
One node has two onboard Gbit NICs and the other node has a dual-port Gbit PCIe card.
Each NIC is assigned to a dedicated bridge (vmbr11, vmbr12).
Each bridge has a VLAN...
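To illustrate the layout, a hypothetical /etc/network/interfaces fragment with one VLAN-aware bridge per physical NIC (the interface names eno1/eno2 are placeholders):

auto vmbr11
iface vmbr11 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr12
iface vmbr12 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094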
I can't find anything in the logs but have the same problem.
Here is the migration log:
Proxmox Virtual Environment 8.1.3, Virtual Machine 101801 (ip-10-1-80-1) on node 'ip-10-1-131-1' (migrate):
2023-12-14 22:35:16 starting migration of VM 101801 to node 'ip-10-1-130-1'...
USB is known to cause problems. I had my Proxmox OS disks connected to an internal USB 3.0 Y-port splitter and they lost the connection multiple times, which resulted in very interesting feelings provided by my kernel. I also once connected my 6TB disk via USB to my Proxmox Backup VM and it also...
I can just tell you that my NM790s perform with zero issues. I can also hit 6 GB/s since they are connected to PCIe 4.0; even with my heavy encryption and a 16-core chip I reach a good 2 GB/s with ease. Also, you must be aware that Windows is not known to be a performance wonder by default...
Ignore this thread. My issue is my ConnectX-3 network card and its VLAN limitation. Is it possible to enhance the network reload procedure to clarify this limitation for specific network cards?
Define unstable. I have had them running for a few months now and have had no issues yet. They are just one thing: hella fast.
Linux localhost 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 GNU/Linux
Pardon me, but why do you have to replace your machine for a Proxmox upgrade? Just upgrade the host according to the upgrade documentation and you are fine.
If you need advice for your rig, I need more information about your requirements.
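For the record, an in-place major upgrade is just a handful of documented steps; a rough sketch, assuming a 7.x to 8.x upgrade (adjust the suite names for other releases):

pve7to8 --full                                   # built-in pre-upgrade checklist
apt update && apt dist-upgrade                   # get to the latest 7.x first
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update && apt dist-upgrade                   # perform the major upgrade
reboot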