You can rule out nameserver problems by using ping as a first step. What modem do you have? Is there any way you can independently check connectivity between computer <-> modem or modem <-> internet?
What is the output of the following?
ip -c a
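As a quick sketch of the ping test (the target IP and hostname here are just placeholders, any reachable public IP and name will do):

```shell
# If this works, basic connectivity is fine; no DNS is involved
# when pinging a raw IP address.
ping -c 3 1.1.1.1

# If the IP ping works but this fails, the problem is name resolution
# (nameserver/DNS), not connectivity.
ping -c 3 proxmox.com
```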
Could you perform an update of your system? Currently all repositories (including enterprise) should have version 6.2-14 of qemu-server. Otherwise we might be trying to fix something that has already changed.
There have been some changes to this part of the code recently, so it would be great to know your exact version numbers. Could you please post the following?
sed -n '660,680p' /usr/share/perl5/PVE/VZDump/QemuServer.pm
qm config 3010
mount | grep nfs...
Do the Windows performance tweaks from the wiki help at all?
Could you post the exact VM configuration (qm config <VMID>) for both VMs, along with a screenshot of Performance Monitor in Windows?
We use nested virtualization for development here, by the way.
It should be sufficient to have a single bridge on the host. The PVE reference documentation shows such a default configuration. You can then just repeat this for the nested hosts.
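For reference, the default single-bridge setup from the PVE reference documentation looks roughly like this in /etc/network/interfaces (a sketch; the NIC name enp0s31f6 and the addresses are assumptions, adjust them to your hardware and network):

```
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
```

Inside a nested PVE guest, the same pattern applies: its vmbr0 simply bridges the guest's own virtual NIC.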
This is really important for Windows VMs as they usually don't have the required drivers. Please follow the Windows guest best practices to improve the performance of your VM after importing. I just added a big yellow note to that part of the wiki, so that those Windows steps are not overlooked...
Great that you could solve your problem! If you have a minute, it would be great if you could mark the thread as solved by editing your top post. This way others know what to expect.
There is also a short section about Hyper-V migrations in the wiki, by the way.
I just tried this and it worked without much configuration. Did you follow something like this guide?
Could you try something like this?
showmount -e localhost # on the nfs server
showmount -e 10.25.10.0 # from somewhere else
mount -v -t nfs -o vers=3 10.25.10.0:/tank ~/nfstest...
Oracle VM VirtualBox supports exporting appliances to the OVF format. I suggest doing that. With a little luck, importing into Proxmox VE is then a single qm importovf command.
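Roughly like this (a sketch; the VMID 200, the file names, and the storage name local-lvm are assumptions, adjust them to your setup):

```shell
# If VirtualBox produced an .ova archive, unpack it first:
# an .ova is just a tar of the .ovf descriptor plus the disk images.
tar -xf exported-vm.ova

# Create VM 200 from the OVF descriptor and import its disks to local-lvm.
qm importovf 200 ./exported-vm.ovf local-lvm
```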
For documentation you should take a look at our migration guide. VirtualBox is not explicitly mentioned there yet, but...
As @spirit mentioned, it is strongly recommended to have (physically) separated networks for storage and cluster traffic. The faster the network, the better. Unfortunately, with a single 10G switch that handles both storage and cluster traffic for a 39-node cluster, I wouldn't be surprised about unsatisfying...
Great that you could solve the problem!
It would be nice if you could mark the thread as solved by editing your top post and changing the prefix. This way, others know what to expect when clicking on it.
We use Ubuntu LTS kernels. Looking at the Ubuntu releases, kernel version 5.4 should remain in Proxmox VE for a while. As a consequence, it is likely that the version range 5.7 to 5.8.5 will never be used in Proxmox VE, and that this USB/IP bug will therefore not surface here.
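If you want to check whether a given kernel falls into that affected range, sort -V can compare version strings (a sketch; taking the version from uname -r is just an example):

```shell
# Is the running kernel inside the affected range 5.7 <= v <= 5.8.5?
v=$(uname -r | cut -d- -f1)   # e.g. "5.4.0"

in_range() {
    # Succeeds if $1 <= $2 in version-sort order.
    printf '%s\n%s\n' "$1" "$2" | sort -V -C
}

if in_range 5.7 "$v" && in_range "$v" 5.8.5; then
    echo "kernel $v is in the affected range"
else
    echo "kernel $v is not affected"
fi
```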