Hi,
we have one HP Gen10+ server that is causing strange problems. Sometimes a single VM (just one of many) becomes very slow; Windows RD Server users complain it is very slow, for example. The VM shows under 50% CPU load at that time, while the host server was at around 65% CPU load.
The strange thing is that...
Yes, when we use the Proxmox firewall we get the packet drops, but only when we also use VLAN-aware bridges. The strange thing was that we did not have big troubles in general, just one client who did SIP calls where the packets were already fragmented when leaving the VM and then got...
Hi,
I tried to tag a VLAN inside a VM. But it seems no communication is possible with other VMs which have the VLAN tagged on the network interface on the Proxmox side (VM).
So is it only possible to tag VLANs inside VMs and communicate with other VMs with the VLAN tagged on the Proxmox interface...
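For reference, a minimal sketch of the two sides of such a setup, assuming a VLAN-aware bridge named vmbr0 and physical port eno1 (all names, VLAN ID 5 and addresses are examples, not taken from the original post). For frames tagged inside the guest to pass through, the VM's NIC in Proxmox must be left untagged (trunk); if Proxmox already sets a tag on that NIC, tagging inside the guest double-tags the traffic:

```
# /etc/network/interfaces on the host -- VLAN-aware bridge (names are examples)
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# inside a Linux guest, tagging VLAN 5 on its virtual NIC (example values)
# ip link add link eth0 name eth0.5 type vlan id 5
# ip addr add 192.168.5.10/24 dev eth0.5
# ip link set eth0.5 up
```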
Hello
We have the problem that randomly one OSD goes into very high latency (100-200 ms) while the others stay below 2 ms (all SSD, Micron 5100 MAX).
After a restart of the OSD (another problem: this always takes 30 minutes after running for a long time, and only seconds when restarted directly...
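For anyone hitting the same symptom, a sketch of how to spot and restart the outlier OSD (the OSD ID 12 is a made-up example; run on the node hosting that OSD):

```
# show per-OSD commit/apply latency; the slow OSD stands out
ceph osd perf

# restart only the affected OSD daemon (ID is an example)
systemctl restart ceph-osd@12
```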
Hello,
we upgraded some of the VMs of the cluster (still in progress).
We moved some machines around, so I unchecked and re-checked all VMs under Datacenter -> Backup -> node(x) -> Edit (all nodes have the checkbox enabled as well).
I rechecked that all VMs are selected, and the button with VMs not in backup is...
@frank1
we just made a new "who" object, let's say "nonldap", and added all non-LDAP domains to this object.
After this we copied all rules we already had IN FRONT of the existing rules (higher number) and added the "who" object "nonldap" to all of these rules.
Finally, the last "nonldap" rule BEFORE...
Should the error "qmp command 'cont' failed - got timeout" be fixed by now (in Proxmox 7.1)?
I know it occurs only on highly loaded storage backends, but that was not a problem in Proxmox 6.4.
I just gave ESXi a quick try with the same hardware and BIOS config, and it works without problems. Maybe I will try another day with a different mainboard and KVM/Proxmox. I would love to get it running because all our infrastructure is Proxmox...
The only thing I can set to legacy is the boot; there is nothing else to do here. I found some more posts about people who could not get the FirePro S7150 to work with a Gen9 DL360. Maybe it's not possible...
It's an HP DL360 Gen9.
The guest driver is:
https://www.amd.com/en/support/professional-graphics/firepro/firepro-s-series/firepro-s7150-active-cooling
"KVM Open Source" / "Guest Driver for KVM Open Source", currently 20.Q2.2.
But I also get the error: gim error:(init_register_init_state:3624)...
My problem seems similar to this: https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualization/issues/13
But I am already in legacy boot mode. Where can I change the ROM to "legacy" on an HP Gen9?
I tried with https://github.com/flumm/MxGPU-Virtualization/tree/kernel5.11 and also the stock version.
I can see all configured virtual cards, but the moment I try to install the driver in the VM, the host crashes. Are there any configuration suggestions? I don't know if I am just unlucky...
Did anyone get this running on Proxmox 7 with an HP ProLiant G9? I get errors and more errors... and when I manage to load the Windows driver, the VM crashes or the host crashes...
Did anything change in PVE 7? Or are there any plans for multipath in the storage config? In Hyper-V and VMware it is possible. As this is a Proxmox issue and not a Linux/KVM one, it should be possible to fix.
Yes, but this breaks communication if, for example, the cluster has a VLAN and you configure a VM (for example a reverse proxy for the cluster) with the same VLAN ID. Then it is mandatory to use bridgeName{V}VlanId. This is also documented in the Proxmox wiki. We cannot use VLAN-aware bridges...
In the documentation it is written that you can configure vmbr0v5. This is not possible in the GUI.
https://pve.proxmox.com/wiki/Network_Configuration
I know I can edit it manually, but it would be nice to have this supported in the GUI as well.
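For completeness, this is the kind of manual edit meant here, following the traditional VLAN pattern from the linked wiki page (physical port eno1 and VLAN 5 are examples; only the vmbrXvY naming comes from the documentation):

```
# /etc/network/interfaces -- traditional (non-VLAN-aware) setup, example: VLAN 5
auto eno1.5
iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet manual
    bridge-ports eno1.5
    bridge-stp off
    bridge-fd 0
```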
On the network in general it is not good to have fragmentation, but if the VM has an MTU of 1500, it has to fragment the packets. Also, over the internet they will never be transported as jumbo frames, so that's the problem: the packet is reassembled by netfilter and then never fragmented back to the original...