Hello
We have the problem that, randomly, one OSD goes into very high latency (100-200 ms) while the others stay below 2 ms (all SSDs, Micron 5100 MAX).
After a restart of the OSD (another problem: this always takes 30 minutes after the OSD has been running for a long time, and only seconds when restarting it again right away)...
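For anyone hitting the same thing, this is roughly how we narrow down and restart the slow OSD (just a sketch; the OSD id 12 is a placeholder):

# show per-OSD commit/apply latency to spot the outlier
ceph osd perf
# restart only the affected OSD (placeholder id)
systemctl restart ceph-osd@12.service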
Hello,
We upgraded some of the VMs in our cluster (still in progress).
We moved some machines around, so I unchecked and re-checked all VMs under Datacenter -> Backup -> node(x) -> Edit (all nodes have the checkbox enabled as well).
I re-checked that all VMs are selected and the button with VMs not in backup is...
@frank1
We just made a new Who object, let's say "nonldap", and added all non-LDAP domains to this object.
After this, we copied all rules we already had IN FRONT of the existing rules (higher number) and added the Who object "nonldap" to all of these rules.
Finally, the last "nonldap" rule BEFORE...
Should the error "qmp command 'cont' failed - got timeout" be fixed by now (in Proxmox 7.1)?
I know it only occurs on heavily loaded storage backends, but that was not a problem in Proxmox 6.4.
I just gave ESXi a quick try with the same hardware and BIOS config, and it works without problems. Maybe I will try another day with a different mainboard and KVM/Proxmox. I would love to get it running because all our infrastructure is Proxmox...
The only thing I can set to legacy is the boot; there is nothing else to do here. I found some more posts about people who could not get the FirePro S7150 to work with a Gen9 DL360. Maybe it's not possible...
It's an HP DL360 Gen9.
The guest driver is:
https://www.amd.com/en/support/professional-graphics/firepro/firepro-s-series/firepro-s7150-active-cooling
"KVM Open Source" "Guest Driver for KVM Open Source", currently 20.Q2.2
But I also get the error: gim error:(init_register_init_state:3624)...
My problem seems similar to this: https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualization/issues/13
But I am already in legacy boot mode. Where can I change the ROM to "legacy" on an HP Gen9?
I tried with https://github.com/flumm/MxGPU-Virtualization/tree/kernel5.11 and also the stock version.
I can see all configured virtual cards, but the moment I try to install the driver in the VM, the host crashes. Is there any suggestion for the configuration? I don't know if I am just not lucky...
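For reference, this is roughly how the virtual functions show up here and how I assign one to the VM (a sketch; the PCI address 0000:08:02.0 and VMID 100 are placeholders for my setup):

# list the S7150 physical function and the virtual functions created by the gim module
lspci -d 1002:
# pass one virtual function through to the Windows VM
qm set 100 -hostpci0 0000:08:02.0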
Did anyone get this running on Proxmox 7 with an HP ProLiant Gen9? I get errors and more errors... and when I manage to load the Windows driver, the VM crashes or the host crashes...
Did anything change in PVE 7? Or are there any plans to support multipath for the storage config? In Hyper-V and VMware it is possible. As this is a Proxmox issue and not a Linux/KVM one, it should be possible to fix.
Yes, but this breaks communication if the cluster, for example, uses a VLAN and you configure a VM (for example a reverse proxy for the cluster) with the same VLAN ID. Then it is mandatory to use bridgeName{V}VlanId. This is also documented in the Proxmox wiki. We cannot use VLAN-aware bridges...
In the documentation it is written that you can configure vmbr0v5. This is not possible in the GUI.
https://pve.proxmox.com/wiki/Network_Configuration
I know I can edit it manually, but it would be nice to have this supported in the GUI as well.
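For anyone searching: the manual entry in /etc/network/interfaces looks roughly like this (a sketch following the wiki naming; eno1 as the physical NIC is an assumption):

auto vmbr0v5
iface vmbr0v5 inet manual
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0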
Fragmentation on the network is not good in general, but if the VM has an MTU of 1500 it has to fragment the packets. Also, over the internet they will never be transported as jumbo frames, so that's the problem: the packet is reassembled by netfilter and then never fragmented back to the original...
Just a stupid question. As VLAN-aware does not work for me, I tried to change one host. As I want the cluster & management in a VLAN, I want to make
vmbr0v200 with the IP, but I am not able to create a bridge with that name in the GUI... (it's red, not allowed to press the button, because the name is not...
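For completeness, the manual equivalent in /etc/network/interfaces would look roughly like this (a sketch; the addresses and bond0 as underlying device are placeholders):

auto vmbr0v200
iface vmbr0v200 inet static
        address 192.168.200.10/24
        gateway 192.168.200.1
        bridge-ports bond0.200
        bridge-stp off
        bridge-fd 0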
Hmmm, yes, very hard to find out, but it seems to be a bug. Strange that not many people report this, but it is 100% reproducible. It looks like VLAN-aware bridges will always cause problems when guest VMs send packets bigger than 1500. Of course, one can set the MTU higher on the bond device behind the bridge...
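The workaround we tested looks roughly like this in /etc/network/interfaces (a sketch; bond0 with two slaves and MTU 9000 are assumptions about our setup, while the VMs keep MTU 1500):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        mtu 9000

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000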
Hi, we experienced similar problems.
Packets > 1500 were never fragmented again (after being reassembled) and were dropped.
This ONLY happens for us if (see the sketch below):
1) we use a VLAN-aware bridge
2) the VM is VLAN tagged
So if the VM (tap device) is not VLAN tagged, or we use normal bridges (not VLAN-aware...
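To make the failing combination concrete (a sketch; VLAN 200 and the MAC address are placeholders):

# host: vmbr0 is a VLAN-aware bridge (bridge-vlan-aware yes)
# guest: NIC tagged on that bridge in the VM config
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=200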