Search results

  1. Ceph osd's randomly very high latency

    Hello, we have the problem that randomly one OSD goes into very high latency (100-200 ms) while the others stay below 2 ms (all SSD, Micron MAX 5100). After a restart of the OSD (another problem: this always takes 30 minutes after a long uptime, and only seconds after restarting directly...
  2. Mixing Proxmox 7.0 and 7.1 backups not working

    Trying to access the backup page from a 7.0 node shows no backup jobs. It seems that 7.1 is in no way compatible with 7.0.
  3. Mixing Proxmox 7.0 and 7.1 backups not working

    Hello, we upgraded some of the VMs of the cluster (still in progress). We moved some machines around, so I unchecked and re-checked all VMs under Datacenter -> Backup -> Node(x) -> Edit (all nodes have "Enabled" checked as well). I rechecked that all VMs are selected, and the button with VMs not in backup is...
  4. Move EFI disk fails on between (RBD storage and RBD storage) or (RBD storage and lvm-thin) while VM is running

    Is this a PVE bug or a QEMU/KVM one? It is still an important feature that is not working, especially as Ceph is now a main storage feature in Proxmox...
  5. Ldap verification only for some domains

    @frank1 We just made a new Who object, let's say "nonldap", and added all non-LDAP domains to it. After this we copied all rules we already had IN FRONT of the existing rules (higher number) and added the Who object "nonldap" to all of these rules. Finally, the last "nonldap" rule BEFORE...
  6. PVE7 / PBS2 - Backup Timeout (qmp command 'cont' failed - got timeout)

    Should the error "qmp command 'cont' failed - got timeout" be fixed by now (in Proxmox 7.1)? I know it occurs only on highly loaded storage backends, but that was not a problem in Proxmox 6.4.
  7. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    I just gave ESXi a quick try with the same hardware and BIOS config, and it works without problems. Maybe I will try another day with a different mainboard and KVM/Proxmox. I would love to get it running because all our infrastructure is Proxmox...
  8. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    The only thing I can set to legacy is the boot; there is nothing else to do here. I found some more posts about people who could not get the FirePro S7150 to work with a Gen9 DL360. Maybe it's not possible...
  9. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    Meanwhile I managed to get a q35 machine with UEFI working, but it's the same: the moment the driver in the guest VM is installed, the host crashes...
  10. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    It's an HP DL360 Gen9. The guest driver is: https://www.amd.com/en/support/professional-graphics/firepro/firepro-s-series/firepro-s7150-active-cooling "KVM Open Source" / "Guest Driver for KVM Open Source", currently 20.Q2.2, but I also get the error: gim error:(init_register_init_state:3624)...
  11. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    My problem seems similar to this: https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualization/issues/13, but I am already in legacy boot mode. Where can I change the ROM to "legacy" on an HP Gen9?
  12. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    I tried with https://github.com/flumm/MxGPU-Virtualization/tree/kernel5.11 and also the stock version. I can see all configured virtual cards, but the moment I try to install the driver in the VM, the host crashes. Is there any configuration suggestion? I don't know if I am just unlucky...
  13. [TUTORIAL] VGPU Step by step guide needed for noobs Proxmox VE 7.X

    Did anyone get this running on Proxmox 7 with an HP ProLiant G9? I get errors and more errors... and when I manage to load the Windows driver, the VM crashes or the host crashes...
  14. iSCSI Portal single Point of Failure

    Did anything change in PVE 7? Are there any plans to add multipath to the storage config? In Hyper-V and VMware it's possible. As this is a Proxmox issue and not a Linux/KVM one, it should be possible to fix.
  15. Why does the gui not support bridges with vlan like in the doku?

    Yes, but this breaks communication if the cluster, for example, has a VLAN and you configure a VM (for example a reverse proxy for the cluster) with the same VLAN ID. Then it is mandatory to use bridgeName{V}VlanId. This is also documented in the Proxmox wiki. We cannot use VLAN-aware bridges...
  16. Why does the gui not support bridges with vlan like in the doku?

    In the documentation it is written that you can configure vmbr0v5. This is not possible in the GUI: https://pve.proxmox.com/wiki/Network_Configuration. I know I can edit it manually, but it would be nice to have this supported in the GUI as well.
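    The `vmbr0v5` naming convention from the wiki refers to the traditional "VLAN on the bridge" setup done by hand in `/etc/network/interfaces`. A minimal sketch might look as follows; the physical interface name (`eno1`) and the addresses are examples only, not taken from the thread:

    ```
    # /etc/network/interfaces -- sketch of the traditional VLAN setup:
    # a dedicated bridge vmbr0v5 on top of the tagged sub-interface eno1.5.
    # Interface name eno1 and the 10.10.10.x addresses are placeholders.
    auto eno1.5
    iface eno1.5 inet manual

    auto vmbr0v5
    iface vmbr0v5 inet static
            address 10.10.10.2/24
            gateway 10.10.10.1
            bridge-ports eno1.5
            bridge-stp off
            bridge-fd 0
    ```

    With this layout each VLAN gets its own bridge, which is what the `bridgeName{V}VlanId` convention mentioned above encodes.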
  17. Linux Bridge reassemble fragmented packets

    In general it is not good to have fragmentation on the network, but if the VM has an MTU of 1500 it has to fragment the packets. Also, over the internet they will never get transported with jumbo frames. So that's the problem: the packet is reassembled by netfilter and then never fragmented back to the original...
  18. Linux Bridge reassemble fragmented packets

    Just a stupid question: as VLAN-aware does not work for me, I tried to change one host. As I want the cluster & management in a VLAN, I want to make vmbr0v200 with the IP, but I am not able to create a bridge with that name in the GUI... (it's red, not allowed to press the button, because the name is not...
  19. Linux Bridge reassemble fragmented packets

    Hmmm, yes, very hard to find out, but it seems to be a bug. Strange that not many people report this, as it is 100% reproducible. Like this, VLAN-aware bridges will always make problems when guest VMs send packets bigger than 1500 bytes. Of course one can set the MTU higher on the bond device behind the bridge...
  20. Linux Bridge reassemble fragmented packets

    Hi, we experienced similar problems. Packets > 1500 bytes were never fragmented again (after being reassembled) and were dropped. This ONLY happens for us if 1) we use a VLAN-aware bridge and 2) the VM is VLAN tagged. So if the VM (tap device) is not VLAN tagged, or we use normal bridges (not VLAN-aware...
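    The workaround hinted at in this thread (raising the MTU on the devices underneath the VLAN-aware bridge) might be sketched in `/etc/network/interfaces` as below; the device names (`bond0`, `eno1`, `eno2`), bond mode, and the MTU value of 9000 are assumptions, not details from the posts:

    ```
    # /etc/network/interfaces -- sketch of the MTU workaround:
    # raise the MTU on the bond and the VLAN-aware bridge so a packet
    # reassembled by netfilter to > 1500 bytes still fits and is not dropped.
    # Device names and MTU 9000 are placeholders.
    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode active-backup
            mtu 9000

    auto vmbr0
    iface vmbr0 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094
            mtu 9000
    ```

    Note this only avoids the local drop; as the earlier post points out, paths across the internet still carry at most 1500 bytes, so it is a mitigation rather than a fix for the re-fragmentation behavior.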
