Search results

  1. wolfgang

    Check of pool vmdata/vmstore failed (status:1). Manual repair required

    Hi, try to activate the LV manually. Proxmox VE 5 is EOL, so once you have fixed the problem, consider upgrading. We can't provide help for outdated versions.
  2. wolfgang

    Question regarding network bond config

    Yes, correct. Yes, correct.
  3. wolfgang

    Backup fails with org.freedesktop.DBus.Error.Disconnected: Connection is closed

    There could also be other packages that overwhelmed DBus. In a plain Proxmox VE installation this should not be the case.
  4. wolfgang

    Is openvswitch-switch-dpdk completely broken?

    I have no information about this, but we are doing a general recheck of those technologies.
  5. wolfgang

    Disable promisc mode for a VM interface

    You can create a ticket at Bugzilla describing your feature request.
  6. wolfgang

    No Network Interface found - Dell MX740c

    Hello, in general the hardware vendor is responsible for the drivers, so in this case Marvell or Dell. Dell offers support for RedHat and SUSE with this NIC. On our side, the qla2xxx driver in version 10.01.00.19-k is integrated into the kernel. The Marvell QL41232HMKR is a card...
  7. wolfgang

    Verzeichnis von Proxmox an Container weitergeben

    Hi, you can do that with bind mounts, see https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pct_settings section 11.3.4 mount points, and search for bind mounts.
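
A hypothetical example of such a bind-mount entry, assuming CT ID 101 and placeholder paths (see the linked admin-guide section for the full option list):

```
# /etc/pve/lxc/101.conf -- CT ID and both paths are placeholders
mp0: /mnt/hostdir,mp=/data
```

The same entry can also be created from the CLI with `pct set 101 -mp0 /mnt/hostdir,mp=/data`.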
  8. wolfgang

    Is openvswitch-switch-dpdk completely broken?

    You are correct. I was not aware we had put it back into our repository. This was necessary because of the new kernel. The DPDK package ships together with the others, but we will not support this functionality at the moment. DPDK is a very picky beast where HW is also involved, if you...
  9. wolfgang

    Best switch for Ceph cluster network

    Cumulus network switches can do that. But for Ceph you don't need layer 3+4; layer 2+3 is quite good. Ceph traffic has differing sources and destinations, so hashing on MAC and IP works well.
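
The layer 2+3 hash policy mentioned above is set on the bond itself; a sketch of an /etc/network/interfaces stanza, with interface names as placeholders:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-miimon 100
```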
  10. wolfgang

    Question regarding network bond config

    Hi, this does not work with LACP. You can increase the total bandwidth to 2 GBit, but it is always 2 x 1 GBit. The first version is wrong, thanks for pointing that out. The vmbr0 bridge-port must be another NIC, such as eno3.
  11. wolfgang

    What sort of speed should i expect ?

    Hi, this is hard to say in general and depends on many factors: block size, load on the PBS and PVE, source storage, ...
  12. wolfgang

    Disable promisc mode for a VM interface

    Hi, there is no built-in way, but you can disable it manually with "ip link set tap<VMID>i<NIC NO> promisc off". This can be done automatically with the start/stop hook scripts. https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_hookscripts
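
A minimal hookscript sketch along those lines. It assumes the documented `<script> <vmid> <phase>` calling convention and that the VM's first NIC appears on the host as tap<vmid>i0; both are worth verifying against the linked docs for your setup:

```shell
#!/bin/sh
# Hypothetical PVE hookscript sketch, not official Proxmox code.
# Proxmox invokes hookscripts as: <script> <vmid> <phase>.

tap_for() {  # tap_for <vmid> <nic-no> -> host-side tap interface name
    echo "tap${1}i${2}"
}

vmid="$1"
phase="$2"

if [ "$phase" = "post-start" ]; then
    # Turn off promiscuous mode on net0 once the VM is running.
    ip link set "$(tap_for "$vmid" 0)" promisc off
fi
```

The script would then be registered on the VM, e.g. with `qm set <VMID> --hookscript local:snippets/promisc-off.sh` (storage and file name are examples).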
  13. wolfgang

    [Error] Restoring Container fails with: extracting archive - error at entry "aquota.group": failed to set file attributes: EPERM: Operation not permit

    You must extract "etc/vzdump/pct.conf" from the backup and edit it; then you can put it back at the same path. The archive format is tar.
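
A sketch of that round trip. A tiny stand-in archive is fabricated first so the commands are self-contained; with a real backup you would start at the extract step, and the real file name and the line you edit will differ:

```shell
# Fabricate a stand-in backup so the steps below can run anywhere;
# with a real backup you would start at the "extract" step.
mkdir -p etc/vzdump
printf 'arch: amd64\nfeatures: quota=1\n' > etc/vzdump/pct.conf
tar -cf vzdump-lxc-101.tar etc/vzdump/pct.conf
rm -r etc

# Extract only the container config from the backup.
tar -xf vzdump-lxc-101.tar etc/vzdump/pct.conf

# Edit it; sed stands in for an interactive editor here
# (dropping the features line is just an example change).
sed -i '/^features:/d' etc/vzdump/pct.conf

# Put the edited copy back at the same path inside the archive.
tar --delete -f vzdump-lxc-101.tar etc/vzdump/pct.conf
tar -rf vzdump-lxc-101.tar etc/vzdump/pct.conf
```

Note that `tar --delete` is a GNU tar feature and only works on plain (seekable) archive files, which matches the uncompressed tar format used here.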
  14. wolfgang

    Proxmox 5 and unusual Disk I/O

    Hi, Proxmox VE 5 is EOL, see https://forum.proxmox.com/threads/proxmox-ve-support-lifecycle.35755/ Please upgrade to Proxmox VE 6.2 if you would like help.
  15. wolfgang

    How to setup firewall port limitations for containers and VM's

    Hi, I would create a security group at the datacenter level and add the needed rules to that group. Then you can use the group as a rule on the container.
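
A sketch of what that could look like in the firewall config files; the group name and ports are placeholder examples, not from the post:

```
# /etc/pve/firewall/cluster.fw -- define the group at datacenter level
[group webports]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443

# /etc/pve/firewall/<CTID>.fw -- attach the group to the container
[RULES]
GROUP webports
```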
  16. wolfgang

    Intel GVT-g GPU and no other video card

    Hi, noVNC needs a virtual video card and SPICE needs the qxl card. But you can use RDP.
  17. wolfgang

    vGPU not passthrough

    Hi, AFAIK this GPU does not support virtual functions for multiple VMs, but here is a how-to for that purpose. https://pve.proxmox.com/wiki/MxGPU_with_AMD_S7150_under_Proxmox_VE_5.x
  18. wolfgang

    Hardware advice, homelab

    Hi, yes, it is OK, but that amount of memory is not much for KVM virtualization.
  19. wolfgang

    Help setting iser transport for ZFS on ISCSI

    Hi, I have never tested this, but you could try it and report back. At the moment iscsi is hardcoded and no option for iser is possible. QEMU uses libiscsi, and libiscsi should be able to use iser. Run qm showcmd <VMID>, take the output and replace iscsi with iser. The option starts with -drive 'file=iscsi...
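
The substitution step could be scripted like this. The -drive line below is a made-up minimal example, not real `qm showcmd` output, and whether QEMU actually accepts iser:// URLs depends on how libiscsi was built:

```shell
# Hypothetical illustration of rewriting the transport in the command line.
cmdline="-drive 'file=iscsi://192.0.2.10/iqn.2020-01.example:target/0,if=none,id=drive-scsi0'"

# On a real host you would feed the "qm showcmd <VMID>" output through this sed.
modified=$(printf '%s' "$cmdline" | sed 's|file=iscsi://|file=iser://|g')
printf '%s\n' "$modified"
```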
