Search results

  1. [SOLVED] vm with nbd disks

    ok, it was quite easy :) scsi0: /dev/nbd0,size=100G (see the NBD sketch after this list)
  2. [SOLVED] vm with nbd disks

    Hi. Is it possible to run VMs with nbd disks? Thanks, Felix
  3. iperf3 on 10G lots of Retr

    Here are the results, still not good. The first dump is between pve1<->pve2, the second is from vm-pve1<->vm-pve2. The first run on pve1<->pve2 was without any retries, the following tests with... Thanks again, Felix
  4. iperf3 on 10G lots of Retr

    Thanks, here are the results - I have fast CPUs and lots of memory, so something else must be wrong. From the same VM I get 28-306GB without any retries
  5. iperf3 on 10G lots of Retr

    Please, Proxmox support :) - I just want to know if there is something I might try before I change to Open vSwitch. Felix
  6. iperf3 on 10G lots of Retr

    Any feedback from Proxmox support?
  7. iperf3 on 10G lots of Retr

    This is without any bridge, VLAN, etc., just directly connected :( I will look at Open vSwitch, but it is not preferred. Felix
  8. iperf3 on 10G lots of Retr

    Hi, I am running proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve) on two hosts that are connected together directly, and when I test speeds with iperf3 I see a lot of "Retr" values. pve1: auto enp94s0f1 iface enp94s0f1 inet static address 10.15.15.12 netmask... (see the interfaces/iperf3 sketch after this list)
  9. bridge-pvid not working - but untag on switch does

    Hi, I have a weird issue. "bridge-pvid xxx" does not work on my vmbr0, but if I remove bridge-pvid and untag the xxx VLAN on the switch LACP, it works fine. Any clues? (See the bridge-pvid sketch after this list.) Thanks, Felix
  10. debootstrap bullseye

    Alright, thanks, I will look at it.
  11. debootstrap bullseye

    OK, thanks. Can you give a hint on how to solve it?
  12. debootstrap bullseye

    Tried it from PVE 7.x, same problem :( root@pve:/# cat /etc/debian_version 11.1 root@pve:/# uname -a Linux pve 5.11.22-4-pve #1 SMP PVE 5.11.22-8 (Fri, 27 Aug 2021 11:51:34 +0200) x86_64 GNU/Linux 01. mkdir /mnt/chroot 02. debootstrap bullseye /mnt/chroot 03. mount -t proc none...
  13. debootstrap bullseye

    OK, let me try it from PVE 7.x :) - oops, I forgot to include the repo key on my list, and I am using that of course. Felix
  14. debootstrap bullseye

    Really? Great to hear. I tried it on a clean Ubuntu 20.x and a Debian 11.x and don't understand why it does not work for me. What additional steps did you do, and what OS are you running it from? Thanks!
  15. debootstrap bullseye

    Yes, of course, here are my steps (see the debootstrap sketch after this list): 01. mkdir /mnt/chroot 02. debootstrap bullseye /mnt/chroot 03. mount -t proc none /mnt/chroot/proc 04. mount -t sysfs none /mnt/chroot/sys 05. mount --bind /dev /mnt/chroot/dev 06. chroot /mnt/chroot /bin/bash 07. cat <<EOF > $MNT_DIR/etc/hosts 127.0.0.1...
  16. debootstrap bullseye

    Is there anything I can check or do? My goal is to be able to provision a PVE cluster remotely with automation, and the ISO installer doesn't have any automation option, right?
  17. debootstrap bullseye

    Hi, I am having problems running debootstrap with bullseye. Any clues? Processing triggers for initramfs-tools (0.140) ... update-initramfs: Generating /boot/initrd.img-5.11.22-5-pve Running hook script 'zz-proxmox-boot'.. Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new...
  18. virtio-net-pci with vlan on vmbr

    Thanks, I got it working with bridges. The Palo Alto VMs are a little picky about how they assign management and dataplane networking, but it turned out I had made a mistake in the Palo Alto VM firewall rules, so bridges do work :) With PCI passthrough the NICs need to come in by PCI slot order, and the management network...
  19. virtio-net-pci with vlan on vmbr

    Hi, I have a VM where I need to add interfaces as virtio-net-pci via args. Can I somehow add them with specific VLAN tagging, or should I create a vmbr for each VLAN? (See the per-VLAN bridge sketch after this list.) args: -device virtio-net-pci,netdev=net0,mac=xx:xx:xx:xx:xx:xx,addr=04.0 -netdev tap,id=net0,br=vmbr0 Thanks, Felix
  20. Controlling PCI Slot order in Proxmox 5.4

    Hi, I am also trying to get a PA-VM running on my Proxmox v7.x. If I just add one vmbr, I can get management networking up and running, but when I add the SFP+ NIC with PCI passthrough, things stop working. trch: Can I see your full qemu-server conf file? Thanks, Felix
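NBD sketch (for result 1): the post only shows the final config line, so this is a minimal, hedged sequence for exporting a qcow2 image over NBD and attaching it to a VM. The image path and the VMID 100 are assumptions, not from the thread.

    # load the nbd kernel module and export a qcow2 image on /dev/nbd0
    modprobe nbd max_part=8
    qemu-nbd --connect=/dev/nbd0 /path/to/disk.qcow2   # image path is an assumption
    # attach the device to the VM (VMID 100 is an assumption); this yields
    # the config line quoted in result 1: scsi0: /dev/nbd0,size=100G
    qm set 100 --scsi0 /dev/nbd0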
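Interfaces/iperf3 sketch (for result 8): the quoted stanza is cut off at "netmask...", so this completes it in CIDR form under the assumption of a /24, and shows a basic iperf3 run. Only pve1's 10.15.15.12 appears in the post; pve2's address is an assumption.

    # /etc/network/interfaces on pve1; pve2 would mirror this with its own address
    auto enp94s0f1
    iface enp94s0f1 inet static
        address 10.15.15.12/24    # /24 is an assumption; the post is cut off at "netmask"

    # on pve2: iperf3 -s
    # on pve1: 30-second test toward pve2 (10.15.15.13 is an assumption);
    # the Retr column counts TCP retransmissions
    iperf3 -c 10.15.15.13 -t 30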
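bridge-pvid sketch (for result 9): an illustrative VLAN-aware bridge where bridge-pvid sets the untagged/default VLAN on the bridge ports. The VLAN ID 100 and the bond0 uplink are assumptions standing in for the "xxx" and the LACP bond in the post.

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0          # LACP bond; name is an assumption
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        bridge-pvid 100             # untagged/default VLAN; "xxx" in the post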
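debootstrap sketch (for result 15): the same steps as the post, rewritten as a plain shell sequence. The Debian mirror URL is an assumption, and the heredoc from step 07 is omitted because it is truncated in the original.

    mkdir -p /mnt/chroot
    debootstrap bullseye /mnt/chroot http://deb.debian.org/debian   # mirror is an assumption
    mount -t proc  none /mnt/chroot/proc
    mount -t sysfs none /mnt/chroot/sys
    mount --bind /dev /mnt/chroot/dev
    chroot /mnt/chroot /bin/bash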
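Per-VLAN bridge sketch (for result 19): since result 18 reports that bridges ended up working, this shows the one-bridge-per-VLAN layout the poster asked about. The NIC name and VLAN ID 100 are assumptions.

    # VLAN 100 sub-interface on the physical NIC (names are assumptions)
    auto enp94s0f1.100
    iface enp94s0f1.100 inet manual

    # dedicated bridge carrying only VLAN 100
    auto vmbr100
    iface vmbr100 inet manual
        bridge-ports enp94s0f1.100
        bridge-stp off
        bridge-fd 0

    # the VM's args line would then point the tap at vmbr100 instead of vmbr0:
    #   -netdev tap,id=net0,br=vmbr100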