Search results

  1. [SOLVED] Very poor LXC network performance

    I have this issue on PVE 8.3.4, but only when using multiple virtio network cards attached to the LXC container, and only while transferring files via SAMBA through a Windows VM. I don't have the issue when using Linux VMs.
  2. Windows VMs transfer speeds drop to 0 to/from HDD zfs mirrored pool

    So far, it's got something to do with multiple networks on the same LXC container (I'm using several VLANs and bridges). The logs don't show anything meaningful and it's fairly easy to reproduce. When I remove the network interfaces from the config, or use "disconnect" from the GUI, the speeds...
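
    For reference, detaching or disconnecting an LXC NIC can also be done from the CLI; a minimal sketch, assuming a hypothetical container ID 101 and interface net1:

      # remove the interface from container 101 entirely
      pct set 101 --delete net1
      # or keep it configured but take the link down (roughly what the GUI "disconnect" does)
      pct set 101 --net1 name=eth1,bridge=vmbr1,link_down=1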
  3. Windows VMs transfer speeds drop to 0 to/from HDD zfs mirrored pool

    It seems that creating a SATA disk (instead of SCSI or VirtIO) stored on the tank pool (HDD-based zfs mirror) eliminates the hangs when copying files between different disks inside the same Windows VM. In scenario (1) (which is copying files from a SAMBA server from an LXC container with bind...
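
    Creating that SATA disk from the CLI might look like this; a minimal sketch, assuming a hypothetical VM ID 100 and a 32 GiB size on the tank storage:

      # allocate a new 32 GiB volume on "tank" and attach it as sata0
      qm set 100 --sata0 tank:32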
  4. Windows VMs transfer speeds drop to 0 to/from HDD zfs mirrored pool

    Thanks for the suggestions. I've tried both the newest virtio drivers and several older ones. The dirty_cache thing doesn't do anything and it also doesn't apply in my case. I don't have issues with transfers between SSDs, I only have them while transferring to/from HDD-based zfs pools. I'll...
  5. Windows VMs transfer speeds drop to 0 to/from HDD zfs mirrored pool

    Hello, When I'm using Windows 10/11 VMs, stored on an SSD zfs mirror pool, to transfer files to/from a zfs HDD mirrored pool on the same host, speeds frequently drop to 0 and hang there for a few minutes. I don't have this issue with Ubuntu and other Linux VMs, so I'm guessing it has something to do...
  6. Looks like Proxmox 8 is generating a new IPv6 DUID for every reboot

    @ShadowDrake here's my /etc/network/interfaces file:

      auto lo
      iface lo inet loopback

      iface enp7s0 inet manual

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.xxx.xxx/24
          gateway 192.168.xxx.xxx
          bridge-ports enp7s0
          bridge-stp off
          bridge-fd 0
          post-up echo 2 >...
  7. Looks like Proxmox 8 is generating a new IPv6 DUID for every reboot

    I've encountered this issue. When I set iface vmbr0 inet6 dhcp the system hangs until it gets an IPv6 address. To test this, I stopped the DHCPv6 server and radvd, and the host wouldn't finish booting. After I turned DHCPv6 and radvd back on, Proxmox finished the boot sequence. Replicated this...
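
    The stanza being described would look roughly like this in /etc/network/interfaces (a sketch; only the inet6 part of the vmbr0 definition is shown):

      auto vmbr0
      iface vmbr0 inet6 dhcp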
  8. Nested virtualization crashes guest when it's being actively used

    Did you find a solution? I have this same issue. After I turn on Virtual Machine Platform, the guest won't boot unless I switch to kvm64. CPU type host no longer works; Windows tries auto-repair and fails. I'm trying to run Windows Subsystem for Android.
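
    Switching the CPU type back to kvm64 from the CLI would look like this; a minimal sketch, assuming a hypothetical VM ID 100:

      # fall back from CPU type "host" to the generic kvm64 model
      qm set 100 --cpu kvm64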
  9. Proxmox Host IPv6 DHCP not working

    Thanks for the pointers. I'll try and troubleshoot why ifupdown2 isn't working. I'll report back if I find something. Edit: I've tried using a cronjob to start dhclient after boot, but it doesn't pick up an IPv6 prefix change the way ifupdown1 does. Most likely I don't know how to build a proper...
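
    A cronjob along those lines would be a root crontab entry like this (a sketch; dhclient and the vmbr0 interface match what the thread already uses):

      # run once at boot: request an IPv6 lease on the bridge
      @reboot /sbin/dhclient -6 vmbr0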
  10. Proxmox Host IPv6 DHCP not working

    I've removed ifupdown2 and it works now... I'd like to keep it though; AFAIK it helps with applying network configs via the GUI.
  11. Proxmox Host IPv6 DHCP not working

    Nothing much. This is the only relevant thing I could find: "no link-local IPv6 address for vmbr0". Although I can see a link-local address with ip a.
  12. Proxmox Host IPv6 DHCP not working

    I've installed ifupdown2. From what I'm seeing here, it looks like dhclient doesn't start on boot.
  13. Proxmox Host IPv6 DHCP not working

    auto AFAIK means SLAAC, right? But I've set up a DHCPv6 server via OPNsense which works for VMs under Proxmox, but not for Proxmox itself. When I use dhclient -6 vmbr0 it picks up the static IPv6 address I've set in OPNsense.
  14. Proxmox Host IPv6 DHCP not working

    I have this same issue. I've tried everything DHCPv6-related in this forum and nothing worked. I have iface vmbr0 inet6 dhcp set in /etc/network/interfaces but I'm not getting anything. When I run dhclient -6 vmbr0 I get an IP address. Any ideas as to why? Other hosts seem to work fine.
  15. CPU KVM64 much slower networking than Host

    I've improved the speeds on the R720 but my issue isn't fully solved. I'd probably have to spend a lot more time on this to get the full 1 Gbps speed, so I'm putting this to rest for now. First off, the network card's firmware wasn't fully up to date. It was the only firmware patch that didn't...
  16. CPU KVM64 much slower networking than Host

    I ran all sorts of tests and I can't get to the bottom of this. No matter what I do, I can't get more than 600 Mbps when pfSense is virtualized. Traffic on the bridge works at about 400 MB/s when on the same VLAN and at ~120 MB/s across VLANs (when it's going through the pfSense VM). Incidentally...
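
    Numbers like these are usually gathered with a tool such as iperf3; a minimal sketch, assuming a hypothetical server at 10.0.10.5 (the thread doesn't say which tool was used):

      # on the receiving VM
      iperf3 -s
      # on the sending VM, once within the same VLAN and once across VLANs
      iperf3 -c 10.0.10.5 -t 30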
  17. CPU KVM64 much slower networking than Host

    @tburger I did a passthrough of one of the R720's Intel NICs to the pfSense VM and the results were about the same. Passing through directly to a Debian/Win 10 VM yielded higher speeds. @aaron I've disabled offloading in the pfSense VM. Shouldn't Spectre flags help VMs? I've tried with/without...
  18. CPU KVM64 much slower networking than Host

    I understand, but 1. the SG-2440 has an Intel Atom @ 1.8 GHz x 2 cores and it can do higher speeds than ~6 cores on a VM on the E5-2660; 2. the Linux bridge seemed to yield higher performance. That multi-queueing tip is good. I forgot I had that enabled on the E3. I'll run some more tests.
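
    Enabling multiqueue on a virtio NIC looks like this from the CLI; a minimal sketch, assuming a hypothetical VM ID 100 and 4 queues (the queue count is usually matched to the vCPU count):

      # give net0 four packet queues so several vCPUs can service traffic in parallel
      qm set 100 --net0 virtio,bridge=vmbr0,queues=4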
  19. CPU KVM64 much slower networking than Host

    Well, the aes and Spectre mitigation flags were already set. Anyway, I tried different options and something doesn’t seem right. I’ve virtualized pfSense for gigabit throughput but I have to allocate more than 6 cores. Then, using a Linux bridge instead of OVS seemed to improve throughput. I’ve...
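
    For reference, setting such flags explicitly might look like this; a minimal sketch, assuming a hypothetical VM ID 100, with an illustrative flag list:

      # expose AES-NI and the Spectre mitigation flag to the guest
      qm set 100 --cpu 'kvm64,flags=+aes;+spec-ctrl'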
  20. CPU KVM64 much slower networking than Host

    Hi, I'm setting up a new Proxmox machine and I've noticed something weird. Proxmox 6.1-3 fresh install + Open vSwitch. Host: Dell R720 | 2 x Intel Xeon E5-2660 (updated microcode) | 4-port I350 network card; VMs: Windows and Debian, virtio everything; network speed tests in both VMs under CPU...