Search results

  1. pfSense VM - very slow network throughput

    I tested/compared for you. On an Intel NUC with an i5, I don't have a VM so I cannot test an identical setup, but I do have an LXC container. LXC -> LXC = 38 Gbit/sec; pfSense VM -> LXC = 2.95 Gbit/sec. Stepping it up to my big boy environment; a dedicated pfSense machine with a Xeon E-2236 and as Proxmox...
  2. pfSense VM - very slow network throughput

    Virtualisation overhead. You're not going to get line speed. What speed are you getting between VMs over a bridge?
  3. pfSense VM - very slow network throughput

    You are not NAT'ing the way you are testing.
  4. pfSense VM - very slow network throughput

    Like Gabriel mentioned, everything is running on the CPU, not a specialized ASIC packet handler. Don't expect to reach multi-gigabit speeds easily. Even my fastest pfSense box, which is not virtualized, "only" does about 5.5 Gbit/sec sustained (Xeon E-2236, Chelsio T520-CR).
  5. pfSense VM - very slow network throughput

    Alright, issue found! On the original 2.5 setup, there was a traffic limiter of 300/10 configured on a guest VLAN interface. Worked as designed. Somewhere during the update to 2.6, this limiter somehow shifted to the WAN interface! I'm saying somehow because I don't know how. The limiter was...
  6. pfSense VM - very slow network throughput

    Good point on the BIOS. The system had 1,000 days of uptime, so the BIOS is at least that old. I'll give the update a go. Both systems originally started on pfSense 2.5 and were upgraded to 2.6 and 2.7 along the way. Never noticed the issue on 2.5. Once we noticed the speed problem, we...
  7. pfSense VM - very slow network throughput

    I have 2 identical scenarios. Intel NUC Celeron 2820 -> Proxmox 8 -> pfSense 2.7 = OK: I reach the speed of the WAN connection (100/30 Mbit). Intel NUC i5 8259U -> Proxmox 8 -> pfSense 2.7 = not OK: WAN is 1000/100 but only reaching 270/9 Mbit when traffic is NAT'ed. Nothing is different between...
  8. Proxmox pfsense vm extreme low throughput

    Did not help for me on pfSense. 900/100 Mbit internet line; 9xx Mbit iperf test from LAN to the pfSense VM. However, NAT performance is horrible at 270 Mbit down, 10 Mbit up.
  9. pfSense VM - very slow network throughput

    Also running into this issue. It seems it has happened since pfSense 2.6; same problem in 2.7, while 2.5 was fine. iperf over LAN to pfSense = Gbit. Internet directly on the ISP router, but through my network/VLAN = 900 down/100 up. Internet behind pfSense NAT = ~270 down/9 up. Have not found the problem yet...
  10. LXC running docker fails to start after upgrade from 6.4 to 7.0

    Found this topic again after trying to install ElastiFlow via docker-compose in LXC, so there are definitely unresolved issues. Nesting is on in my container. root@elastiflow:~# docker-compose up -d Pulling flow-collector (elastiflow/flow-collector:5.3.4)... 5.3.4: Pulling from...
  11. pve ceph basic install with no extra configurations - performance question

    What is your workload? Ceph is not meant for small random IO, and it especially won't perform in a small 4-node cluster with spinning disks. It's meant for large clusters (12+ nodes, 12+ OSDs per node). Also, Mikrotik sucks; your LACP bundle performance is probably suffering.
  12. LXC running docker fails to start after upgrade from 6.4 to 7.0

    This problem still exists on Proxmox 7.1-8 with a Debian 11 container when trying to run LibreNMS via docker-compose, with the latest docker-ce from their repo (5:20.10.11~3-0~debian-bullseye) and docker-compose (1.25.0-1).
  13. High KVM CPU load when VM on ZFS

    With L1ARC: fio: (groupid=0, jobs=8): err= 0: pid=2351: Fri Dec 3 09:55:29 2021 write: IOPS=8595, BW=8595MiB/s (9013MB/s)(504GiB/60001msec); 0 zone resets slat (usec): min=102, max=83658, avg=604.06, stdev=1098.89 clat (usec): min=2, max=125231, avg=28851.81, stdev=11989.98 lat...
  14. High KVM CPU load when VM on ZFS

    Thanks! That explains it. What options do you recommend for some more realistic linear and random read/write performance, like CrystalDiskMark or ATTO does? As for ZFS and the CPU load: so far I think the overhead that ZFS requires (checksumming) simply bottlenecks the CPU. However, when enabling the...
  15. High KVM CPU load when VM on ZFS

    What I also don't understand is why this command, run directly on the Proxmox host's ZFS pool, only does about 35 MB/sec: fio --ioengine=libaio --filename=/pool/test --direct=1 --sync=1 --size=1GB --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio
  16. High KVM CPU load when VM on ZFS

    Hi guys, I've set up one of my new servers today for some testing: Xeon Silver CPU, 256 GB DDR4, Samsung 1733 NVMe drives. As long as the VM is running on local lvm-thin type storage, the disk performance in the VM is blazing fast (7 GB/sec read, 4 GB/sec write) with low CPU load. As soon as I run...
  17. Slower networks; iperf retr

    I might have already answered my own question; https://forum.mikrotik.com/viewtopic.php?t=128843 https://forum.mikrotik.com/viewtopic.php?t=137267
  18. Slower networks; iperf retr

    Hi guys, I am looking into some network issues I am having. In daily use, no issues are noticeable, but when performing speed tests, I can clearly see speeds dropping down to 300 Mbit/sec on Gbit lines. Windows VM (virtio 10G NIC) to Windows VM (virtio 10G NIC) routed via pfSense VM (10G) on...
  19. Low filetransfer performance in Linux Guest (ceph storage)

    The only downside to consumer SSDs, as far as I've noticed, is the low replication performance when Ceph needs to move data around. Regarding power failures: chances are slim (dual feed, UPS, etc.) and the MX500 SSDs have some more intelligence regarding sudden power loss. This cluster not...
  20. Low filetransfer performance in Linux Guest (ceph storage)

    I understand your reaction and agree that that is the best practice to get the most performance out of Ceph, and that the current solution is not optimal. However, I don't think this is the reason for the performance dropping to just 500 KB/sec, especially since all other benchmarks show pretty...
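Several of these results describe the same diagnostic pattern: iperf from the LAN to the pfSense VM reaches near line rate, while traffic NAT'ed out the WAN collapses to ~270/9 Mbit. A minimal sketch of that three-step check, assuming a hypothetical pfSense LAN address of 192.168.1.1 and an iperf3 server you control on the WAN side at 203.0.113.10 (both placeholders), with the iperf package installed on pfSense:

```shell
# Step 1: LAN host -> pfSense VM. Start "iperf3 -s" in a pfSense shell first.
# Near line rate here means the virtio NIC and the Proxmox bridge are fine.
iperf3 -c 192.168.1.1 -t 30

# Step 2: LAN host -> external server, i.e. through NAT. A large gap versus
# step 1 isolates the problem to pf/NAT processing rather than the NICs.
iperf3 -c 203.0.113.10 -t 30

# Step 3: in a pfSense shell, list dummynet limiters to check that none has
# ended up attached to the WAN interface rules.
dnctl pipe show
```

Result 5's root cause, a limiter that migrated to the WAN interface during the 2.5 -> 2.6 upgrade, would show up in the `dnctl` output, or under Firewall > Traffic Shaper > Limiters in the GUI.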
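On the fio side (results 14-16): the 35 MB/sec figure is expected, because `--sync=1 --bs=4K --iodepth=1` measures synchronous 4K write latency, not throughput; every write waits on the ZFS intent log. For CrystalDiskMark/ATTO-style numbers, the usual pairing is large sequential blocks at depth versus random 4K at depth. A sketch with placeholder paths and sizes (note that `--direct=1` is only honored on ZFS datasets with OpenZFS direct IO support; drop it on older pools):

```shell
# Sequential throughput: 1M blocks, queue depth 8 (roughly SEQ1M Q8T1).
fio --name=seq-read --ioengine=libaio --direct=1 --rw=read \
    --bs=1M --iodepth=8 --numjobs=1 --size=4G --runtime=60 \
    --time_based --filename=/pool/fio-test

# Random IOPS: 4K blocks, 32 outstanding IOs (roughly RND4K Q32T1).
fio --name=rand-read --ioengine=libaio --direct=1 --rw=randread \
    --bs=4K --iodepth=32 --numjobs=1 --size=4G --runtime=60 \
    --time_based --filename=/pool/fio-test
```

Comparing these two against the original single-threaded sync write makes it clear whether the pool is actually slow or merely being asked a latency question.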

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.