Search results

  1. Networking SDN Roadmap

    The main problem currently is that IPAM is not fully implemented yet, and OpenFlow controllers like Faucet need IPs to be registered manually at VM start. That's why EVPN is implemented currently: dynamic learning of IP/MAC, standard interoperability with physical switches (Arista, Cisco, ...)...
  2. Networking SDN Roadmap

    I'm working with a Proxmox user to add support for OVS OVN. But personally, I think OpenFlow already died 5~8 years ago (almost every commercial OpenFlow controller is dead). Do you use a specific OpenFlow controller in production?
  3. High CPU IO during DD testing on VMs

    Do you have compression enabled on your ZFS pool?
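
    A quick way to check is something like the following (the dataset name rpool/data is just an assumed example):

        # show the current compression setting on the dataset
        zfs get compression rpool/data
        # enable lz4; only data written after the change gets compressed
        zfs set compression=lz4 rpool/data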
  4. Migrating ESXi vSwitch0 to Proxmox Cluster layer 2

    The vmbr0.X are VLAN interfaces, not bridges, so you can't plug a VM into them; you need to use your vmbrX. (Or use an SDN vnet, which does the same thing, but you don't need to define it host by host.) Another possibility: simply define the VLAN tag on the VM NIC and use the main vmbr10 bridge.
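
    For example, tagging the NIC of a VM on the main bridge could look like this (VMID 100 and tag 20 are placeholders):

        # attach the VM NIC to vmbr10 and tag its traffic with VLAN 20
        qm set 100 --net0 virtio,bridge=vmbr10,tag=20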
  5. Migrating ESXi vSwitch0 to Proxmox Cluster layer 2

    Hi, the VLAN part seems to be OK. (You could also use SDN at the datacenter level to do the same, using a VLAN zone with a vnet instead of vmbr7; it generates exactly the same config.) I'm not sure about "bridge-ports ens2f0 ens2f1": you should group them in a bond, or you could have a network loop...
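
    A rough bond + bridge sketch for /etc/network/interfaces, assuming the switch side is set up for LACP (otherwise use bond-mode active-backup):

        auto bond0
        iface bond0 inet manual
            bond-slaves ens2f0 ens2f1
            bond-miimon 100
            bond-mode 802.3ad

        auto vmbr7
        iface vmbr7 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094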
  6. What about LVM+qcow2

    Hi, I have also looked at this. I'll try it soon and compare it vs GFS2 && OCFS2. The only diff vs oVirt is that Proxmox uses internal snapshots; I don't know if that plays fine with qcow2 on a block device. (That also means we can't shrink the block device after deleting the snapshot)...
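
    For reference, "internal" here means snapshots stored inside the qcow2 file itself, the kind you can manage with qemu-img (the image name is just an example):

        # create an internal snapshot inside the qcow2 image, then list them
        qemu-img snapshot -c before-test vm-100-disk-0.qcow2
        qemu-img snapshot -l vm-100-disk-0.qcow2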
  7. 100G Mellanox Connect-5 on AMD Epyc 7302P

    iperf3 is not multithreaded (only since the recent 3.16), so maybe you are limited by a single core. (Try iperf2 with the -P option, or launch multiple iperf3 instances in parallel.) Also try to increase the window size, to be sure you are not pps limited.
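
    For example, something like this (the address 10.0.0.2 and the ports are placeholders; each iperf3 client/server pair needs its own port):

        # iperf2: -P runs multiple parallel streams in one multithreaded process
        iperf -c 10.0.0.2 -P 8 -w 4M
        # or several single-threaded iperf3 processes in parallel
        iperf3 -s -p 5201 &            # on the server, repeat for 5202, ...
        iperf3 -c 10.0.0.2 -p 5201 -w 4M &
        iperf3 -c 10.0.0.2 -p 5202 -w 4M &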
  8. After update to 8.2.4 all nodes in cluster going grey

    Try to disable the storages, 1 by 1. Maybe 1 specific storage is blocking the pvestatd daemon.
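
    For example (the storage ID "mystore" is a placeholder):

        # see which storage hangs when queried
        pvesm status
        # temporarily disable a suspect storage, then re-enable it once identified
        pvesm set mystore --disable 1
        systemctl restart pvestatd
        pvesm set mystore --disable 0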
  9. SDN problems with Netbox as IPAM

    Sorry, I didn't have time. I will try next week.
  10. KSM & Ballooning Question

    Ballooning is not related to swap, so you can safely disable swap on the host && guest VMs. KSM only uses 1 core max (generally at 100%, but you can increase the sleep value in the KSM config file).
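
    On PVE the sleep value lives in ksmtuned; raising it might look like this (200 ms is just an example value):

        # /etc/ksmtuned.conf: a longer sleep between scan runs means less CPU used by KSM
        KSM_SLEEP_MSEC=200
        # apply the change
        systemctl restart ksmtuned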
  11. Backup fleecing

    I'm also currently testing it with Ceph as the fleecing storage, with the VM on the same RBD storage too (full NVMe cluster; the backup server is long distance, with HDDs). So far, no problem, no crash. (So I think I'll finally be able to migrate to PBS.)
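
    For anyone wanting to try it, fleecing can be enabled per backup job or on the vzdump command line; roughly like this (storage names are placeholders, and the exact option syntax is worth double-checking in man vzdump on 8.2+):

        # back up VM 100 to PBS, using a fast local storage for the temporary fleecing image
        vzdump 100 --storage pbs-remote --fleecing enabled=1,storage=local-nvme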
  12. SDN VLAN aware vs VXLAN

    Yes. (The SDN VLAN is defined at the datacenter level, then deployed/generated on each node locally.) It's not mandatory, you can keep it empty. (It's for IPAM, to auto-register your hostname/IP in your DNS server.)
  13. SDN VLAN aware vs VXLAN

    vlan-aware allows you to create an extra VLAN tag (at the VM NIC level) on top of the VXLAN tunnel (VLAN over VXLAN). (You don't really need it unless you have a very specific setup.)
  14. Hypervisor Showdown: Performance of Leading Virtualization Solutions

    It's impossible that OpenSSL is so slow if they use the default x86-64-v2-AES model. Or it's a benchmark of PVE 7 with the old kvm64 model, but I don't see why they wouldn't test PVE 8 in that case... (as PVE 7 is EOL next month).
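
    For comparison, it's easy to reproduce on your own setup (VMID 100 is a placeholder):

        # use the PVE 8 default CPU model, which exposes AES-NI to the guest
        qm set 100 --cpu x86-64-v2-AES
        # then, inside the guest, a quick AES throughput check
        openssl speed -evp aes-256-gcm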
  15. Migrating ESXi 6.5 VM to Proxmox keeping current Intel E1000 NIC

    Windows uses the PCI slot position to detect NICs. The PCI slot is different between Proxmox && VMware, so no, it's impossible. (BTW, you should use a virtio-net NIC and install the virtio drivers.)
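
    E.g. after installing the virtio-win drivers in the guest, switch the NIC model (VMID and bridge are placeholders; Windows will see it as a new NIC, so the IP config has to be redone):

        # replace the emulated e1000 with a paravirtualized virtio-net NIC
        qm set 100 --net0 virtio,bridge=vmbr0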
  16. SDN VLAN speed performance (proxmox8.2.4) my bad you can delete this post

    Have you changed the IP on 1 server only? If yes, the traffic is going through your gateway, as they are on different subnets.
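
    You can confirm it from one of the servers (the address is a placeholder):

        # if the output shows "via <gateway>" instead of a direct link, the hosts are on different subnets
        ip route get 192.168.20.10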
  17. latency on ceph ssd

    See this old blog: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ The main reason is sync writes: without PLP, the SSD needs to write directly to a full NAND block. For example, if you need to write 4k, it'll write a full NAND block of 32MB (size of...
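
    The test usually quoted from that blog is a 4k synchronous-write fio run against the raw device (WARNING: it writes to the disk, so only use a blank one; /dev/sdX is a placeholder):

        # single job, queue depth 1, 4k sync writes: roughly the pattern of a Ceph journal/WAL
        fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
            --numjobs=1 --iodepth=1 --runtime=60 --time_based \
            --group_reporting --name=journal-test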
  18. [SOLVED] SDN traffic allowed

    Check your MTU. (If your physical network is 1500, you should set 1450 maximum for your VMs, to leave room for the VXLAN overhead.)
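
    For example (VMID, vnet name and target address are placeholders; when editing an existing NIC, keep its current MAC in the net0 string):

        # cap the VM NIC MTU to leave room for the VXLAN overhead
        qm set 100 --net0 virtio,bridge=myvnet,mtu=1450
        # verify end to end with a non-fragmenting ping (1422 payload + 28 bytes of headers = 1450)
        ping -M do -s 1422 10.0.0.2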
  19. latency on ceph ssd

    Don't use consumer SSDs with Ceph or ZFS; you need PLP to handle sync writes.
