The main problem currently is that IPAM is not fully implemented yet, so an OpenFlow controller like Faucet needs IPs to be registered manually at VM start.
That's why EVPN is what is implemented currently: dynamic learning of IP/MAC, standard interoperability with physical switches (Arista, Cisco, ...)...
I'm working with a Proxmox user to add support for OVS/OVN.
But personally, I think OpenFlow already died 5~8 years ago. (Almost every commercial OpenFlow controller is dead.)
Do you use a specific OpenFlow controller in production?
The vmbr0.X interfaces are VLAN interfaces, not bridges, so you can't plug a VM into them.
So you need to use your vmbrX bridges. (Or use an SDN vnet, which does the same thing, but you don't need to define it host by host.)
Another possibility: simply define the VLAN tag on the VM NIC and use the main vmbr10 bridge.
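A sketch of that last option (bond0 and VM ID 101 are just examples; the vlan-aware options are the important part):

# /etc/network/interfaces
auto vmbr10
iface vmbr10 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# then tag the guest NIC instead of creating one bridge per vlan:
qm set 101 --net0 virtio,bridge=vmbr10,tag=20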
Hi,
The VLAN part seems to be OK. (You could also use SDN at the datacenter level to do the same, using a VLAN zone with a vnet instead of vmbr7; it generates exactly the same config.)
I'm not sure about
"bridge-ports ens2f0 ens2f1"
You should group them in a bond, otherwise you could create a network loop...
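A sketch of what I mean, assuming LACP (802.3ad) is configured on the switch side (use active-backup if it's not):

auto bond0
iface bond0 inet manual
        bond-slaves ens2f0 ens2f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr7
iface vmbr7 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0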
Hi, I have also looked at this.
I'll try it soon and compare it vs GFS2 && OCFS2.
The only difference vs oVirt is that Proxmox uses internal snapshots; I don't know if that plays fine with qcow2 on a block device. (That also means we can't shrink the block device after deleting the snapshot)...
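For reference, "internal snapshot" means the snapshot data lives inside the qcow2 image itself. You can play with that outside Proxmox using qemu-img (the path is just an example):

qemu-img snapshot -c mysnap /path/to/vm-disk.qcow2   # create an internal snapshot
qemu-img snapshot -l /path/to/vm-disk.qcow2          # list internal snapshots
qemu-img snapshot -d mysnap /path/to/vm-disk.qcow2   # delete it (the space stays allocated inside the image)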
iperf3 was not multithreaded until recently (before 3.16), so maybe you are core-limited.
(Try iperf2 with the -P option, or launch multiple iperf3 instances in parallel.)
And try to increase the window size, to be sure you're not pps-limited.
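Something like this (the IP and ports are placeholders):

iperf -c 10.0.0.2 -P 8              # iperf2, 8 parallel streams
iperf3 -c 10.0.0.2 -p 5201 -P 4 &   # or several iperf3 processes in parallel,
iperf3 -c 10.0.0.2 -p 5202 -P 4 &   # each one against its own server port
iperf3 -c 10.0.0.2 -w 4M            # and/or a bigger TCP window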
Ballooning is not related to swap, so you can safely disable swap on the host && guest VMs.
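For example (assuming swap is defined in /etc/fstab):

swapoff -a                               # disable swap now
sed -i '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap entry so it stays off after reboot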
KSM only uses 1 core max (generally at 100%, but you can increase the sleep value in the KSM config file).
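On Proxmox that config file is /etc/ksmtuned.conf; a sketch (100ms is just an example value):

# /etc/ksmtuned.conf
KSM_SLEEP_MSEC=100
# then: systemctl restart ksmtuned
# or change it live: echo 100 > /sys/kernel/mm/ksm/sleep_millisecs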
I'm also currently testing it with Ceph as fleecing storage, with the VM on the same RBD storage too (full NVMe cluster).
(The backup server is long-distance, with HDDs.)
So far, no problem, no crash. (So I think I'll finally be able to migrate to PBS.)
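If you want to try it, fleecing can be enabled per backup job, or on the command line with something like this (storage names and VM ID are placeholders; syntax from PVE 8.2+ as far as I remember):

vzdump 101 --storage pbs-backup --fleecing enabled=1,storage=ceph-nvme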
Yes. (The SDN VLAN vnet is defined at the datacenter level, then deployed/generated on each node locally.)
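Roughly: the datacenter-level definition lives in /etc/pve/sdn/ (shared across the cluster), something like this (zone/vnet names are examples):

# /etc/pve/sdn/zones.cfg
vlan: myzone
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg
vnet: vnet20
        zone myzone
        tag 20

and when you apply it, each node generates the matching bridge in /etc/network/interfaces.d/sdn.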
It's not mandatory, you can keep it empty. (It's for IPAM, to auto-register your hostname/IP in your DNS server.)
Vlan-aware allows you to add an extra VLAN tag (at the VM NIC level) on top of the VXLAN tunnel.
(VLAN over VXLAN.)
(You don't really need it unless you have a very specific setup.)
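If you do have such a setup, it's just the normal tag option on the VM NIC, on top of the vnet (VM ID and vnet name are examples):

qm set 101 --net0 virtio,bridge=vxnet1,tag=30   # inner vlan 30, carried inside the vxlan tunnel of vnet vxnet1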
It's impossible for openssl to be so slow if they used the default x86-64-v2-AES CPU model.
Or it's a benchmark of PVE 7 with the old kvm64 model, but I don't see why they wouldn't test PVE 8 in that case... (as PVE 7 is EOL next month).
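It's easy to check; a quick sanity test (VM ID is an example):

qm set 101 --cpu x86-64-v2-AES    # on the host: exposes AES-NI to the guest
# inside the guest:
grep -m1 -wo aes /proc/cpuinfo    # the aes flag must be visible
openssl speed -evp aes-256-gcm    # with AES-NI you should see multi-GB/s on the large block sizes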
Windows uses the PCI slot position to detect NICs.
The PCI slot is different between Proxmox && VMware, so no, it's impossible.
(BTW, you should use a virtio-net NIC and install the virtio drivers.)
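E.g. switch the model while keeping the existing MAC address (VM ID, MAC and bridge are placeholders; install the virtio-win drivers in the guest first):

qm config 101 | grep net0                                   # note the current MAC address
qm set 101 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0     # same MAC, but virtio-net model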
see this old blog:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
The main reason is sync writes: without PLP, the SSD needs to write directly to a full NAND block.
For example, if you need to write 4k, it'll write a full NAND block of 32MB (size of...
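If you want to test a drive yourself, the sync-write benchmark from that blog post is something like this (destructive on the target device; /dev/sdX is a placeholder):

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
# a datacenter ssd with plp typically sustains thousands of sync-write iops here, a consumer ssd only a few hundred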