Hi, do you mean access from the VM to the exit node's ssh or webui? If yes, this is expected, as the node services are running in a different vrf. The exit node is only used as a router.
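For illustration, a minimal sketch of how to see this on the exit node (the zone name "myzone" is a placeholder, so the vrf here would be vrf_myzone):

# list the vrfs on the exit node (one per evpn zone)
ip vrf show

# sshd and pveproxy listen in the default vrf, but traffic from the VM
# arrives in the zone vrf, so it never reaches them. You can check what
# a service looks like from inside the zone vrf, e.g.:
ip vrf exec vrf_myzone curl -k https://localhost:8006   # fails, as expected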
Before adding it to the gui,
I think we need to add a deny for iothread with multiple disks, as it's buggy.
We also need to verify how it works with zfs local replication (maybe simply forbid it).
Ok, I have tested with iothread, and I have problems: the migration crashes or the qemu process crashes.
Also with only 1 disk.
So it seems that qemu is currently buggy for drive-mirror + live migration at the same time when iothread is enabled.
https://bugzilla.redhat.com/show_bug.cgi?id=1539530
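For reference, a minimal sketch of a setup that hits this combination (storage and disk names are examples; iothread needs the virtio-scsi-single controller):

# /etc/pve/qemu-server/100.conf (excerpt)
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,iothread=1

# live migration with local disks runs drive-mirror under the hood
qm migrate 100 targetnode --online --with-local-disks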
yes, it should work.
maybe a little bit overkill.
you can give a try to openATTIC:
https://www.openattic.org/
or wait for the next ceph release (mimic), which should have an integrated dashboard with management (create/delete/update).
you don't need to define vlan interfaces in /etc/network/interfaces.
if you define a vlan tag in the vm configuration, proxmox will create the bond0.[vlan] interface and a vmbr0v[vlan] bridge for you.
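A minimal sketch, assuming vmbr0 sits on bond0 and the vm uses tag 100 (addresses and ids are examples):

# /etc/network/interfaces -- only the untagged bridge is declared
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

# in the vm config, only the tag is set:
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=100

# at vm start, proxmox creates bond0.100 and a vmbr0v100 bridge automatically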
Hi, I'm currently working on an implementation of vxlan + bgp evpn. This should give us something like vmware nsx (with an anycast gateway on each proxmox host). This will work with linux bridges.
I'll try to send patches next month.
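To give an idea of the plumbing (a rough sketch of the building blocks, not the actual patches; vni, asn and addresses are made up):

# vxlan tunnel enslaved to a plain linux bridge
ip link add vxlan100 type vxlan id 100 dstport 4789 local 10.0.0.1 nolearning
ip link add vmbr100 type bridge
ip link set vxlan100 master vmbr100
ip link set vxlan100 up
ip link set vmbr100 up

# anycast gateway: the same gateway ip is configured on vmbr100 on every node
ip addr add 192.168.100.1/24 dev vmbr100

# frr.conf excerpt: bgp evpn advertises the mac/ip routes
router bgp 65000
 neighbor 10.0.0.254 remote-as 65000
 address-family l2vpn evpn
  neighbor 10.0.0.254 activate
  advertise-all-vni
 exit-address-family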
note that since luminous + bluestore, jemalloc doesn't work well (because of rocksdb).
Ceph devs said that tcmalloc is fine now, since they have switched to the async messenger.
if you are concerned about dataloss, use cache=none.
rbd_cache is 32mb (it can be tuned), so even with fsync you can lose up to 32mb (but you won't get filesystem corruption).
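A sketch of the two knobs involved (the disk line is an example; the values shown are the ceph defaults):

# vm config: per-disk cache mode (cache=none also disables the rbd cache for that disk)
scsi0: ceph-storage:vm-100-disk-0,cache=none

# /etc/ceph/ceph.conf -- client-side rbd cache tuning
[client]
rbd cache = true
rbd cache size = 33554432        # 32mb by default, can be tuned
rbd cache max dirty = 25165824   # dirty bytes allowed before writeback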