Ok, I have tested with iothread, and I have problems: the migration crashes or the qemu process crashes.
Also with only 1 disk.
So it seems that qemu is currently buggy for drive-mirror + live migration at the same time when iothread is enabled.
https://bugzilla.redhat.com/show_bug.cgi?id=1539530
yes, it should work.
maybe a little bit overkill.
you can give it a try at
https://www.openattic.org/
or wait for the next ceph release (Mimic), which should have an integrated dashboard with management (create/delete/update).
you don't need to define vlan interfaces in /etc/network/interfaces.
if you define a vlan tag in the vm configuration, proxmox will create the bond0.[vlan] interface and a vmbr0v[vlan] bridge for you.
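For example, a minimal sketch (the vmid, vlan 20 and the MAC address are made up here):

    # /etc/pve/qemu-server/100.conf : only the tag on the nic is needed
    net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=20

    # at vm start, proxmox then creates automatically:
    #   bond0.20  (vlan interface on top of the bond)
    #   vmbr0v20  (bridge where the vm nic is plugged)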
Hi, I'm currently working on an implementation of vxlan + bgp evpn. This should give us something like vmware nsx (with an anycast gateway on the proxmox hosts). This will work with the linux bridge.
I'll try to send patches next month.
note that since luminous + bluestore, jemalloc doesn't work well (because of rocksdb).
Ceph devs said that tcmalloc is fine now, since they have switched to the async messenger.
if you are concerned about data loss, use cache=none.
rbd_cache is 32MB (can be tuned), so between two fsyncs you can lose up to 32MB (but you won't get filesystem corruption).
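If you want to shrink that window, the cache can be tuned smaller on the client side; a minimal ceph.conf sketch (the 8MB value is just an example):

    [client]
    rbd cache = true
    # default is 33554432 (32MB); a smaller cache = less data at risk
    rbd cache size = 8388608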
@David : have you tried with a bigger file size? (as it's random write, with a small file you have more chance that 2 blocks land near each other, so writeback is useful in this case).
if you enable cache=writeback on the vm, it'll enable rbd_cache=true.
ceph has a safety feature enabled by default:
rbd cache writethrough until flush = true
That means it waits to receive a first fsync before really enabling writeback, so you are safe to enable writeback.
Writeback helps for...
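On the proxmox side it's just the cache option on the disk line; a sketch (vmid and storage name are made up):

    # /etc/pve/qemu-server/100.conf
    # cache=writeback on the disk is what turns on rbd_cache=true for this drive
    virtio0: ceph-rbd:vm-100-disk-1,cache=writeback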
The problem comes from network latency + ceph latency. If you copy 1 file sequentially with small blocks, it's iodepth=1 (same with the dd command, for example).
For each block you pay the full network latency; at 0.1ms per write, for example, you'll be able to do at most 10000 iops.
if you do it with 4k...
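To make the numbers concrete: with iodepth=1, iops ≈ 1 / per-op latency, so 0.1ms caps you at ~10000 iops, i.e. ~40MB/s with 4k blocks. Something like this fio job shows the pattern (the file path is just an example):

    # sequential 4k writes, one in flight at a time, like a naive file copy / dd
    fio --name=qd1-test --filename=/tmp/fio-test --size=1G \
        --rw=write --bs=4k --iodepth=1 --ioengine=libaio --direct=1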
Hi,
you can also use this external script, to back up with rbd snapshots and the rbd export-diff feature.
https://github.com/EnterpriseVE/eve4pve-barc
Works very well and is a lot faster.
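The underlying mechanism, if you want to see what the script automates (pool, image and snapshot names are made up):

    # initial backup: snapshot, then export everything up to that snapshot
    rbd snap create rbd/vm-100-disk-1@backup1
    rbd export-diff rbd/vm-100-disk-1@backup1 /backup/vm-100-disk-1.full

    # next runs: export only the blocks changed since the previous snapshot
    rbd snap create rbd/vm-100-disk-1@backup2
    rbd export-diff --from-snap backup1 rbd/vm-100-disk-1@backup2 /backup/vm-100-disk-1.diff1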
Hi,
I just saw your message on https://www.frsag.org/pipermail/frsag/2018-January/009207.html.
I have sent a patch to fix it recently, and it should be fixed with the latest proxmox updates :)
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=87955688fda3f11440b7bc292e22409d22d8112f
Sorry, but that just means that this specific poc only works on this "outdated" 4.9 kernel. That doesn't mean it's impossible to do the same on the latest kernels. (But yes, it's very difficult to exploit, though not impossible.)