Maybe relevant for your Windows case (I don't know which driver version you use):
" Latest latest virtio driver (network) for Windows drops lots of packets"
https://bugzilla.redhat.com/show_bug.cgi?id=1451978
Peixiu Hou 2017-07-06 01:16:17 EDT
Reproduced this issue with virtio-win-prewhql-139, the...
It's active/backup, for disaster recovery for example.
You have VMs on DC1 with ceph1, mirroring to DC2 with ceph2 (standby).
It's per pool, so it's possible to do dual active/backup with 2 pools, with VMs running on their master pool on each side.
For rbd, you can use rbd mirror with async replication to another ceph cluster (see the sketch below).
For radosgw, you can mirror objects to a remote ceph cluster.
But for cephfs, there is no async replication currently.
(I think rados-level async replication is on the ceph roadmap, but currently it's done client side...)
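Roughly, the rbd mirror setup looks like this (just a sketch; the pool name "rbd", the image name and the cluster/peer names are examples, check the rbd-mirror docs for your ceph version):

# enable per-pool mirroring on both clusters
rbd mirror pool enable rbd pool
# register each cluster as a peer of the other one
rbd mirror pool peer add rbd client.admin@ceph2    (run on ceph1)
rbd mirror pool peer add rbd client.admin@ceph1    (run on ceph2)
# images need the journaling feature to be replicated
rbd feature enable rbd/vm-100-disk-1 journaling
# then run the rbd-mirror daemon on the standby cluster to pull the changes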
The main problem with the move disk option is that qemu copies sequentially, in small 4k blocks.
You can reduce latency by disabling cephx auth (example below) and by disabling all debug output in ceph.conf (on the ceph nodes, but also on the client node):
[global]
debug asok = 0/0
debug auth = 0/0
debug buffer = 0/0...
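For cephx, the relevant [global] options look like this (only do this on a trusted private network, and the settings must match on the ceph nodes and on the clients; a sketch):
auth cluster required = none
auth service required = none
auth client required = none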
You can install ceph on specific nodes only (3 nodes minimum for monitors; OSDs could be on 2 nodes only with size=2).
But you need to install the ceph packages on the other nodes as well (packages only (pveceph install), without creating daemons (pveceph create...)), so they can manage/access ceph.
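Roughly like this (a sketch with the proxmox 5 era pveceph commands; the network and the disk device are just examples):

# on every node, ceph or not: install the packages only
pveceph install
# once, on the first monitor node: write the initial ceph.conf
pveceph init --network 10.10.10.0/24
# on each of the 3 monitor nodes:
pveceph createmon
# on the 2 osd nodes, for each disk:
pveceph createosd /dev/sdb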
AFAIK, the move disk option moves block by block in 4K chunks, and sequentially, so it won't be faster than 1 disk write + network latency.
I'm not sure the journal is helping much for writes here.
Is the source drive configured as writeback? It could help for the migration to the target ceph as...
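If not, you could try setting it on the source disk before the move (a hedged example, the vmid, bus/slot and volume name are placeholders):

qm set 100 -scsi0 local-lvm:vm-100-disk-1,cache=writeback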
I don't think ionice works with zfs (zfs has its own io scheduler).
AFAIK, ionice only works with the cfq scheduler (and proxmox uses deadline by default).
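You can check which scheduler a disk is using with (sda is just an example):

cat /sys/block/sda/queue/scheduler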
What do you mean by stable performance?
If the MB/s differs, it's because of the sparse % (zeroed blocks), so it's normal that it's faster.
The last update increased the block size for backups (I think it was 4K before and is now 64k or 128k, not sure).
Is it faster if you back up to a local storage?
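For example, something like this (100 and "local" are placeholders for your vmid and a local storage):

vzdump 100 --storage local --mode snapshot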
Normally the ceph client should be backward compatible, but I'm not sure the ceph devs test every version combination.
An external Jewel cluster works fine with librbd jewel or luminous on proxmox 5.
I don't have a hammer cluster to test with.
Maybe ask on the ceph dev mailing list? It could be a bug.
I think it can be done using aliases and a cron script which runs each minute, does a dns lookup and updates the ip of the alias (rough sketch below).
(BTW, the IPv6 firewall is fully supported in proxmox since v4.)
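A rough sketch of such a script (the alias name "myhost" and the hostname are placeholders, and it assumes a cluster-level firewall alias that already exists):

#!/bin/bash
# resolve the hostname and update the firewall alias with the current ip
IP=$(dig +short myhost.example.com | grep -m1 -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$')
[ -n "$IP" ] && pvesh set /cluster/firewall/aliases/myhost -cidr "$IP"

Then call it from a "* * * * *" cron entry.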
I don't recommend a cluster file system for hosting vm images (too much overhead, lock contention, ...).
Use a ceph rbd block device (not cephfs), best storage ever!
(I've been running a 100TB ssd cluster for 3 years now, never a failure or hang.)
Maybe your problem is different (different storage/network configuration).
Maybe you can try to install pve-kernel 4.4 and test, and if that doesn't work, test pve-qemu-kvm 2.7 from proxmox 4 (I have posted the link in previous posts).
When a backup is running, if a new write comes in on a block that has not been backed up yet, that block is first written to the backup storage before the new write is acked (this only occurs on the first write to each block).
So if you have a lot of writes on different blocks and a slow backup storage or...
Do you have
qemu-server: 4.0-112
on both nodes?
mainly for this fix:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=b2e4d3982fe7f6a413852c6ef97814907dbb8fea
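You can check it on each node with:

pveversion -v | grep qemu-server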
If you still have the problem with kernel 4.4 + pve-qemu-kvm 2.7 (both from proxmox 4) on proxmox 5, I really don't understand what it can be...
Are you 100% sure that you don't have the same problem with proxmox 4 on this specific server?
I have these 3 packages installed, but that was on a proxmox 5 upgraded from proxmox 4.
They come from the jessie repository.
You can install them safely, no conflict with newer packages.
http://ftp.us.debian.org/debian/pool/main/g/gnutls28/libgnutls-deb0-28_3.3.8-6+deb8u7_amd64.deb...
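Something like this should work (shown for the first package only, repeat for the other .deb files):

wget http://ftp.us.debian.org/debian/pool/main/g/gnutls28/libgnutls-deb0-28_3.3.8-6+deb8u7_amd64.deb
dpkg -i libgnutls-deb0-28_3.3.8-6+deb8u7_amd64.deb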
You can't mix proxmox 3 && 4 nodes, because of corosync2 in proxmox 4, which is not compatible with corosync1 in proxmox 3.
If you only have VMs and you can have downtime, simply upgrade to proxmox 4 and reboot all nodes; you won't lose your config.
It's possible to do it without downtime and live...