Search results

  1. Recommended Max Ceph Disks / Nodes

    Yes, for low-iodepth / small-block workloads it's better to have high CPU frequencies to speed things up. If you are only doing big-block workloads, it doesn't matter too much.
  2. [SOLVED] 'guest-fsfreeze-freeze' failed - got timeout

    Do you have the guest agent enabled, and is the guest-agent service running in your VM?
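    The check above can be sketched as commands; a minimal sketch, assuming a Proxmox host, a Linux guest, and VM ID 100 (the VM ID and package manager are illustrative assumptions):

    ```shell
    # On the Proxmox host: enable the agent option for the VM
    # (the VM needs a power cycle afterwards so the virtio-serial device is added)
    qm set 100 --agent 1

    # Inside a Debian/Ubuntu guest: install and start the agent service
    apt-get install -y qemu-guest-agent
    systemctl enable --now qemu-guest-agent

    # Back on the host: verify the agent responds before trying fsfreeze
    qm agent 100 ping
    ```

    If `qm agent 100 ping` times out, the fsfreeze call during backup will time out as well.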
  3. Intel DPDK Support for OpenVSwitch

    I'm currently looking to add vhost-user support; it's a missing part too. After that, adding some kind of custom network plugin (tap_plug/unplug) could be great. I'm also looking at adding support for the 6WIND virtual accelerator (commercial, based on DPDK, but working with OVS and Linux...
  4. Live Migration without a cluster?

    I sent some patches to the Proxmox mailing list some time ago to do this, but I never had time to clean them up (live migration + live storage migration across servers in different clusters, or without a cluster). I need to rebase them to get them working on the latest Proxmox git.
  5. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    Also add this on your ceph.conf clients: [global] debug asok = 0/0 debug auth = 0/0 debug buffer = 0/0 debug client = 0/0 debug context = 0/0 debug crush = 0/0 debug filer = 0/0 debug filestore = 0/0 debug finisher = 0/0 debug heartbeatmap = 0/0 debug journal = 0/0 debug journaler = 0/0...
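    The debug list in the snippet is truncated; as an illustration, a commonly used full set of that era (an assumption based on typical ceph.conf examples, not the exact list from the post) looks like:

    ```ini
    [global]
    debug asok = 0/0
    debug auth = 0/0
    debug buffer = 0/0
    debug client = 0/0
    debug context = 0/0
    debug crush = 0/0
    debug filer = 0/0
    debug filestore = 0/0
    debug finisher = 0/0
    debug heartbeatmap = 0/0
    debug journal = 0/0
    debug journaler = 0/0
    debug lockdep = 0/0
    debug mds = 0/0
    debug mon = 0/0
    debug monc = 0/0
    debug ms = 0/0
    debug objecter = 0/0
    debug osd = 0/0
    debug paxos = 0/0
    debug perfcounter = 0/0
    debug rados = 0/0
    debug rbd = 0/0
    debug throttle = 0/0
    debug timer = 0/0
    debug tp = 0/0
    ```

    The `X/Y` values are log level / memory level; `0/0` silences both, which trims per-op logging overhead on latency-sensitive clients.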
  6. Recommended Max Ceph Disks / Nodes

    See: http://docs.ceph.com/docs/jewel/start/hardware-recommendations/ The Ceph docs say something like: 2 OSDs per core (with hyperthreading), 2 GB RAM per OSD. It's better to have one disk controller per 8 disks (avoid oversubscribing the dataplane). Then if you need low latency / a lot of small random IOPS, try...
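    The rules of thumb above can be turned into a back-of-the-envelope sizing check; a sketch, where the 12-OSD node is an illustrative assumption:

    ```shell
    # Sizing sketch: ~2 OSDs per core (with HT), ~2 GB RAM per OSD,
    # and 1 disk controller per 8 disks.
    OSDS=12
    CORES=$(( (OSDS + 1) / 2 ))         # ~2 OSDs per HT core  -> 6 cores
    RAM_GB=$(( OSDS * 2 ))              # ~2 GB per OSD        -> 24 GB
    CONTROLLERS=$(( (OSDS + 7) / 8 ))   # 1 controller / 8 disks -> 2 controllers
    echo "cores=$CORES ram_gb=$RAM_GB controllers=$CONTROLLERS"
    ```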
  7. Best setup for 4xSSD RAID10

    If you don't need ZFS features like replication to another server, I'd go with hardware RAID10 (without cache) + lvm-thin for snapshots.
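    The lvm-thin setup on top of the hardware RAID can be sketched as follows; a hypothetical sketch, assuming a volume group named `pve` already exists on the RAID10 device (the names and sizes are examples, not from the post):

    ```shell
    # Create a thin pool inside the existing volume group
    lvcreate -L 500G --thinpool data pve

    # Register it as a Proxmox storage so VM disks get thin volumes
    # (and therefore cheap snapshots)
    pvesm add lvmthin local-thin --vgname pve --thinpool data
    ```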
  8. Blue screen with 5.1

    What is your physical CPU model?
  9. Blue screen with 5.1

    Interesting. With Core 2, I found this note on CentOS: "Limited CPU support for Windows 10 and Windows Server 2016 guests. On a Red Hat Enterprise 6 host, Windows 10 and Windows Server 2016 guests can only be created when using the following CPU models: * the Intel Xeon E series * the Intel...
  10. Blue screen with 5.1

    From Microsoft support: https://support.microsoft.com/en-ph/help/2902739/stop-error-0x109-critical-structure-corruption-on-a-vmware-virtual-mac It seems to be related to virtual CPU flags; maybe a regression in KVM, or a new flag being sent. What is your VM CPU model? kvm64? host? Something else?
  11. Windows Guest hangs during Backup

    Maybe for your Windows case (I don't know which driver version you use): "Latest virtio driver (network) for Windows drops lots of packets" https://bugzilla.redhat.com/show_bug.cgi?id=1451978 Peixiu Hou 2017-07-06 01:16:17 EDT: Reproduced this issue with virtio-win-prewhql-139, the...
  12. Proxmox VE 5.1 and Ceph Filesystem

    It's active/backup, for disaster recovery for example: you have VMs on DC1 with ceph1, mirroring to DC2 with ceph2 (standby). It's per pool, so it's possible to do a dual active/backup setup with 2 pools, with VMs running on their master pool on each side.
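    The per-pool setup described above can be sketched with the `rbd mirror` commands; a hypothetical sketch, where the pool name `vms-dc1`, the peer user `client.mirror`, and the cluster name `dc2` are all illustrative assumptions:

    ```shell
    # On DC1's cluster: enable mirroring for the whole pool
    rbd mirror pool enable vms-dc1 pool

    # Register the remote cluster as a peer for that pool
    # (an rbd-mirror daemon must run on the standby side to pull changes)
    rbd mirror pool peer add vms-dc1 client.mirror@dc2
    ```

    Repeating this with the roles reversed for a second pool gives the dual active/backup layout mentioned in the post.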
  13. Proxmox VE 5.1 and Ceph Filesystem

    For RBD, you can use rbd-mirror with async replication to another Ceph cluster. For radosgw, you can mirror objects to a remote Ceph cluster. But for CephFS, there is no async replication currently. (I think rados async replication is on the Ceph roadmap), but currently it's done client side...
  14. Ceph Performance

    The main problem with the move-disk option is that qemu moves data sequentially in small 4K blocks. You can reduce latency by disabling cephx auth, and also by disabling all debug in ceph.conf (on the Ceph nodes, but also on the client node): [global] debug asok = 0/0 debug auth = 0/0 debug buffer = 0/0...
  15. Ceph Cluster

    You can install Ceph on specific nodes (3 nodes minimum for monitors; OSDs could be on only 2 nodes with size=2). But you need to install the Ceph packages on the other nodes as well (packages only, with pveceph install, without creating daemons with pveceph create...), so those nodes can manage Ceph.
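    The split described above can be sketched as commands; a sketch for the PVE 5 era, where the network CIDR is an illustrative assumption:

    ```shell
    # On EVERY node (including non-storage ones): install the Ceph
    # packages only, so the node can talk to and manage the cluster
    pveceph install

    # On the storage nodes only: initialize and create daemons
    pveceph init --network 10.0.0.0/24   # example cluster network
    pveceph createmon                    # repeat on 3 nodes for quorum
    ```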
  16. Ceph Performance

    AFAIK, the move-disk option moves data block by block in 4K chunks, and sequentially, so it won't be faster than a single disk write + network latency. I'm not sure the write journal helps much here. Is the source drive configured as writeback? It could help the migration to the target Ceph, as...
  17. Backup performance issues

    I don't think ionice works with ZFS (as ZFS has its own I/O scheduler). AFAIK, ionice only works with the CFQ scheduler (and Proxmox uses deadline by default).
  18. Backup performance issues

    For qemu, vzdump backs up blocks, not files, so there is no impact here if you have millions of small files. For LXC, indeed, a lot of files can slow down the backup.
  19. Backup performance issues

    What do you mean by stable performance? If the MB/s differs, it's because of the sparse % (zero blocks), so it's normal that it's faster. The last update increased the block size for backup (I think it was 4K before, and now 64K or 128K, not sure). Is it faster if you back up to local storage?
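    The sparse effect on the reported rate can be worked through with some quick arithmetic; the disk size, sparse percentage, and write speed below are illustrative assumptions:

    ```shell
    # Why sparse disks report a higher MB/s: zero blocks are skipped,
    # but the rate is computed against the full disk size.
    DISK_MB=102400        # 100 GB virtual disk
    SPARSE_PCT=40         # 40% of it is zero blocks
    WRITE_MBS=100         # real throughput for non-zero data
    DATA_MB=$(( DISK_MB * (100 - SPARSE_PCT) / 100 ))   # 61440 MB actually written
    ELAPSED_S=$(( DATA_MB / WRITE_MBS ))                # 614 s wall time
    APPARENT_MBS=$(( DISK_MB / ELAPSED_S ))             # reported ~166 MB/s
    echo "apparent rate: ${APPARENT_MBS} MB/s over ${ELAPSED_S}s"
    ```

    So two backups of the same VM can legitimately report different MB/s figures just because the sparse fraction changed between runs.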
  20. Proxmox 5.0: Add Ceph RBD (external) running Ceph jewel or hammer

    Normally, the Ceph client should be backward compatible, but I'm not sure the Ceph devs test every version. External Jewel is working fine, with librbd Jewel or Luminous on Proxmox 5. I don't have Hammer to test. Maybe ask on the Ceph dev mailing list? It could be a bug.