Search results

  1. Wrong CPU cache stats in all VMs

    seem to be hardcoded in qemu: https://git.qemu.org/?p=qemu.git;a=blob_plain;f=target/i386/cpu.c;hb=HEAD

      #define L2_SETS 4096
      #define L3_N_SETS 16384
  2. HA and different datacenters + IPs, how does that work?

    if you manage your own network (routers/bgp), you can change the bgp announcement to fail over the ips to another datacenter. if you don't manage your network, then for web traffic you can deploy a reverse proxy in each datacenter and an external load balancer (cloudflare for example), then manage your vm ips in a private...
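
    A minimal sketch of the reverse-proxy idea above, assuming HAProxy; the names and backend addresses are placeholders, not anything from the original post:

      # haproxy.cfg fragment: one server entry per datacenter
      frontend web
          bind *:80
          default_backend datacenters

      backend datacenters
          # the external load balancer (cloudflare, for example) would point
          # at the frontend in each datacenter; here dc2 is a warm backup
          server dc1 203.0.113.10:80 check
          server dc2 198.51.100.10:80 check backup
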
  3. Dell Openmanage VE 5.0

    on my side, I have "omconfig" correctly installed at /opt/dell/srvadmin/bin/omconfig (apt-get install srvadmin-all)
  4. Dell Openmanage VE 5.0

    use the openmanage jessie repo and install the 2 missing debs manually.
  5. Two clusters 4.x and 5.x in one network

    yes, no problem. Just don't set up the same cluster name (because the multicast address is computed from the cluster name).
  6. Dell Openmanage VE 5.0

    you need 2 packages from jessie to get openmanage 8.4 working on stretch currently.

      wget http://ftp.us.debian.org/debian/pool/main/o/openslp-dfsg/libslp1_1.2.1-10+deb8u1_amd64.deb
      dpkg -i libslp1_1.2.1-10+deb8u1_amd64.deb
      wget...
  7. PVE 5.1: KVM broken on old CPUs

    https://git.proxmox.com/?p=pve-kernel.git;a=summary

      25 hours ago, Fabian Grünbichler: bump version to 4.13-28, bump ABI to 4.13.8-2-pve
      25 hours ago, Fabian Grünbichler: revert mmu changes causing bluescreens

    so...
  8. Blue screen with 5.1

    It'll reduce performance a little bit, but not too much (virtio devices are still accelerated). I'll try to apply the patches to qemu tomorrow and build a deb. If you have time to test it, that would be great.
  9. Blue screen with 5.1

    any news for today? just wondering if this qemu fix for hyperv and msr could help: https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg04221.html
  10. Blue screen with 5.1

    yes, you can change it after install. It just adds or removes some hyperv cpu flags on the qemu command line.
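
    Roughly, the knob and its effect might look like this (the vmid and the exact flag set are assumptions, not the precise flags PVE emits):

      # change the guest OS type after install (100 is a placeholder vmid)
      qm set 100 --ostype win8
      # with a modern Windows ostype, hyperv enlightenments end up on the
      # qemu command line, along the lines of:
      #   -cpu kvm64,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_relaxed
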
  11. Blue screen with 5.1

    @cybermcm: does it also crash if you choose win2003 as the os version (for your win10 guest)? This disables the hyperv features that qemu supports. I'd like to know if it's related to hyperv or not.
  12. CEPH performance

    the problem with dd as a benchmark is that it is like iodepth=1 and sequential, so you'll be limited by latency (network + cpu frequency). with 18 ssd osds, replication x3, big 2x12-core 3.1ghz cpus, I'm able to reach around 700k iops randread 4K, and 150-200k randwrite 4K. (fio, iodepth=64...
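
    For reference, a fio run along those lines might look like this (the target device and runtime are assumptions, not the poster's exact job):

      # 4K random read at queue depth 64 against a test block device
      fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
          --rw=randread --bs=4k --iodepth=64 --runtime=60 --time_based
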
  13. CEPH performance

    try adding the settings in your ceph.conf (the exact list was cut off in this snippet); it should improve performance. There is also a bug in current luminous with debug ms, and it'll be set to 0/0 by default in the next ceph release. also for rados bench, try "-t 64" to increase the number of threads (16 by default)
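
    The option list didn't survive the snippet; given the debug ms remark, a plausible reconstruction (an assumption, not the poster's actual list) plus the rados bench tip:

      # /etc/ceph/ceph.conf, [global] section
      debug ms = 0/0

      # rados bench with 64 threads instead of the default 16
      # ("rbd" is a placeholder pool name)
      rados bench -p rbd 60 write -t 64
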
  14. Limit Ceph Luminous RAM usage

    There is a bug in bluestore currently, in memory accounting. It'll be fixed in 12.2.2: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html
  15. Blue screen with 5.1

    don't know if it's related, but new qemu patches have been sent to qemu-devel, with fixes for hyperv and msr: https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg04221.html
  16. [SOLVED] PVE 5 Live migration downtime degradation (2-4 sec)

    --with-local-disks on 4.0 was buggy, that's why it was fast (you could have had block corruption). If this is slow, it's because it needs to flush pending writes before finishing the migration. Depending on your storage speed, it can take some time. (I have done tests with ssd and a 10gbe network, it takes...
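
    For context, the command under discussion is presumably along these lines (the vmid and target node are placeholders):

      # live migration that also moves the VM's local disks to the target node
      qm migrate 100 node2 --online --with-local-disks
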
  17. Hotplug CPU

    hotplugging is done on vcpus. you define cores*sockets, which is the topology and the maximum number of vcpus you can hotplug. then you should be able to hotplug/unplug vcpus. the 80-hotplug-mem.rules file needs to be in the guest vm.
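
    A sketch of both sides (the vmid is a placeholder, and the udev rule contents follow the usual Proxmox wiki hotplug example rather than anything quoted in this post):

      # host side: topology of 2*4 = 8 possible vcpus, 4 plugged at boot
      qm set 100 --sockets 2 --cores 4 --vcpus 4

      # guest side, e.g. /lib/udev/rules.d/80-hotplug-mem.rules:
      # bring hotplugged memory (and cpus) online automatically
      SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"
      SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
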
  18. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    Hi, I'm running jewel filestore and luminous bluestore, with 3 nodes, each: 6 ssd osds, 2x intel 3ghz 12-core, 64g ram, 2x10gb ethernet. I don't see any regression. I'm around 600000 iops randread 4k, 150000 iops randwrite 4k.
  19. Storage replication question.

    you can configure your cluster to use unicast; it should work for a small cluster (depends on latency).
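
    A minimal sketch of the unicast switch, using the standard corosync 2.x option (editing /etc/pve/corosync.conf and bumping config_version is assumed here, not quoted from the post):

      # totem section of corosync.conf
      totem {
        version: 2
        transport: udpu   # unicast udp instead of the multicast default
      }
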