Search results

  1. Blue screen with 5.1

    @cybermcm: Does it also crash if you choose win2003 as the OS version (for your Win10 guest)? That disables the Hyper-V features that QEMU supports. I'd like to know whether it's related to Hyper-V or not.
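
    (For example, the OS type can be switched from the CLI like this; VM ID 100 is just a placeholder:)

      # setting ostype to w2k3 makes Proxmox stop passing the Hyper-V enlightenments to QEMU
      qm set 100 --ostype w2k3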
  2. CEPH performance

    The problem with dd as a benchmark is that it behaves like iodepth=1 and sequential, so you'll be limited by latency (network + CPU frequency). With 18 SSD OSDs, replication x3, and big 2x12-core 3.1GHz CPUs, I'm able to reach around 700k IOPS 4K randread and 150-200k 4K randwrite. (fio, iodepth=64...
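
    (A fio command line roughly matching that kind of test; the file path and size are placeholders:)

      # 4K random read at iodepth 64 with direct I/O; adjust filename/size to your setup
      fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=64 --size=10G --runtime=60 --filename=/mnt/test/fio.dat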
  3. CEPH performance

    Try adding this to your ceph.conf; it should improve performance. There is also a bug in current Luminous with debug ms, and it'll be set to 0/0 by default in the next Ceph release. Also, for rados bench, try "-t 64" to increase the number of threads (16 by default).
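
    (For example, a rados bench run with 64 threads; "testpool" is a placeholder pool name:)

      # 60-second write benchmark with 64 concurrent operations instead of the default 16
      rados -p testpool bench 60 write -t 64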
  4. Limit Ceph Luminous RAM usage

    There is currently a bug in BlueStore's memory accounting. It'll be fixed in 12.2.2: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html
  5. Blue screen with 5.1

    I don't know if it's related, but new QEMU patches have been sent to qemu-devel with fixes for Hyper-V and MSRs: https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg04221.html
  6. [SOLVED] PVE 5 Live migration downtime degradation (2-4 sec)

    --with-local-disks on 4.0 was buggy, that's why it was fast (you could get block corruption). If this is slow, it's because it needs to flush pending writes before finishing the migration. Depending on your storage speed, it can take some time. (I have done tests with SSDs and a 10GbE network, it takes...
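
    (For reference, this is the kind of migration being discussed; VM ID 100 and target node "node2" are placeholders:)

      # online migration that also moves the local disks, flushing pending writes at the end
      qm migrate 100 node2 --online --with-local-disks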
  7. Hotplug CPU

    Hotplugging is done on vCPUs. You define cores*sockets, which is the topology and the maximum number of vCPUs you can hotplug. Then you should be able to hotplug/unplug vCPUs. The 80-hotplug-mem.rules file needs to be in the guest VM.
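
    (A minimal sketch with a hypothetical VM ID 100: the topology is the ceiling, vcpus is what is actually plugged in:)

      # cpu must be in the hotplug list for this to work
      qm set 100 --hotplug disk,network,usb,cpu
      # topology allows up to 2 x 4 = 8 vCPUs; start with 4 plugged in
      qm set 100 --sockets 2 --cores 4 --vcpus 4
      # later, hotplug two more (up to the cores*sockets limit)
      qm set 100 --vcpus 6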
  8. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    Hi, I'm running Jewel filestore and Luminous BlueStore with 3 nodes, each with: 6 SSD OSDs - 2x Intel 3GHz 12-core - 64GB RAM - 2x10Gb Ethernet. I don't see any regression. I'm at around 600,000 IOPS 4K randread and 150,000 IOPS 4K randwrite.
  9. Storage replication question.

    You can configure your cluster to use unicast; it should work for a small cluster (depends on latency).
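
    (With corosync 2.x this means the udpu transport; a sketch of the addition to the totem section of /etc/pve/corosync.conf, leaving the generated settings as they are:)

      totem {
        transport: udpu
      }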
  10. Recommended Max Ceph Disks / Nodes

    Yes, for low-iodepth / small-block workloads it's better to have high frequencies to speed things up. If you are only doing big-block workloads, it doesn't matter too much.
  11. [SOLVED] 'guest-fsfreeze-freeze' failed - got timeout

    Have you enabled the guest agent, and is the guest-agent service running in your VM?
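
    (Two quick checks, with a hypothetical VM ID 100:)

      # on the Proxmox host: enable the QEMU guest agent option for the VM
      qm set 100 --agent 1
      # inside a Linux guest: make sure the agent service is actually running
      systemctl status qemu-guest-agent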
  12. Intel DPDK Support for OpenVSwitch

    I'm currently looking at adding vhost-user support; it's a missing part too. After that, being able to add some kind of custom network plugin would be great (tap_plug/unplug). I'm also looking at adding support for the 6WIND Virtual Accelerator (commercial, based on DPDK, but working with OVS and Linux...
  13. Live Migration without a cluster?

    I sent some patches to the Proxmox mailing list some time ago to do this, but I never had time to clean them up (live migration + live storage migration across servers in different clusters, or without a cluster). I need to rebase them to get them working on the latest Proxmox git.
  14. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    Also add this to the ceph.conf on your clients: [global] debug asok = 0/0 debug auth = 0/0 debug buffer = 0/0 debug client = 0/0 debug context = 0/0 debug crush = 0/0 debug filer = 0/0 debug filestore = 0/0 debug finisher = 0/0 debug heartbeatmap = 0/0 debug journal = 0/0 debug journaler = 0/0...
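
    (To verify the settings took effect on a running daemon, the admin socket can be queried; osd.0 is a placeholder and the command has to run on the node hosting it:)

      ceph daemon osd.0 config show | grep debug_ms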
  15. Recommended Max Ceph Disks / Nodes

    See: http://docs.ceph.com/docs/jewel/start/hardware-recommendations/ The Ceph docs say something like: 2 OSDs per core (with hyperthreading), 2GB of RAM per OSD, and it's better to have 1 disk controller per 8 disks (avoid oversubscribing the dataplane). Then, if you need fast latency / a lot of small random IOPS, try...
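
    (Applying those rules of thumb to, say, a hypothetical 12-OSD node: roughly 6 cores + hyperthreading, about 24GB of RAM for the OSDs plus headroom for the OS, and 2 disk controllers.)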
  16. Best setup for 4xSSD RAID10

    If you don't need ZFS features like replication to another server, I'd go with hardware RAID10 (without cache) + lvm-thin for snapshots.
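
    (A rough sketch of the LVM-thin part on top of the RAID10 device; /dev/sdb and the names vg0/data/local-thin are placeholders:)

      pvcreate /dev/sdb
      vgcreate vg0 /dev/sdb
      # thin pool for VM disks and snapshots
      lvcreate -L 800G --thinpool data vg0
      # register it as a storage in Proxmox
      pvesm add lvmthin local-thin --vgname vg0 --thinpool data --content images,rootdir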
  17. Blue screen with 5.1

    What is your physical CPU model?
  18. Blue screen with 5.1

    Interesting. With core2 I found this note for CentOS: "Limited CPU support for Windows 10 and Windows Server 2016 guests. On a Red Hat Enterprise 6 host, Windows 10 and Windows Server 2016 guests can only be created when using the following CPU models: * the Intel Xeon E series * the Intel...
  19. Blue screen with 5.1

    From Microsoft support: https://support.microsoft.com/en-ph/help/2902739/stop-error-0x109-critical-structure-corruption-on-a-vmware-virtual-mac It seems to be related to virtual CPU flags. Maybe a regression in KVM, or a new flag being sent. What is your VM CPU model? kvm64? host? Something else?
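
    (The configured model can be checked quickly on the host; VM ID 100 is a placeholder. No "cpu:" line in the output means the default, kvm64.)

      qm config 100 | grep cpu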
