Search results

  1. > 3000 mSec Ping and packet drops with VirtIO under load

    I can't reproduce the problem from this thread, so I'm working blind at the moment. Note that my pve-qemu-kvm package 2.9.1-1 is not the same as the 2.9.1-1 from the Proxmox repo. (I have added the patch but not changed the version number.)
  2. > 3000 mSec Ping and packet drops with VirtIO under load

    I mean, transparent hugepages can't impact boot. (Maybe Windows doesn't like the switch between IDE -> SCSI, I really don't know.) Transparent hugepages could only impact performance. BTW, I have built the latest pve-qemu-kvm with the patch for @hansm's bug. (which is virtio related, so maybe it could improve...
  3. VM crash with memory hotplug

    @wolfgang, you are the master! :) @hansm: Thanks for helping to debug this!
  4. > 3000 mSec Ping and packet drops with VirtIO under load

    It's 100% unrelated. Note that if you change a disk from IDE -> SCSI or SCSI -> IDE, you also need to change the boot drive accordingly in the VM options.
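    For illustration, a minimal sketch of changing the boot drive from the CLI; the VMID 100 and the target disk scsi0 are hypothetical, and the same change can be made in the GUI under the VM's Options:

    ```bash
    # hypothetical VMID 100: make the (now) SCSI disk the boot drive after an IDE -> SCSI change
    qm set 100 --bootdisk scsi0
    ```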
  5. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    Good to know! Is the difference really big?
  6. > 3000 mSec Ping and packet drops with VirtIO under load

    Hi, could you try to disable transparent hugepages on the host? It was disabled in Proxmox 4 before January 2017, 4.4.x kernel (I don't remember exactly which version), and is now set to madvise by default: echo never > /sys/kernel/mm/transparent_hugepage/enabled echo never >...
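    A minimal sketch of the commands; the second echo is truncated in the original post, and the assumption that it targets the defrag setting is mine:

    ```bash
    # disable transparent hugepages until the next reboot
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    # assumption: the truncated second command targets the defrag knob
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
    ```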
  7. VM crash with memory hotplug

    Does it also crash if you start the VM with more than 4 GB?
  8. VM crash with memory hotplug

    Do you have the same performance problem without "--enable-jemalloc"? We enabled it mainly for ceph/librbd performance in QEMU 2.4; I just wonder if this new commit could change the behaviour. This bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1251353 talks about jemalloc, tcmalloc before this...
  9. > 3000 mSec Ping and packet drops with VirtIO under load

    The virtio-scsi controller (and the other SCSI controllers) only applies to SCSI disks.
  10. [SOLVED] Large backup file-size for small VMs - why & workarounds?

    Note that in recent kernels (>= 4.7), the "discard" mount option is now async, so you can enable it without a write penalty (and no fstrim cron job is needed anymore).
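    For illustration, a minimal sketch of enabling online discard inside a guest; the UUID and filesystem are hypothetical placeholders:

    ```bash
    # /etc/fstab entry (hypothetical UUID) with the "discard" mount option:
    # UUID=0a1b2c3d-...  /  ext4  defaults,discard  0  1

    # or enable it on a running system without editing fstab
    mount -o remount,discard /
    ```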
  11. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    You can try to boot the kernel 4.4 from the old Proxmox; it should work (just to compare). Which network card do you use?
  12. Proxmox 5 & Ceph Luminous/Bluestore super slow!?

    Hi, do you use the same hardware before and after the Bluestore conversion? (Was the NVMe used for the filestore journal?)
  13. Proxmox 5 and ceph luminous: can't create monitor

    It's a RocksDB bug with SSE 4.2. The Ceph devs have made a pull request on the RocksDB GitHub: https://github.com/facebook/rocksdb/pull/2807
  14. what are the most stable disk settings

    For Ceph, cache=writeback can improve sequential writes (but increases read latency). You can use discard with Ceph and ZFS, no problem.
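    For illustration, a minimal sketch of setting these options on an existing disk; the VMID, storage name and volume name are hypothetical, and the same settings are available in the GUI hard disk options:

    ```bash
    # hypothetical VMID 100 and Ceph volume: enable writeback cache and discard on scsi0
    qm set 100 --scsi0 ceph-pool:vm-100-disk-1,cache=writeback,discard=on
    ```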
  15. balloon service doesn't start in Windows 2008 server

    Did you install the balloon driver before trying to start the service?
  16. Largest cluster *without* HA

    It's impossible to do it in one cluster. I think corosync has a hard limit in the code, around 100 nodes (and this would need a lot of corosync tuning). I have never tested more than 20 nodes. I think the Proxmox plan was to be able, in the future, to manage multiple clusters from one interface. But...
  17. Procedure to import vmware OVA to Proxmox 5.0-23 with ZFS VM store

    Steps 4->6: use the new "qm importdisk" from Proxmox 5 ;) It'll directly read the vmdk and write it to the destination storage (ZFS, Ceph, ...), without an intermediate step.
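    For illustration, a minimal sketch of the command; the VMID, extracted vmdk path and target storage name are hypothetical:

    ```bash
    # hypothetical VMID 100, vmdk extracted from the OVA, and a ZFS storage named local-zfs
    qm importdisk 100 /tmp/extracted-ova/myvm-disk1.vmdk local-zfs
    ```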
  18. > 3000 mSec Ping and packet drops with VirtIO under load

    If it's really a Proxmox upgrade bug, I think it can only be one of two things: the QEMU version, or the KVM version (host kernel module). Maybe you can try to install the Proxmox 4 pve-qemu-kvm and pve-kernel deb packages on your Proxmox 5 installation. Try first with QEMU: ----------------------------- wget...
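    A minimal sketch of installing a single downgraded package; the package URL and filename are hypothetical placeholders, since the wget target in the original post is truncated above:

    ```bash
    # hypothetical placeholder URL for the Proxmox 4 pve-qemu-kvm .deb (actual URL truncated in the post)
    PKG_URL="http://download.proxmox.com/.../pve-qemu-kvm_<version>_amd64.deb"
    wget "$PKG_URL"
    dpkg -i pve-qemu-kvm_*_amd64.deb
    ```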
  19. [SOLVED] ceph mgr and mon issue after upgrade to Luminous

    systemctl (status|stop|start) ceph-mon@0 (replace 0 with the ID of the monitor; same for ceph-osd, ...)
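    For illustration, a few concrete invocations; the monitor ID 0 and OSD ID 3 are hypothetical:

    ```bash
    # check and restart a specific monitor (unit instance = monitor ID)
    systemctl status ceph-mon@0
    systemctl restart ceph-mon@0

    # same pattern for an OSD (unit instance = OSD ID)
    systemctl status ceph-osd@3
    ```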
