Search results

  1. [SOLVED] OSD high cpu usage

    @GPLExpert can you give me the result of "cat /proc/cpuinfo" from a working node, and from node2?
  2. Live migration with local storage gives an error

    Ok, I have tested with iothread, and I have problems: the migration crashes or the qemu process crashes, even with only 1 disk. So it seems that qemu is currently buggy when doing drive-mirror + live migration at the same time with iothread enabled. https://bugzilla.redhat.com/show_bug.cgi?id=1539530
  3. Using Proxmox just for Ceph management

    Yes, it should work; maybe a little bit overkill. You can give https://www.openattic.org/ a try, or wait for the next ceph release (mimic), which should have an integrated dashboard with management (create/delete/update).
  4. can't add vlan tag x to interface bond0?

    You don't need to define vlan interfaces in /etc/network/interfaces. If you define the vlan tag in the vm configuration, proxmox will create the bond0.[vlan] interface and a vmbr0v[vlan] bridge for you (see the interfaces sketch after this list).
  5. Optimizing proxmox

    They are running fine (around the same perf as the 3610 or 3710), but be careful about the 1 DWPD endurance rating.
  6. VXLAN

    Hi, I'm currently working on an implementation of vxlan + bgp evpn. This should give us something like vmware nsx (with anycast gateway on the proxmox host). It will work with linux bridge. I'll try to send patches next month.
  7. transform virtio-blk to scsi-virtio

    It's only supported since win8/2012 (maybe 8.1/2012r2). (You also have a gui option, "optimize disk".)
  8. Ceph - Using jemalloc

    Note that since luminous + bluestore, jemalloc doesn't work well (because of rocksdb). The ceph devs said that tcmalloc is fine now, since they have switched to the async messenger.
  9. [SOLVED] need clarification on cache settings for rbd-based storage

    If you are concerned about data loss, use cache=none. The rbd_cache is 32mb (it can be tuned), so even with fsync you can lose up to 32mb (but you won't have filesystem corruption).
  10. [SOLVED] need clarification on cache settings for rbd-based storage

    @David: have you tried with a bigger file size? (As it's random write, with a small file you have more chance of having 2 blocks near each other, so writeback is useful in this case.)
  11. [SOLVED] need clarification on cache settings for rbd-based storage

    If you enable cache=writeback on the vm, it'll enable rbd_cache=true. Ceph has a feature, enabled by default, rbd cache writethrough until flush = true. That means it waits to receive a first fsync before really enabling writeback, so you are safe to enable writeback (see the ceph.conf sketch after this list). Writeback is helping for...
  12. Ceph Luminous with Bluestore - slow VM read

    The problem is coming from network latency + ceph latency. If you copy 1 file sequentially with small blocks, it's iodepth=1 (same with the dd command, for example). Each block pays your network latency, so at 0.1ms per request you'll be able to do at most 1 / 0.0001s = 10000 iops. If you do it with 4k...
  13. Ceph Luminous with Bluestore - slow VM read

    How do you bench it? rados bench uses multiple threads. Try to test with fio, with iodepth=32 for example (see the fio sketch after this list).
  14. apt-get upgrade broke my VE-3.2

    If you still have access through ssh: # apt-get install proxmox-ve should fix it.
  15. ceph, no thin provisioning with vm clone

    Yes, it's still missing support in librbd: https://tracker.ceph.com/issues/20070
  16. VZDump slow on ceph images, RBD export fast

    Hi, you can also use this external script to back up with the rbd snapshot and rbd export-diff features: https://github.com/EnterpriseVE/eve4pve-barc It works very well and is a lot faster (see the rbd commands after this list).
  17. Ceph integration - clock skew

    Hi, I'm using chrony too now: no more clock skew, and it's also able to manage leap seconds. https://chrony.tuxfamily.org/comparison.html
  18. qm live migrate storage lvm

    Hi, I just saw your message on https://www.frsag.org/pipermail/frsag/2018-January/009207.html. I sent a patch to fix it recently, and it should be fixed with the latest proxmox updates :) https://git.proxmox.com/?p=qemu-server.git;a=commit;h=87955688fda3f11440b7bc292e22409d22d8112f
  19. Meltdown and Spectre Linux Kernel fixes

    Sorry, but that just means this specific poc only works on this "outdated" 4.9 kernel. That doesn't mean it's impossible to do the same on the latest kernels. (Yes, it's very difficult to exploit, but not impossible.)
  20. Live migration with local directories

    I have responded in the bugzilla; can you test removing the socat timeout?
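
Interfaces sketch for item 4, assuming an 802.3ad bond on eno1/eno2 and vlan tag 100 set on the VM's nic in the vm configuration; interface names and the address are illustrative, not from the thread. /etc/network/interfaces only needs the plain bond and bridge, and proxmox creates bond0.100 and vmbr0v100 on the fly when the VM starts:

    auto bond0
    iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0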
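ceph.conf sketch for items 9 and 11, showing the client-side librbd options the posts refer to; the values shown are the defaults, and placing them in a [client] section is an assumption:

    [client]
    rbd cache = true
    # up to this much dirty data can sit in the cache, hence the "32mb" figure
    rbd cache size = 33554432
    # stay in writethrough mode until the guest sends its first flush,
    # which is why enabling cache=writeback on the vm is safe
    rbd cache writethrough until flush = true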
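fio sketch for item 13, a single-client random-read run with iodepth=32 inside the guest; the device path, block size, and runtime are illustrative:

    fio --name=bench --filename=/dev/vdb --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based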
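Sketch for item 16 of the snapshot + export-diff approach the linked script builds on; the pool, image, and snapshot names are illustrative:

    # take today's snapshot of the image
    rbd snap create rbd/vm-100-disk-1@today
    # export only the blocks changed since yesterday's snapshot
    rbd export-diff --from-snap yesterday rbd/vm-100-disk-1@today vm-100-disk-1-today.diff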
