Recent content by aderumier

  1. VM live migration with local storage

    Before adding it to the GUI, I think we need to deny it when iothread is used with multiple disks, as it's buggy. We also need to verify how it works with ZFS local replication (maybe simply forbid it).
  2. VM with 1TB memory +

    Can you try without "numa: 1"?
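    (A rough sketch of how to check and drop that option, assuming a hypothetical VMID of 100:)

        # show whether the flag is set, then remove it for the test
        qm config 100 | grep numa
        qm set 100 --delete numa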
  3. Set MTU on Guest

    Does it work for you? (I haven't tested it.)
  4. [SOLVED] OSD high cpu usage

    @GPLExpert, can you give me the output of "cat /proc/cpuinfo" from a working node, and from node2?
  5. Live migration with local storage gives an error

    OK, I have tested with iothread and I have problems: the migration crashes or the qemu process crashes, even with only 1 disk. So it seems that qemu is currently buggy for drive-mirror + live migration at the same time when iothread is enabled. https://bugzilla.redhat.com/show_bug.cgi?id=1539530
  6. Using Proxmox just for Ceph management

    Yes, it should work, though it may be a little bit overkill. You can give https://www.openattic.org/ a try, or wait for the next Ceph release (Mimic), which should have an integrated dashboard with management (create/delete/update).
  7. can't add vlan tag x to interface bond0?

    You don't need to define VLAN interfaces in /etc/network/interfaces. If you define the VLAN tag in the VM configuration, Proxmox will create the bond0.[vlan] interface and a vmbr0v[vlan] bridge for you.
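    A rough sketch of what that looks like, assuming a hypothetical VMID 100, MAC address and VLAN tag 50:

        # /etc/pve/qemu-server/100.conf - the tag goes on the VM NIC
        net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=50

        # on VM start, Proxmox creates the needed interfaces itself:
        #   bond0.50   (VLAN sub-interface on the bond)
        #   vmbr0v50   (per-VLAN bridge the VM tap device is plugged into)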
  8. Optimizing proxmox

    They are running fine (around the same performance as the 3610 or 3710), but take care about the 1 DWPD endurance rating.
  9. VXLAN

    Hi, I'm currently working on an implementation of VXLAN + BGP EVPN. This should give us something like VMware NSX (with an anycast gateway on the Proxmox host). It will work with the Linux bridge. I'll try to send patches next month.
  10. transform virtio-blk to scsi-virtio

    It's only supported since Windows 8/2012 (maybe 8.1/2012r2). (You also have a GUI, "optimize disk".)
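    For what it's worth, a rough sketch of the config change itself (hypothetical VMID 100 and storage name; make sure the virtio-scsi driver is already installed in the Windows guest, and do the switch with the VM powered off):

        # /etc/pve/qemu-server/100.conf - before (virtio-blk)
        virtio0: local-lvm:vm-100-disk-1,size=32G

        # after (virtio-scsi): rename the disk entry and set the controller
        scsihw: virtio-scsi-pci
        scsi0: local-lvm:vm-100-disk-1,size=32G
        bootdisk: scsi0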
  11. Ceph - Using jemalloc

    Note that since Luminous + BlueStore, jemalloc doesn't work well (because of RocksDB). The Ceph devs said that tcmalloc is fine now, since they have switched to the async messenger.
  12. [SOLVED] need clarification on cache settings for rbd-based storage

    If you are concerned about data loss, use cache=none. The rbd cache is 32 MB (it can be tuned), so even with fsync you can lose up to 32 MB (but you won't have filesystem corruption).
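    For reference, a minimal sketch of the client-side ceph.conf section where this is tuned (the values shown are just the defaults):

        [client]
        rbd cache = true
        rbd cache size = 33554432        # 32 MB cache, raise or lower as needed
        rbd cache max dirty = 25165824   # dirty bytes allowed before writeback kicks in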
  13. [SOLVED] need clarification on cache settings for rbd-based storage

    @David: have you tried with a bigger file size? (As it's random write, with a small file you have more chance of having 2 blocks near each other, so writeback is useful in this case.)
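    Something along these lines could be used for the re-test (a sketch, run inside the guest; file name and sizes are arbitrary):

        # small file: 4k random writes land close together, the cache can merge them
        fio --name=small --filename=/tmp/fio.test --rw=randwrite --bs=4k --size=1G \
            --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based

        # big file: writes are spread out, merging helps much less
        fio --name=big --filename=/tmp/fio.test --rw=randwrite --bs=4k --size=20G \
            --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based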
  14. [SOLVED] need clarification on cache settings for rbd-based storage

    If you enable cache=writeback on the VM, it'll enable rbd_cache=true. Ceph has a feature, enabled by default: "rbd cache writethrough until flush = true". That means it waits to receive a first fsync before really enabling writeback, so you are safe to enable writeback. Writeback is helping for...
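    A minimal sketch of both sides of that, for reference (VMID and storage name are hypothetical; the Ceph options shown are already the defaults):

        # VM config: writeback cache on the disk
        scsi0: ceph-rbd:vm-100-disk-1,cache=writeback

        # ceph.conf, [client] section
        rbd cache = true
        rbd cache writethrough until flush = true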
  15. Ceph Luminous with Bluestore - slow VM read

    The problem is coming from network latency + Ceph latency. If you copy 1 file sequentially and with small blocks, it's iodepth=1 (same with the dd command, for example). For each block you'll pay your network latency (0.1 ms, for example), so you'll be able to do 10000 iops. If you do it with 4k...
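    Roughly, the arithmetic behind that (a back-of-the-envelope sketch, assuming 0.1 ms total latency per request):

        iodepth=1, 0.1 ms per request:  1 / 0.0001 s = 10000 iops
        with 4k blocks:                 10000 x 4 KB = ~40 MB/s, whatever the disks can do
        at 1 ms per request:            1 / 0.001 s  =  1000 iops -> ~4 MB/s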
