Search results

  1. Inject commands inside VM Guests

    See qm manual, specifically "qm sendkey" https://pve.proxmox.com/wiki/Manual:_qm Maybe someone has a better idea?
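    For reference, a minimal sketch of the "qm sendkey" approach, assuming VM ID 100; keys are sent one keystroke at a time and key names (such as "ret" for Enter) follow QEMU's sendkey naming:

      # type "ls" followed by Enter inside guest 100, one keystroke per call
      qm sendkey 100 l
      qm sendkey 100 s
      qm sendkey 100 ret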
  2. Understanding Proxmox 3.4 EOL and 4.0

    I think you should provide kernel/kvm updates during wheezy LTS and set a solid date, at least one year in the future, when you will discontinue providing those updates. Ideally that date would coincide with wheezy LTS EOL. Currently, LXC in Proxmox 4.0 does not provide the level of isolation...
  3. When will 3.x become EOL?

    At a minimum I think Proxmox should consider providing kvm/kernel updates for 3.x until wheezy LTS ends or OpenVZ stops providing kernel updates.
  4. Evaluation Regarding Proxmox, Storage Models, and Deployment

    Using LVM or raw has less overhead than qcow. I have a couple of nodes with multi-TB filesystems storing raw files. When such a filesystem needs an fsck it can take a long time; with LVM you can avoid that. I think q-wolf is right: your storage is inadequate for your needs. I've moved all of our...
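    As an illustration of the LVM point, a hypothetical /etc/pve/storage.cfg entry (the storage ID "vm-lvm" and volume group "vg_data" are made up) that stores each disk image as a logical volume instead of a file, so there is no large filesystem to fsck:

      lvm: vm-lvm
              vgname vg_data
              content images
              shared 0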
  5. When will 3.x become EOL?

    Debian wheezy will be supported under LTS from Feb 2016 to May 2018. https://wiki.debian.org/LTS Will Proxmox 3.x remain supported until May 2018? Myself, I'm not ready to jump into DRBD 9. Others are not ready to leave OpenVZ. We need to know how much time we have so we can prepare for the...
  6. Moving to LXC is a mistake!

    The reality of the situation is that OpenVZ does not run on the newer kernels needed for LXC. Not only that, OpenVZ will NOT be ported to newer kernels, a decision of the OpenVZ developers, not Proxmox. OpenVZ is a dead project, only maintained in its current state, locked to old kernels with old...
  7. CEPH read performance

    Already did that. Yes, 4.0 is indeed slightly faster, I'll be moving to it soon. Qemu multithread IO will greatly improve things. I recall reading on pve-devel that multiple threads and backup do not work, that's why we are limited to one thread. Is it possible to enable more threads to...
  8. CEPH read performance

    rados uses 16 threads by default. KVM provides a single IO thread. So I ran rados with one thread; its performance is nearly identical to what I see inside the VM. It, too, improves drastically if the data is already in the cache on the CEPH server. Read first time = slow. Re-read = really fast...
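    A sketch of that single-thread comparison, assuming the "rbd" pool and a 60-second run (rados bench uses 16 concurrent operations by default; -t 1 approximates KVM's single IO thread):

      # write some benchmark objects first so there is data to read back
      rados bench -p rbd 60 write --no-cleanup
      # sequential read benchmark limited to one concurrent operation
      rados bench -p rbd 60 seq -t 1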
  9. CEPH read performance

    I think it's set to deadline; I will try different schedulers and see if that helps. The monitors are on the same nodes as the OSDs. While running the rados read benchmark I could see that the total read IO is really high on the CEPH nodes. When reading from inside the VM each CEPH node reads between...
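    A sketch of checking and switching the scheduler on an OSD disk, assuming the disk is /dev/sdb (the change does not persist across reboots):

      cat /sys/block/sdb/queue/scheduler         # active scheduler is shown in brackets
      echo noop > /sys/block/sdb/queue/scheduler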
  10. CEPH read performance

    This does help in 3.x and 4.x, but you are right, it's not a large difference; it seems like a 25-40% improvement. Here is something I discovered that might help pinpoint the issue: I start a VM on Proxmox 3.x using iothreads, and in it I read some large file using dd, outputting to /dev/null. This...
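    The in-guest read test described here is just a streaming read with dd, e.g. (the file path is arbitrary):

      dd if=/path/to/largefile of=/dev/null bs=1M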
  11. CEPH read performance

    I've not looked into erasure coding, would like to get this test cluster working well first. HA storage is the goal. We don't have what I would consider big data sets.
  12. CEPH read performance

    You both bring up good points. I agree, this is not a disk problem; the rados benchmark proves that. The MTU is maxed out and connected mode is enabled. The switches have a built-in subnet manager. On this CEPH cluster one port is used for the public network, the other for the private one. The Proxmox...
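    For completeness, a sketch of how the "MTU maxed out, connected mode" claim can be verified on an IPoIB interface, assuming it is named ib0 (connected mode allows an MTU of up to 65520):

      cat /sys/class/net/ib0/mode    # should print "connected"
      ip link show ib0               # check the reported mtu value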
  13. CEPH read performance

    Latency is the largest problem CEPH has. Back to my original issue: the 70MB/sec I got was due to the cache on the CEPH server nodes. I did some more testing where I dropped the cache on the CEPH servers, and performance drops down to 40MB/sec. Do you think that maybe the issue is latency introduced by slower CPUs?
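    The cache-drop step mentioned here is typically done like this on each CEPH node before re-running the read test (echoing 3 drops the page cache plus dentries and inodes):

      sync
      echo 3 > /proc/sys/vm/drop_caches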
  14. CEPH read performance

    We have 10G Infiniband; each node has two ports for redundancy. I would likely add another dual-port card so I have redundant public and private networks. Currently we only use DRBD, so that's sufficient; I'm not so sure it would be with CEPH and SSDs. We have 20 Proxmox nodes, and I doubt I can...
  15. CEPH read performance

    The write speeds are greatly improved with the SSD journals, but even without them the write speeds have always been acceptable. I have two SSDs per CEPH node; they are a mirror for the OS and have partitions for the journals. Not all journals have been moved to the SSDs yet; a couple of disks are stuck with...
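    For the remaining journals, a rough sketch of the usual filestore procedure, assuming OSD 0 and that /dev/sdg1 is the SSD journal partition (verify against the CEPH documentation before touching live data):

      service ceph stop osd.0
      ceph-osd -i 0 --flush-journal
      ln -sf /dev/sdg1 /var/lib/ceph/osd/ceph-0/journal
      ceph-osd -i 0 --mkjournal
      service ceph start osd.0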
  16. CEPH read performance

    This whole cluster was built mostly from decommissioned production hardware, so it's older. The three CEPH nodes are:
      model name : AMD Phenom(tm) II X6 1100T Processor
      stepping   : 0
      cpu MHz    : 3314.825
    My lone Proxmox 4.x client is the same as the CEPH nodes. That's where things run best with...
  17. CEPH read performance

    With udo's udev rules in a Debian 7 VM and this disk configuration:
      virtio0: ceph_rbd:vm-101-disk-1,cache=writeback,iothread=on,size=500G
    I get this:
      dd if=/dev/vda bs=1M
      3384803328 bytes (3.4 GB) copied, 46.031 s, 73.5 MB/s
    Much better, but it still feels like it could be better. Can iothread...
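    The exact rules aren't quoted in this result, but the idea is a udev rule inside the guest that raises read-ahead on the virtio disks; a hypothetical approximation (the file name and the 4096 KB value are made up):

      # /etc/udev/rules.d/99-virtio-readahead.rules
      SUBSYSTEM=="block", KERNEL=="vd[a-z]", ACTION=="add|change", ATTR{queue/read_ahead_kb}="4096"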
  18. CEPH read performance

    rados bench -p rbd 450 write --no-cleanup
      Total time run:         451.708554
      Total writes made:      12994
      Write size:             4194304
      Bandwidth (MB/sec):     115.065
      Stddev Bandwidth:       33.7199
      Max bandwidth (MB/sec): 196
      Min bandwidth (MB/sec): 0
      Average Latency:        0.555435
      Stddev Latency:         0.514375
      Max latency...
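    Since the objects were written with --no-cleanup, the matching read benchmark can be run afterwards against the same pool, e.g.:

      rados bench -p rbd 450 seq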
  19. CEPH read performance

    I did disable scrubbing and deep scrubbing; it did not help. I've tested with cache=default/writeback/write-through; all have poor read speed. I've increased read-ahead in the VMs. I have Debian 7 and Ubuntu 14.04 guests. Do you have any specific suggestions of settings I can try? Maybe my brain fell...
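    A sketch of the in-guest read-ahead change, assuming the virtio disk is /dev/vda (8192 sectors = 4 MiB is only an example value):

      blockdev --getra /dev/vda         # current read-ahead in 512-byte sectors
      blockdev --setra 8192 /dev/vda    # raise it; not persistent across reboots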
  20. CEPH read performance

    I agree; got any other suggestions of tests to perform? What does your ceph.conf look like on your home cluster? The slowest disk was 115MB/sec, so the disks are not the problem:
      root@vm4:~# dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/deleteme bs=1G count=10 oflag=direct
      10+0 records in
      10+0...
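    The write test above bypasses the page cache with oflag=direct; a matching raw read check on the same file would look something like:

      dd if=/var/lib/ceph/osd/ceph-0/deleteme of=/dev/null bs=1G count=10 iflag=direct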
