Search results

  1. HowTo: Upgrade Ceph Hammer to Jewel

    I am also experiencing this error and have not found a solution. Looking at ceph's code, I suspect Proxmox is trying to disable an rbd feature that is now a default in Jewel. Unfortunately rbd returns an error whenever something attempts to disable a feature that is already disabled. Specific... (see the rbd sketch after these results)
  2. qm start fails with timeout error, but vm is actually started

    I have not found a workaround yet - the issue still occurs for me also.
  3. qm start fails with timeout error, but vm is actually started

    Here's the additional info (sorry, it's been a while since I've visited the forums, forgot this is always needed). Good call on testing a non-glusterfs VM. A VM with storage on a local directory does NOT have the issue. I suspect the issue is with glusterfs 3.5 versus 3.4 (I recently upgraded...
  4. qm start fails with timeout error, but vm is actually started

    After taking the latest updates, whenever I start a vm either via the CLI or GUI, the task times out. However, the VM actually starts. This would just be a minor annoyance, but it appears to be preventing me from live migrating my VMs because the qm start on the target node is not reported as...
  5. Proxmox + ZFS + Gluster

    Everything I have is mostly consumer-grade hardware; this is just a cluster for my personal use at home. One node is a Core i7 920, 16GB of RAM, with three 3TB WD Reds. The other node is a Phenom 1090T, 16GB of RAM, with three 3TB WD Reds. They are interconnected with a 10GbE NIC. Each node...
  6. Proxmox + ZFS + Gluster

    I also use zfsonlinux and gluster and have encountered a few performance snags here and there. Have you tried setting xattr=sa? It would only apply to files written after the change, but it significantly improved my metadata performance. Here's more info... (see the ZFS sketch after these results)
  7. Poor fsyncs/second with mdadm raid

    I wanted to follow up with my quasi-resolution in case anyone else comes across this. I could never get mdadm raid 5 to behave with kvm and write at a decent speed. Outside kvm on the host the raid 5 gave great write speeds, but within a guest it was always poor. I have since switched to an mdadm...
  8. Poor fsyncs/second with mdadm raid

    vmanz, thank you for the suggestion. I tried going down to the 2.6.24 kernel, but unfortunately I'm still getting poor fsyncs/s. I also tried different I/O schedulers for each disk in the raid, but they made little difference. picard:~# pveversion -v pve-manager: 1.7-10...
  9. Poor fsyncs/second with mdadm raid

    Here are the results: picard:~# dd if=/dev/zero of=/mnt/test/blah bs=4k count=400000 conv=fdatasync 400000+0 records in 400000+0 records out 1638400000 bytes (1.6 GB) copied, 19.4839 s, 84.1 MB/s picard:~# time (dd if=/dev/zero of=/mnt/test/blah bs=4k count=400000; sync) 400000+0...
  10. Poor fsyncs/second with mdadm raid

    I'm noticing extremely slow write speeds within my VMs (virtio is on) and pveperf is showing poor fsyncs/second. All my VM logical volumes reside on an mdadm software raid 5 consisting of three disks. hdparm is showing good read speeds from each disk and from the raid as a whole. Using dd as a write... (see the pveperf sketch after these results)
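
The rbd behavior described in result 1 can be illustrated with a minimal shell sketch. It is not taken from the thread: the image name and the feature are hypothetical placeholders, and the idea is simply to check an image's feature list before disabling anything, since rbd errors out when asked to disable a feature that is already off.

    # Hedged sketch; "rbd/vm-100-disk-1" and FEATURE are placeholders, not from the thread.
    FEATURE=fast-diff
    if rbd info rbd/vm-100-disk-1 | grep -q "features:.*${FEATURE}"; then
        rbd feature disable rbd/vm-100-disk-1 "${FEATURE}"   # only disable if currently enabled
    else
        echo "${FEATURE} already disabled, skipping"
    fi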
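
The xattr=sa tuning suggested in result 6 comes down to two ZFS commands. A minimal sketch, assuming the Gluster brick lives on a dataset named tank/gluster (a placeholder): xattr=sa stores extended attributes in the file's dnode instead of in hidden xattr directories, and only files written after the change benefit.

    # "tank/gluster" is a hypothetical dataset name for the Gluster brick.
    zfs get xattr tank/gluster      # default is xattr=on (directory-based xattrs)
    zfs set xattr=sa tank/gluster   # store xattrs as system attributes in the dnode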

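The fsyncs/second figure referenced in result 10 comes from pveperf. A minimal sketch, with the mount point as a placeholder: pveperf benchmarks whatever filesystem the given path lives on and reports an FSYNCS/SECOND line alongside the read and seek numbers.

    # "/mnt/raid5" is a hypothetical mount point for the mdadm array.
    pveperf /mnt/raid5    # check the FSYNCS/SECOND line in the output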