Search results

  1. Proxmox Upgrade

    Looking to perform the upgrade soon, any insight?
  2. Proxmox Upgrade

    Currently looking to upgrade from 5 to 6 with a hyper-converged Ceph environment. The pve5to6 script raises a warning about mon_host being bound to IP:port rather than to the IP alone; however, the Ceph upgrade instructions say to change this after upgrading to Ceph 14 (after upgrading...
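
    The mon_host change that warning refers to looks roughly like the following; a minimal sketch assuming the default monitor port 6789 and placeholder addresses, not the poster's actual configuration:

      # Hypothetical /etc/pve/ceph.conf excerpt.
      # Form pve5to6 warns about (port pinned per monitor):
      #   mon_host = 10.0.0.1:6789 10.0.0.2:6789 10.0.0.3:6789
      # Port-less form expected once on Ceph 14 (Nautilus), letting the
      # monitors advertise both the v1 (6789) and v2 (3300) addresses:
      #   mon_host = 10.0.0.1 10.0.0.2 10.0.0.3
      grep mon_host /etc/pve/ceph.conf   # inspect the current value
      ceph mon enable-msgr2              # only once every mon runs Nautilus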
  3. Ceph Slow Requests

    At all times, the SWAP usage seems to have been a result of the swappiness setting (at 40% it'll start using SWAP?). The IO delay is always around 10% on each of the 4 hosts, however. Any recommendations (e.g. which logs, debug logs, etc.) to try to get to the bottom of what's causing this would be...
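
    As an aside, vm.swappiness weights how aggressively the kernel swaps rather than acting as a hard percentage threshold; a quick sketch for inspecting and lowering it (the value 10 is illustrative, not a recommendation from this thread):

      sysctl vm.swappiness          # show the current value (commonly 60)
      sysctl -w vm.swappiness=10    # lower it for the running system
      # Persist the setting across reboots:
      echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf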
  4. Ceph Slow Requests

    The IO delay on all nodes seems to sit around 10% too. The SWAP usage is also consistent (i.e. it's not spiking to 3GB; it's consistently sitting around 3GB) even though RAM usage is only 40-50%.
  5. Ceph Slow Requests

    Checked all logs on all nodes and there doesn't seem to be anything indicating which OSDs were the cause. It may be unrelated, but I've noticed that the SWAP usage on the hosts was pretty high (3GB+) although the RAM usage is only at 40-50%. The nodes have 80GB RAM and 1x Xeon E5-2620 v4's...
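
    To see which processes account for the ~3GB of SWAP, per-process VmSwap can be summed from /proc; a sketch assuming a kernel new enough to expose VmSwap in /proc/<pid>/status:

      # Print "name swapped-kB" per process, largest consumers first:
      for f in /proc/[0-9]*/status; do
        awk '/^Name|^VmSwap/ {printf "%s ", $2} END {print ""}' "$f"
      done | sort -k2 -nr | head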
  6. Ceph Slow Requests

    The logs only show what I've said, really: the main log (ceph.log) shows 'Health check failed: 2 slow requests are blocked > 32 sec (REQUEST_SLOW)', then 1 slow request, then 3, then 4, then the health check cleared after around 30 seconds and it's back to healthy. ceph-mon.x.log shows at the...
  7. Ceph Slow Requests

    I've currently got a 4-node cluster running Ceph on Proxmox 5.1 and noticed recently that I'm getting a lot of blocked requests due to REQUEST_SLOW. For example:

      2019-02-12 11:47:33 cluster [WRN] Health check failed: 6 slow requests are blocked > 32 sec (REQUEST_SLOW)
      2019-02-12 11:47:47 cluster...
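
    On Luminous, a few built-in commands can help pin REQUEST_SLOW down to specific OSDs; a sketch in which osd.3 is a placeholder id, and dump_blocked_ops must be run on the node hosting that OSD:

      ceph health detail                  # names the implicated OSDs, if any
      ceph osd perf                       # per-OSD commit/apply latency
      ceph daemon osd.3 dump_blocked_ops  # detail on the currently blocked ops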
  8. Migration from VMware

    open-vm-tools was installed and removed prior to the migration. I've just been able to boot the VM using a SCSI disk by running 'dracut --regenerate-all --force' then 'grub2-mkconfig -o /boot/grub2/grub.cfg'. This has so far worked on one CentOS VM.
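
    For anyone following along, the fix quoted above amounts to running these two commands inside the migrated CentOS guest:

      dracut --regenerate-all --force          # rebuild the initramfs so the SCSI/virtio drivers are included
      grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate the GRUB2 config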
  9. Migration from VMware

    I'm currently starting the migration of several VMs from ESX 6.0 to Proxmox 4.4 and seem to be experiencing some issues. I've tried several methods to move the data, but all show the same issue: qemu-img converting the disk to RAW from VMDK; qm importdisk to a PVE 5.0 host then restoring to a...
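
    A sketch of the two methods mentioned; the VM id 100, the storage name local-lvm, and the file names are placeholders:

      qemu-img convert -p -f vmdk -O raw vm-disk.vmdk vm-disk.raw   # VMDK -> RAW
      qm importdisk 100 vm-disk.raw local-lvm                       # import as an unused disk on VM 100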