Search results

  1. [SOLVED] Hypervisor kernel panic during backup to PBS

    During backups to PBS the hypervisor hard-crashes, and it is not consistent at which point it happens. Sometimes a backup succeeds and sometimes it does not, but after a few backups one will fail and fully crash the hypervisor. Does anyone have any idea where I can start debugging this...
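
    A minimal debugging sketch, assuming systemd-journald on the hypervisor: enable persistent journal storage so kernel messages survive the crash, then read the previous boot's kernel log.

      mkdir -p /var/log/journal              # enable persistent journal storage
      systemctl restart systemd-journald
      # After the next crash, inspect the previous boot's kernel messages:
      journalctl -k -b -1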
  2. Live migration failed - There's a migration process in progress

    Currently we are trying to live-migrate a VM to another server within the same cluster. The first migration successfully migrated all the attached disks, then hung at the "VM-state" migration step. After 15 minutes of no progress I pressed the "Stop" button to abort the migration. Now...
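
    A minimal recovery sketch for a stale migration lock, assuming the migration processes have actually exited (VMID 100 is a placeholder):

      # Confirm no migration-related processes are still running:
      ps aux | grep -i [m]igrate
      # Clear the leftover 'migrate' lock on the VM:
      qm unlock 100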
  3. LXC - pct remote-migrate fails

    I'm trying to remote-migrate my LXC containers between two separate clusters, but it keeps failing. Remote VM migrations do succeed (both online and offline). At this point I can't pinpoint the exact step the migration fails at. Things I have searched for: the error "failed: Insecure dependency...
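
    For reference, a sketch of a pct remote-migrate invocation; the host, API token, fingerprint, bridge, and storage names are all placeholders:

      pct remote-migrate 100 100 \
        'host=target.example.com,apitoken=PVEAPIToken=root@pam!mytoken=<secret>,fingerprint=<target-cert-fingerprint>' \
        --target-bridge vmbr0 --target-storage local-zfs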
  4. Limit Ceph Luminous RAM usage

    I am trying to limit my OSD RAM usage. Currently my three OSDs are using ~70% of my RAM (the RAM is now completely full, which lags the host). Is there a way to limit the RAM usage of each OSD?
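
    A minimal ceph.conf sketch for capping per-OSD memory; the values are illustrative, osd_memory_target applies to BlueStore OSDs and only arrived in later Luminous point releases, and older builds would use bluestore_cache_size instead:

      # /etc/ceph/ceph.conf
      [osd]
      osd_memory_target = 2147483648      # ~2 GiB per OSD (later Luminous and newer)
      # On earlier Luminous builds, cap the BlueStore cache instead:
      # bluestore_cache_size = 1073741824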
  5. Increase Ceph recovery speed

    I'm in the middle of migrating my current OSDs to BlueStore, but the recovery speed is quite low (5600 kB/s, ~10 objects/s). Is there a way to increase the speed? I currently have no virtual machines running on the cluster, so client performance doesn't matter at the moment; only the recovery is running.
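
    A common way to speed up recovery when no clients need the cluster, as a sketch; the values are assumptions and should be lowered again once clients return:

      # Raise recovery/backfill concurrency on all OSDs at runtime:
      ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'
      # Watch recovery throughput:
      ceph -s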
  6. Proxmox 5.0-23 Unable to start container after update

    One of my Samba containers is refusing to start after the latest update from BETA to release, and I have no idea what is causing it. I have two disks mounted from Ceph on the container. Config file:

      lxc.arch = amd64
      lxc.include = /usr/share/lxc/config/ubuntu.common.conf
      lxc.monitor.unshare = 1
      ...
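
    A standard way to get more detail out of a container that refuses to start, assuming container ID 101 as a placeholder:

      # Run the container in the foreground with full LXC debug logging:
      lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log
      # Then read /tmp/lxc-101.log to find the failing step.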
  7. [SOLVED] Proxmox 5.0-23 problem with rbd and containers

    Since the last update from the BETA to the release version, I am unable to delete my existing containers. Every time I try to delete a container I get this error message:

      2017-07-06 17:29:47.689666 7f4f08021700 0 client.1278217.objecter WARNING: tid 1 reply ops [] != request ops...
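
    If the container record must go even though the RBD cleanup keeps failing, a hedged workaround sketch is to remove the underlying image by hand; the pool name and IDs are assumptions:

      # List leftover images for the container (pool 'rbd' is a placeholder):
      rbd -p rbd ls | grep vm-101
      # Remove the image manually, then retry deleting the container:
      rbd -p rbd rm vm-101-disk-1
      pct destroy 101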
  8. Compression or deduplication in Ceph

    I am currently running a Proxmox 5.0 beta server with Ceph (Luminous) storage. I am trying to reduce the size of my Ceph pools, as I am running low on space. Does Ceph have some option to use compression or deduplication to reduce the on-disk size of a pool?
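
    For the record: BlueStore in Luminous supports per-pool compression, while deduplication is not available. A minimal sketch, assuming a pool named rbd:

      # Enable snappy compression on a BlueStore-backed pool:
      ceph osd pool set rbd compression_algorithm snappy
      ceph osd pool set rbd compression_mode aggressive
      # Only data written after this point is compressed, and only on BlueStore OSDs.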