Search results

  1.

    [SOLVED] Hypervisor kernel panic during backup to PBS

    Ran Memtest86 and found out it was indeed memory instability from running DDR5 with EXPO enabled. I'm not pulling any real load or pushing for maximum performance, so it's no issue to disable EXPO and live with the small performance hit. Thanks for pointing me in the right direction...
  2.

    [SOLVED] Hypervisor kernel panic during backup to PBS

    Just updated to Proxmox version 8.2 to see if that fixes anything, but sadly it did not help. I'm starting to suspect a hardware/BIOS instability is causing these issues.
  3.

    [SOLVED] Hypervisor kernel panic during backup to PBS

    During backups to PBS the hypervisor does a hard crash; the point at which it crashes is not consistent. Sometimes a backup succeeds and sometimes it does not, but after a few backups one will fail and fully crash the hypervisor. Does anyone have any idea where I can start debugging this... [see the log sketch after the results]
  4.

    Live migration failed - There's a migration process in progress

    Hey Fiona, Issuing another migrate_cancel command using `qm monitor 162` does not seem to do anything. There is no command output or syslog entry indicating that it did anything. Just tried another migration of the VM after issuing the migrate_cancel and it failed with the same error as... [see the monitor sketch after the results]
  5.

    Live migration failed - There's a migration process in progress

    Currently we are trying to live-migrate a VM to another server within the same cluster. The first migration successfully migrated all the attached disks but then hung at the "VM-state" migration step. After 15 minutes of no progress I pressed the "Stop" button to abort the migration. Now...
  6.

    LXC - pct remote-migrate fails

    As a small update, I have tried editing the Perl files to remove the strict mode that enforces the dependency sanitization. But this did not seem to have any effect on the execution of the code; it still throws the same errors, so I think the strict mode is enforced by a dependency higher up in...
  7.

    LXC - pct remote-migrate fails

    I'm trying to remote-migrate my LXC containers between 2 separate clusters but it keeps failing. Remote VM migrations do succeed (both online and offline). At this point I can't pinpoint exactly where the migration fails. Things I have searched for: the error "failed: Insecure dependency... [see the remote-migrate sketch after the results]
  8.

    Proxmox VE 7.2 released!

    I am creating a new ceph erasure coded pool using the following command:
    pveceph pool create slow_ceph --erasure-coding k=2,m=1,failure-domain=osd
    Using this I would expect a pool with 2 data chunks and 1 coding chunk. So that would mean I get a pool that is able to use 66% of my pool's space as... [see the erasure-coding sketch after the results]
  9.

    Limit Ceph Luminous RAM usage

    Currently I have 8 GB of RAM and three 3 TB OSDs. I know I don't have enough RAM for the OSDs, which is why I want to limit it.
  10.

    Limit Ceph Luminous RAM usage

    I am trying to limit my OSD RAM usage. Currently my OSDs (3) are using ~70% of my RAM (the RAM is now completely full and lagging the host). Is there a way to limit the RAM usage for each OSD? [see the ceph.conf sketch after the results]
  11.

    Increase Ceph recovery speed

    I'm in the middle of migrating my current OSDs to Bluestore but the recovery speed is quite low (5600kb/s, ~10 objects/s). Is there a way to increase the speed? I currently have no virtual machines running on the cluster, so performance doesn't matter at the moment; only the recovery is running. [see the recovery sketch after the results]
  12.

    Proxmox 5.0-23 Unable to start container after update

    One of my samba containers is refusing to start after the latest update from BETA to release and I have no idea what is causing it. I have 2 disks mounted from ceph on the container. [see the debug-start sketch after the results] Config file:
    lxc.arch = amd64
    lxc.include = /usr/share/lxc/config/ubuntu.common.conf
    lxc.monitor.unshare = 1
    ...
  13.

    [SOLVED] Proxmox 5.0-23 problem with rbd and containers

    Rebooted the server 3 times now and somehow it just magically went away; I have no idea what was causing this issue.
  14.

    [SOLVED] Proxmox 5.0-23 problem with rbd and containers

    Since the last update from the BETA to the release version I am unable to delete my existing containers. Every time I try to delete a container I get this error message:
    2017-07-06 17:29:47.689666 7f4f08021700 0 client.1278217.objecter WARNING: tid 1 reply ops [] != request ops...
  15.

    Compression or deduplication in Ceph

    In addition to switching to Bluestore I found that Ceph supports Cache Tiering. Would it be possible to add a "cold-storage" pool to the existing "hot-storage" pool (the current pools are SSDs)? If so, which of the 2 pools do I have to create the Proxmox RBD storage on? The "cold-storage" pool...
  16.

    Compression or deduplication in Ceph

    I am currently running a proxmox 5.0 beta server with ceph (luminous) storage. I am trying to reduce the size of my ceph pools as I am running low on space. Does ceph have some kind of option to use compression or deduplication to reduce the size of the pool on disk?
  17.

    Unable to install 5.0b1

    The steps I took for fixing it: boot up an Ubuntu (desktop) environment on the server, open a Terminal, display the names of all VGs and PVs using "pvdisplay", wipe all the VGs using "vgremove VGNAME", then wipe all the PVs using "pvremove PVNAME". Everything is up and running now ^^ [see the consolidated sketch after the results]
  18.

    Unable to install 5.0b1

    I have the same problem. I tried this but it still refuses to initialize the volume "/dev/sdb3".
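
Sketches for the threads above

For the kernel-panic thread (result 3), a minimal starting point for pulling kernel messages from the boot that crashed. This assumes a systemd journal; after a hard crash the tail of the log is often lost, so a persistent journal (Storage=persistent in /etc/systemd/journald.conf, with /var/log/journal present) helps capture it.

    # Kernel messages from the previous (crashed) boot
    journalctl -k -b -1
    # List recorded boots to pick the right one
    journalctl --list-boots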
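
For the stuck-migration thread (results 4 and 5), a sketch of the monitor session the poster describes. `qm monitor 162` and `migrate_cancel` come from the thread itself; `info migrate` is a standard QEMU HMP command for checking whether a migration is still registered.

    # Attach to the QEMU monitor of VM 162 and inspect the migration
    qm monitor 162
    qm> info migrate       # shows the current migration state, if any
    qm> migrate_cancel     # asks QEMU to abort the in-progress migration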
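
For the remote-migrate thread (results 6 and 7), a hedged sketch of a `pct remote-migrate` invocation. The VMIDs, endpoint fields, bridge, and storage names below are all placeholders, and since the command is marked experimental the exact option set should be verified with `pct help remote-migrate` on your version.

    # Placeholder values throughout; adjust to your clusters
    pct remote-migrate 100 100 \
      'host=target.example.com,apitoken=PVEAPIToken=root@pam!mytoken=<secret>,fingerprint=<cert-fp>' \
      --target-bridge vmbr0 --target-storage local-zfs \
      --restart    # restart mode, needed for a running container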
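
For the erasure-coding post (result 8): with k=2,m=1 every object is split into 2 data chunks plus 1 coding chunk, so the usable fraction is k/(k+m) = 2/3, i.e. the ~66% the poster expects. A hedged way to double-check what was created; the pool name pveceph generates may differ from the name given (e.g. a -data suffix), so list the pools first.

    # The command from the thread
    pveceph pool create slow_ceph --erasure-coding k=2,m=1,failure-domain=osd
    # Verify which profile the pool uses and its k/m values
    ceph osd pool ls                                   # find the actual pool name
    ceph osd pool get <poolname> erasure_code_profile  # profile attached to it
    ceph osd erasure-code-profile get <profilename>    # shows k, m, failure domain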
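
For the Luminous RAM-limit thread (results 9 and 10), a hedged ceph.conf sketch capping the Bluestore cache per OSD. The 512 MiB figure is only an illustration for an 8 GB host with 3 OSDs; note this bounds the cache, not the OSD's total memory footprint, and Filestore OSDs are tuned differently.

    # /etc/ceph/ceph.conf -- illustrative value, not a recommendation
    [osd]
    bluestore_cache_size = 536870912    # 512 MiB cache per OSD, in bytes

    # Restart each OSD so the setting takes effect (the ID is a placeholder)
    systemctl restart ceph-osd@0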
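
For the recovery-speed thread (result 11): with no VMs running, the usual knobs are the backfill and recovery throttles, which can be injected at runtime. The values are illustrative, and the defaults should be restored once the Bluestore migration finishes.

    # Raise recovery/backfill concurrency on all OSDs (illustrative values)
    ceph tell osd.* injectargs '--osd_max_backfills 4 --osd_recovery_max_active 8'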
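
For the container-start thread (result 12), a standard way to get a foreground start with debug logging out of LXC; on Proxmox the container name is its VMID. The VMID 101 and log path are placeholders.

    # Start the container in the foreground with debug logging
    lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log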
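
For the 5.0b1 install thread (result 17), the poster's steps consolidated into one sequence. VGNAME and PVNAME stand for whatever pvdisplay reports; this destroys all LVM data on the disks, so only run it on a box you intend to reinstall.

    # From a live Ubuntu environment, as described in the post
    pvdisplay            # list all PVs and the VGs they belong to
    vgremove VGNAME      # remove each volume group reported above
    pvremove PVNAME      # then wipe the LVM label from each PV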
