Recent content by MikeC

  1.

    Restore VM files from host

    Ah thanks…just read about that and came back to put that in here. Was able to pct mount and the files are now where I was looking.
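For reference, the `pct mount` sequence described above can be sketched as follows (the CTID 109 is illustrative; adjust to your container):

```shell
# Mount the container's root filesystem on the host
pct mount 109

# The container's files are then visible under the host path
# /var/lib/lxc/<CTID>/rootfs
ls /var/lib/lxc/109/rootfs

# Unmount when finished so backups and starts work normally again
pct unmount 109
```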
  2.

    Restore VM files from host

    Hello. I have upgraded one of my proxmox hosts from 6 to 7. One of the old VMs no longer boots up completely due to cgroup changes. Although I can use 'pct enter' to log into the VM, only 3 processes are running and the host warns that it won't start up completely due to the cgroup issue. What I...
  3.

    Restore: tar: no space left on device

    I suppose I could just copy that command and add /var/lib/php/sessions to an --exclude directive, huh? But what else do I need to do from the command line to set up the LXC container?
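A minimal sketch of the manual extraction being proposed, assuming GNU tar with zstd support; the archive name comes from the thread below, while the target path and excluded directory are illustrative:

```shell
# Extract the vzdump archive into the container's (already created)
# root filesystem, skipping the directory that fills the disk.
tar -xpf vzdump-lxc-109-2025_06_05-23_03_37.tar.zst \
    --zstd \
    --numeric-owner \
    --exclude='./var/lib/php/sessions/*' \
    -C /var/lib/lxc/109/rootfs
```

This only covers the filesystem contents; the container's configuration would still need to be put in place separately.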
  4.

    Restore: tar: no space left on device

    Hello. I'm trying to restore a backup for a 200G LXC container, but I get the following errors: Recovering backed-up configuration from 'nas01:backup/vzdump-lxc-109-2025_06_05-23_03_37.tar.zst' Logical volume "vm-401-disk-0" created. Creating filesystem with 52428800 4k blocks and 13107200...
  5.

    VM won't start after 6 -> 7 upgrade

    I'm seeing this now... WARN: old systemd (< v232) detected, container won't run in a pure cgroupv2 environment! Please see documentation -> container -> cgroup version. Task finished with 1 warning(s)! Any way to get this working under v7 so I can at least get the os updated?
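One host-side workaround documented for this warning (an assumption here; this thread's snippet does not show the eventual fix) is booting the PVE 7 host with the legacy cgroup hierarchy so guests with systemd older than v232 can still start:

```shell
# Append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   systemd.unified_cgroup_hierarchy=0
# then regenerate the boot configuration and reboot the host:
update-grub
reboot
```

Note this affects the whole host, not just the one container, so the Proxmox documentation's section on container cgroup versions is worth reading first.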
  6.

    VM won't start after 6 -> 7 upgrade

    Hello. I have 1 VM that won't start after the update. It's a Debian 8 instance. I can get to the console via pct enter, but there are only 3 processes running, so it's hung. Nothing was wrong with the VM prior to the upgrade. Maybe due to the cgroup changes? Additionally, I can't back up the...
  7.

    pve6-7 upgrade hung after grub-probe error disk not found

    Hello. I am running the update from pve6 to pve7. I've run pve6to7 and it runs cleanly. However, during apt upgrade I'm seeing a ton of 'leaked on vgs invocation' messages for all my /dev/mapper/pve-vm--xxx--disk--0 devices. All the errors spewed until it finally said "done". Now it's hanging at...
  8.

    Two VMs won't start after pve7 -> 8 upgrade

    Hello. I've just upgraded pve7 to pve8. The pve7to8 script ran clean after I fixed a couple issues. Now that it's done, 10 of the 12 VMs booted fine. The 2 that didn't are Debian 10 and Debian 11. Here is a capture of the console logging. These events are looping... [FAILED] Failed to start...
  9.

    After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    Hi. Thanks. I did the upgrade through Proxmox's Upgrade link as opposed to apt update. root@proxmox:/var/log/apt# pveversion -v proxmox-ve: 7.4-1 (running kernel: 5.15.126-1-pve) pve-manager: 7.4-17 (running version: 7.4-17/513c62be) pve-kernel-5.15: 7.4-9 pve-kernel-5.4: 6.4-20 pve-kernel-5.3...
  10.

    After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    I performed a standard update on my 7.4 proxmox server to get the latest deb11 patches. I did not see any errors during the upgrade, but after rebooting the box for the new kernel, I got this error: Booting `Proxmox VE GNU/Linux` Loading Linux 5.15.131-2-pve... error: file...
  11.

    Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Thanks, Lee. Yeah, I saw that. I'll have to try this in a lab. Backing up and restoring everything is just untenable. The main issue I gleaned from the manual is that there might be "ID conflicts". My takeaway is that, if I have 1 node using IDs 100, 101, 102 and a second node having completely...
  12.

    Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Hello. I have 5 separate nodes running now and I'm planning to create 1 cluster for all of them. However, they each have VM/CT IDs starting at "100". Will this present a problem, as far as ID conflicts, or will pvecm resolve these automagically? If I have to change IDs on the 4 nodes I wish to...
  13.

    Zpool replace with removed dead drive doesn't work

    Thanks again. I've added the new drive using its by-id value and it's showing as part of the pool, and resilvering has begun. Once it's done, then I shall try again to remove the faulted drive. root@proxmox:~# zpool attach rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2...
  14.

    Zpool replace with removed dead drive doesn't work

    Sorry to ask more, but I'm really nervous about the possibility of blowing up the raid set... The syntax of attach is: zpool attach [-fsw] [-o property=value] pool device new_device Given my pool 'rpool' has 'sda2' as what I'm assuming is the "device", would the proper command be: zpool attach...
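Putting the two threads above together, the attach-then-detach flow being asked about can be sketched like this (device names are illustrative; using stable /dev/disk/by-id paths is the safer choice, as the follow-up post notes):

```shell
# Attach the replacement disk as a mirror of the existing pool member;
# 'rpool' is the pool, 'sda2' the surviving device, the by-id path the
# new device.
zpool attach rpool sda2 /dev/disk/by-id/ata-NEWDISK-part2

# Watch the resilver progress
zpool status rpool

# Only after the resilver completes, detach the faulted old device
zpool detach rpool old-faulted-device
```

`zpool attach` adds a mirror leg without destroying data, so a typo in the device argument fails safely rather than overwriting the pool; detaching before the resilver finishes, however, would leave the pool unprotected.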