Search results

  1. Restore VM files from host

    Ah, thanks… I just read about that and came back to add it here. I was able to pct mount the container, and the files are now where I was looking.
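A minimal sketch of that workflow, assuming CTID 109 (the ID is illustrative):

```shell
# Mount the container's root filesystem on the PVE host
pct mount 109
# the files are then visible under the host path
ls /var/lib/lxc/109/rootfs
# release the mount when done copying files out
pct unmount 109
```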
  2. Restore VM files from host

    Hello. I have upgraded one of my Proxmox hosts from 6 to 7. One of the old VMs no longer boots up completely due to cgroup changes. Although I can use 'pct enter' to log into the VM, only 3 processes are running, and the host warns that it won't start up completely due to the cgroup issue. What I...
  3. Restore: tar: no space left on device

    I suppose I could just copy that command and add /var/lib/php/sessions to an --exclude directive, huh? But what else do I need to do from the command line to set up the LXC container?
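The --exclude behavior can be checked with plain GNU tar before touching the real archive; all paths below are illustrative:

```shell
# Build a throwaway tree, then archive it while excluding the sessions dir
mkdir -p demo/var/lib/php/sessions demo/etc
echo "keep" > demo/etc/hostname
echo "drop" > demo/var/lib/php/sessions/sess_x
tar -cf demo.tar --exclude='var/lib/php/sessions' -C demo .
# list contents: the sessions directory and its files are absent
tar -tf demo.tar
```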
  4. Restore: tar: no space left on device

    Hello. I'm trying to restore a backup for a 200G LXC container, but I get the following errors: Recovering backed-up configuration from 'nas01:backup/vzdump-lxc-109-2025_06_05-23_03_37.tar.zst' Logical volume "vm-401-disk-0" created. Creating filesystem with 52428800 4k blocks and 13107200...
  5. VM won't start after 6 -> 7 upgrade

    I'm seeing this now... WARN: old systemd (< v232) detected, container won't run in a pure cgroupv2 environment! Please see documentation -> container -> cgroup version. Task finished with 1 warning(s)! Any way to get this working under v7 so I can at least get the OS updated?
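The workaround documented by Proxmox for this warning is to boot the host with the legacy cgroup hierarchy so old guests (systemd < v232) can still start. A sketch, assuming a GRUB-based install:

```shell
# 1) In /etc/default/grub, append to the existing kernel command line:
#      GRUB_CMDLINE_LINUX_DEFAULT="... systemd.unified_cgroup_hierarchy=0"
# 2) apply the new command line and reboot the host
update-grub
reboot
```

Systemd-boot installs edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead.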
  6. VM won't start after 6 -> 7 upgrade

    Hello. I have 1 VM that won't start after the update. It's a Debian 8 instance. I can get to the console via pct enter, but there are only 3 processes running, so it's hung. Nothing was wrong with the VM prior to the upgrade. Maybe due to the cgroup changes? Additionally, I can't back up the...
  7. pve6-7 upgrade hung after grub-probe error disk not found

    Hello. I am running the update from pve6 to pve7. I've run pve6to7 and it runs cleanly. However, during apt upgrade I'm seeing a ton of 'leaked on vgs invocation' messages for all my /dev/mapper/pve-vm--xxx--disk--0 devices. All the errors spewed until it finally said "done". Now it's hanging at...
  8. Two VMs won't start after pve7 -> 8 upgrade

    Hello. I've just upgraded pve7 to pve8. The pve7to8 script ran clean after I fixed a couple of issues. Now that it's done, 10 of the 12 VMs booted fine. The 2 that didn't are Debian 10 and Debian 11. Here is a capture of the console logging. These events are looping... [FAILED] Failed to start...
  9. After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    Hi. Thanks. I did the upgrade through Proxmox's Upgrade link as opposed to apt update. root@proxmox:/var/log/apt# pveversion -v proxmox-ve: 7.4-1 (running kernel: 5.15.126-1-pve) pve-manager: 7.4-17 (running version: 7.4-17/513c62be) pve-kernel-5.15: 7.4-9 pve-kernel-5.4: 6.4-20 pve-kernel-5.3...
  10. After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    I performed a standard update on my 7.4 proxmox server to get the latest deb11 patches. I did not see any errors during the upgrade, but after rebooting the box for the new kernel, I got this error: Booting `Proxmox VE GNU/Linux` Loading Linux 5.15.131-2-pve... error: file...
  11. Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Thanks, Lee. Yeah, I saw that. I'll have to try this in a lab. Backing up and restoring everything is just untenable. The main issue I gleaned from the manual is that there might be "ID conflicts". My takeaway is that, if I have 1 node using IDs 100, 101, 102 and a second node having completely...
  12. Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Hello. I have 5 separate nodes running now and I'm planning to create 1 cluster for all of them. However, they each have VM/CT IDs starting at "100". Will this present a problem, as far as ID conflicts, or will pvecm resolve these automagically? If I have to change IDs on the 4 nodes I wish to...
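If re-IDing does turn out to be necessary, one commonly described manual approach is to move the guest's config file and rename its disk to the new ID; a hedged sketch, assuming container 100 on default LVM-thin storage (all names illustrative):

```shell
# Re-ID container 100 -> 250 on one node before joining the cluster
pct stop 100
# rename the backing volume to match the new ID
lvrename pve/vm-100-disk-0 pve/vm-250-disk-0
# update the disk reference inside the config, then move the config itself
sed -i 's/vm-100-disk-0/vm-250-disk-0/' /etc/pve/lxc/100.conf
mv /etc/pve/lxc/100.conf /etc/pve/lxc/250.conf
pct start 250
```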
  13. Zpool replace with removed dead drive doesn't work

    Thanks again. I've added the new drive using its by-id value and it's showing as part of the pool, and resilvering has begun. Once it's done, I shall try again to remove the faulted drive. root@proxmox:~# zpool attach rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2...
  14. Zpool replace with removed dead drive doesn't work

    Sorry to ask more, but I'm really nervous about the possibility of blowing up the RAID set... The syntax of attach is: zpool attach [-fsw] [-o property=value] pool device new_device Given my pool 'rpool' has 'sda2' as what I'm assuming is the "device", would the proper command be: zpool attach...
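Filling that syntax in with the pool details quoted in the thread (device names are from the post and may differ on another system), a sketch:

```shell
# 'rpool' is the pool, 'sda2' the surviving mirror member, and the by-id
# path is the new disk's partition
zpool attach rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2
# watch the resilver progress
zpool status rpool
# after the resilver completes, drop the dead device from the mirror
zpool detach rpool <dead-device>
```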
  15. Zpool replace with removed dead drive doesn't work

    I've used sgdisk to set up the new disk. Now the partitions match the disk in the pool. I'm thinking of adding it first to the mirror and letting it resilver before figuring out how to remove the dead/removed disk. Do I use 'zpool replace', 'zpool attach' or 'zpool add'? Do I use 'sdb' or the...
  16. Zpool replace with removed dead drive doesn't work

    Hey, Dunuin. Incorrect terminology on my part, then. I did install this server using a Proxmox installer image. I may have clicked Initialize in the GUI for this new disk, but I don't recall. It doesn't have any data at all on it, so no problem reformatting and repartitioning it. Is there a...
  17. Zpool replace with removed dead drive doesn't work

    Hello, all. One of the drives in my zpool has failed, so I removed it and ordered a replacement drive. Now that it's here, I am having problems replacing it. OS: Debian 11, PVE: 7.3-4. I've installed the replacement drive and it shows up under both lsblk and in the GUI. Zpool status...
  18. Differing retention settings for different containers

    Hey, all. I've purchased a new 4TB volume expressly for holding backups. I noticed that, in the UI, you set the retention policy on the volume itself, so I set this value to 3. However, I have 1 rather large container that almost completely fills the backup volume with 3 backups. So, I'd like...
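vzdump accepts a per-job --prune-backups option that overrides the storage-level retention, so one possibility is a separate backup job for the large container; a sketch with illustrative CTID and storage name:

```shell
# Keep only the newest backup of the large container (CTID 108 and storage
# name 'backup4tb' are illustrative), regardless of the volume default
vzdump 108 --storage backup4tb --mode snapshot --prune-backups keep-last=1
```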
  19. nfs-kernel-server on lxc: yes or no?

    Hello, I've not been able to configure Proxmox 5.4-16 to allow LXC containers to serve directories via NFS. I've heard all kinds of different answers on whether it's possible or not. Can someone from Proxmox answer this definitively for me, please? I would rather not run my NFS server as a QEMU VM if...