Search results

  1. pve6-7 upgrade hung after grub-probe error disk not found

    Hello. I am running the update from pve6 to pve7. I've run pve6to7 and it runs cleanly. However, during apt upgrade I'm seeing a ton of 'leaked on vgs invocation' messages for all my /dev/mapper/pve-vm--xxx--disk--0 devices. All the errors spewed until it finally said "done". Now it's hanging at...
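    For context, a minimal sketch of the documented 6-to-7 path the poster is following, assuming the no-subscription repository; the 'leaked on vgs invocation' lines are LVM file-descriptor warnings and are usually harmless noise rather than the cause of the hang itself.

      pve6to7 --full                                    # checker, ideally re-run before and after
      sed -i 's/buster/bullseye/g' /etc/apt/sources.list
      echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
          > /etc/apt/sources.list.d/pve-install-repo.list
      apt update
      apt dist-upgrade                                  # the step that was hanging for the poster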
  2. Two VMs won't start after pve7 -> 8 upgrade

    Hello. I've just upgraded pve7 to pve8. The pve7to8 script ran clean after I fixed a couple issues. Now that it's done, 10 of the 12 VMs booted fine. The 2 that didn't are Debian 10 and Debian 11. Here is a capture of the console logging. These events are looping... [FAILED] Failed to start...
  3. After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    Hi. Thanks. I did the upgrade through Proxmox's Upgrade link as opposed to apt update. root@proxmox:/var/log/apt# pveversion -v proxmox-ve: 7.4-1 (running kernel: 5.15.126-1-pve) pve-manager: 7.4-17 (running version: 7.4-17/513c62be) pve-kernel-5.15: 7.4-9 pve-kernel-5.4: 6.4-20 pve-kernel-5.3...
  4. After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    I performed a standard update on my 7.4 proxmox server to get the latest deb11 patches. I did not see any errors during the upgrade, but after rebooting the box for the new kernel, I got this error: Booting `Proxmox VE GNU/Linux` Loading Linux 5.15.131-2-pve... error: file...
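    A hedged recovery sketch for a boot entry that points at a kernel image missing from /boot or the ESP, using the kernel version from the error above; it assumes the box can still be booted from an older kernel entry.

      proxmox-boot-tool status                            # is /boot managed by proxmox-boot-tool or plain grub?
      apt install --reinstall pve-kernel-5.15.131-2-pve   # restore the missing vmlinuz/initrd
      proxmox-boot-tool refresh                           # rewrite boot entries on proxmox-boot-tool systems
      update-grub                                         # equivalent step for legacy grub-on-/boot setups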
  5. Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Thanks, Lee. Yeah, I saw that. I'll have to try this in a lab. Backing up and restoring everything is just untenable. The main issue I gleaned from the manual is that there might be "ID conflicts". My takeaway is that, if I have 1 node using IDs 100, 101, 102 and a second node having completely...
  6. Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Hello. I have 5 separate nodes running now and I'm planning to create 1 cluster for all of them. However, they each have VM/CT IDs starting at "100". Will this present a problem as far as ID conflicts go, or will pvecm resolve these automagically? If I have to change IDs on the 4 nodes I wish to...
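    As the previous result notes, the manual warns about ID conflicts when joining nodes, and pvecm does not renumber guests automatically, so duplicates have to be dealt with before joining. A quick, hypothetical way to inventory the IDs on each standalone node first:

      # guest IDs are simply the config file names under /etc/pve
      ls /etc/pve/qemu-server/ /etc/pve/lxc/
      # or via the API, one line per guest (assumes jq is installed)
      pvesh get /cluster/resources --type vm --output-format json | jq -r '.[] | "\(.vmid) \(.node) \(.name)"'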
  7. Zpool replace with removed dead drive doesn't work

    Thanks again. I've added the new drive using its by-id value and it's showing as part of the pool, and resilvering has begun. Once it's done, I shall try again to remove the faulted drive. root@proxmox:~# zpool attach rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2...
  8. Zpool replace with removed dead drive doesn't work

    Sorry to ask more, but I'm really nervous about the possibility of blowing up the raid set... The syntax of attach is: zpool attach [-fsw] [-o property=value] pool device new_device Given my pool 'rpool' has 'sda2' as what I'm assuming is the "device", would the proper command be: zpool attach...
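    A hypothetical rendering of the command being asked about, using the by-id device path quoted in the result just above; -w simply waits for the resilver to complete.

      zpool attach -w rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2
      zpool status rpool                     # the new device should appear in the mirror, resilvering
      zpool detach rpool <faulted-device>    # only after the resilver finishes; name as shown by zpool status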
  9. Zpool replace with removed dead drive doesn't work

    I've used sgdisk to set up the new disk. Now the partitions match the disk in the pool. I'm thinking of adding it to the mirror first and letting it resilver before figuring out how to remove the dead/removed disk. Do I use 'zpool replace', 'zpool attach' or 'zpool add'? Do I use 'sdb' or the...
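    On the replace/attach/add question: zpool add would create a new top-level vdev rather than repair the mirror, so attach (to the surviving member) or replace (against the missing member) are the candidates, and the thread settles on attach. A hedged sketch of the sgdisk step described above, with sda standing in for the healthy pool disk and sdb for the new one:

      sgdisk /dev/sda -R /dev/sdb                       # replicate the healthy disk's partition table
      sgdisk -G /dev/sdb                                # give the copy fresh random GUIDs
      lsblk -o NAME,SIZE,PARTTYPE /dev/sda /dev/sdb     # confirm the layouts now match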
  10. Zpool replace with removed dead drive doesn't work

    Hey, Dunuin. Incorrect terminology on my part, then. I did install this server using a Proxmox installer image. I may have clicked on Initialize in the GUI for this new disk, but I don't recall. It doesn't have any data at all on it, so no problem reformatting and repartitioning it. Is there a...
  11. Zpool replace with removed dead drive doesn't work

    Hello, all. One of the drives in my zpool has failed, and so I removed it and ordered a replacement drive. Now that it's here, I am having problems replacing it. OS: Debian 11 Pve: 7.3-4 I've installed the replacement drive and it shows up under both lsblk and in the gui. Zpool status...
  12. Differing retention settings for different containers

    Hey, all. I've purchased a new 4TB volume expressly for holding backups. I noticed that in the UI you set the retention policy on the volume itself, so I set this value to 3. However, I have 1 rather large container that almost completely fills the backup volume with 3 backups. So, I'd like...
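    One hedged way to get per-guest retention is to keep the storage-wide default (what the GUI retention field writes into storage.cfg) and give the big container its own backup job with a tighter prune setting; the storage name, path and CT ID below are placeholders.

      # /etc/pve/storage.cfg
      dir: backup4tb
          path /mnt/backup4tb
          content backup
          prune-backups keep-last=3

      # separate job or manual run for the one large container
      vzdump 105 --storage backup4tb --mode snapshot --prune-backups keep-last=1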
  13. nfs-kernel-server on lxc: yes or no?

    Hello, I've not been able to configure proxmox 5.4-16 to allow lxc containers to serve directories via NFS. I've heard all kinds of different answers on whether it's possible or not. Can someone from Proxmox answer this definitively for me, please? I would rather not run my NFS server as a qemu VM if...
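    For what it's worth, the workaround usually discussed for serving NFS from a container on PVE 5.x is a privileged CT with a relaxed AppArmor profile; an unprivileged container cannot run the kernel NFS server. A sketch, with a made-up CT ID and no claim that this is a supported configuration:

      # /etc/pve/lxc/200.conf  (CT 200 is hypothetical; the container must be privileged)
      lxc.apparmor.profile: unconfined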
  14. Can't start some containers after upgrade 5.14 -> 6.1

    Everything is working again. Not sure why a dev directory was created in 2 of the container mount points, but that was probably the root cause. Issue solved.
  15. Can't start some containers after upgrade 5.14 -> 6.1

    Trying a "zfs mount -a" displayed an error that subvol-105-disk-0 and subvol-114-disk-0 were not empty, and therefore couldn't be mounted. Both of those subdirs had an empty "dev" directory. Once I removed them, "zfs mount -a" worked, and I could start 105 and 114. Also, the mount on 104 is now...
  16. Can't start some containers after upgrade 5.14 -> 6.1

    Evidently, the zfs pool did not auto-mount. I'm not sure if this is due to the upgrade from jessie -> buster, or the PVE upgrade. Issuing a 'zpool mount MEDIA' attached the drive to /MEDIA, and *one* container was able to start (104). However, none of the files were there. Other containers that...
  17. Can't start some containers after upgrade 5.14 -> 6.1

    It looks like I may have mounted the 2TB disk on /MEDIA on the proxmox server itself, since there is an empty /MEDIA dir. Also, the config shows that the /MEDIA dir is then shared onto 104. Trying to mount the device returns this error: root@proxmox:/media# mount /dev/sda1 /MEDIA mount...
  18. Can't start some containers after upgrade 5.14 -> 6.1

    Hello. After upgrading my proxmox from jessie -> buster, several of my containers won't start. I had added a second disk through the UI, and stored several containers on it. The volume shows up in the UI as a ZFS volume, and the containers show up under Content, but attempts to start always...
  19. NFS clarification – LXC container exporting an NFS share to other LXC containers

    Well, no takers yet. :/ What I've done is to create a VM for the host that needs to export the NFS share, and left the consumers as LXC containers. That works fine. I would like some confirmation on the circumstances under which LXC containers cannot export NFS. I used to be able to in earlier...
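    A minimal sketch of that VM-as-server arrangement; the export path, subnet, hostname and mount point are placeholders, and the LXC consumers still need permission to mount NFS (for example the mount=nfs container feature, or a bind mount passed in from the host).

      # on the NFS server VM: /etc/exports
      /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

      # apply on the VM, then mount from a consumer
      exportfs -ra
      mount -t nfs nfs-vm:/srv/share /mnt/share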