Search results

  1. After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    Hi. Thanks. I did the upgrade through Proxmox's Upgrade link as opposed to apt update.

        root@proxmox:/var/log/apt# pveversion -v
        proxmox-ve: 7.4-1 (running kernel: 5.15.126-1-pve)
        pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
        pve-kernel-5.15: 7.4-9
        pve-kernel-5.4: 6.4-20
        pve-kernel-5.3...
  2. After PVE 7.4 upgrade, booting could not find vmlinuz-5.15.131-2-pve

    I performed a standard update on my 7.4 proxmox server to get the latest deb11 patches. I did not see any errors during the upgrade, but after rebooting the box for the new kernel, I got this error:

        Booting `Proxmox VE GNU/Linux`
        Loading Linux 5.15.131-2-pve ...
        error: file...
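    A minimal troubleshooting sketch for this kind of boot failure, assuming a PVE 7.x host that boots via proxmox-boot-tool; the kernel package name is taken from the error above and may differ, and a system booting plain GRUB would run update-grub instead of the refresh step:

        # Check which kernel images are actually present on the boot partition(s)
        proxmox-boot-tool kernel list

        # Reinstall the kernel package named in the error, then sync boot entries
        apt install --reinstall pve-kernel-5.15.131-2-pve
        proxmox-boot-tool refresh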
  3. Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Thanks, Lee. Yeah, I saw that. I'll have to try this in a lab. Backing up and restoring everything is just untenable. The main issue I gleaned from the manual is that there might be "ID conflicts". My takeaway is that, if I have 1 node using IDs 100, 101, 102 and a second node having completely...
  4. Creating cluster with 5 nodes where many containers/VMs have same VMID/CTID

    Hello. I have 5 separate nodes running now and I'm planning to create 1 cluster for all of them. However, they each have VM/CT IDs starting at "100". Will this present a problem, as far as ID conflicts, or will pvecm resolve these automagically? If I have to change IDs on the 4 nodes I wish to...
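    A minimal sketch of the cluster setup the question describes, assuming current pvecm behaviour: guest IDs are not renumbered automatically, and the docs require a joining node to hold no guests, so conflicting IDs have to be resolved beforehand. The cluster name and IP address are placeholders:

        # On the first node, create the cluster
        pvecm create my-cluster

        # On each remaining node (after its guests have been moved or re-ID'd), join it
        pvecm add 192.168.1.10

        # Verify membership
        pvecm status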
  5. Zpool replace with removed dead drive doesn't work

    Thanks again. I've added the new drive using its by-id value and it's showing as part of the pool, and resilvering has begun. Once it's done, I shall try again to remove the faulted drive.

        root@proxmox:~# zpool attach rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2...
  6. Zpool replace with removed dead drive doesn't work

    Sorry to ask more, but I'm really nervous about the possibility of blowing up the raid set... The syntax of attach is:

        zpool attach [-fsw] [-o property=value] pool device new_device

    Given my pool 'rpool' has 'sda2' as what I'm assuming is the "device", would the proper command be: zpool attach...
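    A hedged sketch of what the full command could look like for this mirror, using the by-id path quoted in result 5; the old-device value is a placeholder that should match whatever zpool status lists:

        # Attach the new partition as a mirror of the existing device and wait
        # for the resilver to complete (-w)
        zpool attach -w rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2

        # After the resilver, detach the dead/removed device
        zpool detach rpool <old-device>

        # Confirm pool health
        zpool status rpool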
  7. Zpool replace with removed dead drive doesn't work

    I've used sgdisk to set up the new disk. Now the partitions match the disk in the pool. I'm thinking of adding it first to the mirror and letting it resilver before figuring out how to remove the dead/removed disk. Do I use 'zpool replace', 'zpool attach' or 'zpool add'? Do I use 'sdb' or the...
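    For a mirror, 'zpool add' would create a new top-level vdev rather than repair the mirror, so 'replace' or 'attach' followed by 'detach' are the usual choices. A sketch of the partition-table copy the post describes, with example device names (sda = healthy pool disk, sdb = new disk):

        # Copy the partition layout from the healthy disk to the new disk,
        # then give the new disk fresh partition GUIDs
        sgdisk --replicate=/dev/sdb /dev/sda
        sgdisk --randomize-guids /dev/sdb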
  8. Zpool replace with removed dead drive doesn't work

    Hey, Dunuin. Incorrect terminology on my part, then. I did install this server using a Proxmox installer image. I may have clicked on Initialize in the GUI for this new disk, but I don't recall. It doesn't have any data at all on it, so no problem reformatting and repartitioning it. Is there a...
  9. Zpool replace with removed dead drive doesn't work

    Hello, all. One of the drives in my zpool has failed, and so I removed it and ordered a replacement drive. Now that it's here, I am having problems replacing it. OS: Debian 11, PVE: 7.3-4. I've installed the replacement drive and it shows up under both lsblk and in the GUI. Zpool status...
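    A minimal sketch of the one-step alternative to attach/detach, assuming a simple mirror; if the dead drive has already been pulled, zpool status may list it only as a long numeric guid, and that guid is what goes in place of the old-device placeholder:

        # Identify the FAULTED/REMOVED member
        zpool status rpool

        # Replace it with the new disk's partition (both values are placeholders)
        zpool replace -w rpool <old-device-or-guid> /dev/disk/by-id/<new-disk>-part2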
  10. Differing retention settings for different containers

    Hey, all. I've purchased a new 4 TB volume expressly for holding backups. I noticed that in the UI you set the retention policy on the volume itself, so I set this value to 3. However, I have 1 rather large container that almost completely fills the backup volume with 3 backups. So, I'd like...
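    Retention can also be expressed per storage in /etc/pve/storage.cfg and, in recent PVE versions, per backup job, which takes precedence; one common workaround is to give the large container its own job or invocation with a smaller keep count. A sketch with example names and values:

        # /etc/pve/storage.cfg -- storage ID, path, and keep count are examples
        dir: backup4tb
                path /mnt/backup4tb
                content backup
                prune-backups keep-last=3

        # Back up the big container with its own retention (example VMID)
        vzdump 105 --storage backup4tb --prune-backups keep-last=1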
  11. nfs-kernel-server on lxc: yes or no?

    Hello, I've not been able to configure proxmox 5.4-16 to allow lxc containers to serve directories via NFS. I've heard all kinds of different answers on whether it's possible or not. Can someone from Proxmox answer this definitively for me please? I would rather not run my NFS server as qemu if...
  12. Can't start some containers after upgrade 5.14 -> 6.1

    Everything is working again. Not sure why a dev directory was created in 2 of the container mount points, but that was probably the root cause. Issue solved.
  13. Can't start some containers after upgrade 5.14 -> 6.1

    Trying a "zfs mount -a" displayed an error that subvol-105-disk-0 and subvol-114-disk-0 were not empty, and therefore couldn't be mounted. Both of those subdirs had an empty "dev" directory. Once I removed them, "zfs mount -a" worked, and I could start 105 and 114. Also, the mount on 104 is now...
  14. Can't start some containers after upgrade 5.14 -> 6.1

    Evidently, the zfs pool did not auto-mount. I'm not sure if this is due to the upgrade from jessie -> buster, or the PVE upgrade. Issuing a 'zpool mount MEDIA' attached the drive to /MEDIA, and *one* container was able to start (104). However, none of the files were there. Other containers that...
  15. Can't start some containers after upgrade 5.14 -> 6.1

    It looks like I may have mounted the 2TB disk on /MEDIA on the proxmox server itself, since there is an empty /MEDIA dir. Also, the config shows that the /MEDIA dir is then shared into 104. Trying to mount the device returns this error:

        root@proxmox:/media# mount /dev/sda1 /MEDIA
        mount...
  16. Can't start some containers after upgrade 5.14 -> 6.1

    Hello. After upgrading my proxmox from jessie -> buster, several of my containers won't start. I had added a second disk through the UI, and stored several containers on it. The volume shows up in the UI as a ZFS volume, and the containers show up under Content, but attempts to start always...
  17. NFS clarification – LXC container exporting an NFS share to other LXC containers

    Well, no takers yet. :/ What I've done is to create a VM for the host that needs to export the NFS share, and left the consumers as LXC containers. That works fine. I would like some confirmation of the circumstances under which LXC containers cannot export NFS. I used to be able to in earlier...
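    A minimal sketch of the layout described above (a VM exports, LXC containers consume), with hostnames, subnet, and paths as placeholders; consuming containers on PVE generally need NFS mounts allowed for the CT (e.g. 'features: mount=nfs' in its config) or the share bind-mounted in from the host:

        # On the exporting VM (server1): /etc/exports, then reload exports
        /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
        # exportfs -ra

        # On a consuming container (server2-4)
        mount -t nfs server1:/srv/share /mnt/share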
  18. NFS clarification – LXC container exporting an NFS share to other LXC containers

    Hello, all. I have to share an NFS dir from one host (server1) to three other hosts (server2-4). I've read many different threads here on this, and I'd like some clarification. 1) Firstly, can an LXC container share an NFS mount point with other LXC containers on the same node? 2) If not, what...
  19. Proxmox node: can't mount NFS share

    This is really strange. When I try to create the NFS storage-whatever, I get that mkdir error, but the OS actually mounts the share in /mnt/pve/.
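    A small sketch of how one might sanity-check this from the node, assuming the storage is added with pvesm; the server address, export path, and storage ID are placeholders. PVE mounts NFS storages under /mnt/pve/<storage-id>, which matches what the post observes:

        # Confirm the export is visible from this node
        showmount -e 192.168.1.20

        # Add the storage (the GUI does the equivalent of this)
        pvesm add nfs mynfs --server 192.168.1.20 --export /srv/backups --content backup

        # Check where it ended up mounted
        findmnt /mnt/pve/mynfs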
