I have a 3-node cluster that is hitting this error.
It was installed from the Proxmox ISO, never plain Debian, so there are no Linux kernel packages besides PVE's.
The nodes did have a test Ceph install ONCE, but it was never used; I removed everything related to it.
I did try to add the repos back and redo the upgrades, but that...
The original was on ZFS, but the VM was tiny: just a basic headless Fedora with FreeIPA installed. The backup is 1.57GB of data in total.
Below is the output from PVE:
root@prox:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)...
I have restored 2 other VMs and 1 other CT, which are larger, and those went fine. However, this one small LXC container (1.57GB) will not restore; it says it ran out of disk space:
recovering backed-up configuration from 'pbs:backup/ct/104/2021-05-10T01:14:32Z'
Logical volume "vm-104-disk-0"...
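If the GUI restore keeps failing with out-of-space, one workaround to try is a CLI restore with an explicit rootfs size. This is only a sketch: the storage name `local-zfs` and the 8G size are assumptions, not from this thread; the VMID and archive path are the ones from the log above.

```shell
# Sketch, not verified on this cluster. Restore the CT from PBS,
# giving the root filesystem an explicit size on a chosen storage:
#   pct restore 104 'pbs:backup/ct/104/2021-05-10T01:14:32Z' \
#       --storage local-zfs --rootfs local-zfs:8
```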
That cmdline check helped me fix it. I didn't rename all the entries in grub.cfg when I changed the pool name.
I swear I did! But that fixed it; I'm back up. Thanks :)
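For anyone hitting the same thing: after a pool rename, any boot entry still pointing at the old pool name will panic at boot. A quick way to spot stragglers is to grep grub.cfg for the old name. The sketch below uses a throwaway sample file so it runs anywhere; on a real system you would grep `/boot/grub/grub.cfg` and check `/proc/cmdline` instead. The rename `rpool` -> `rootpool` is the one from this thread.

```shell
# On a real system:
#   grep -n 'root=ZFS=rpool/' /boot/grub/grub.cfg
#   cat /proc/cmdline

# Self-contained demo with a sample grub.cfg fragment:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
linux /ROOT/pve-1@/boot/vmlinuz-5.4.106-1-pve root=ZFS=rpool/ROOT/pve-1 ro quiet
linux /ROOT/pve-1@/boot/vmlinuz-5.4.106-1-pve root=ZFS=rootpool/ROOT/pve-1 ro quiet
EOF

# Any line still pointing at the old pool name is a boot entry
# that was missed during the rename (prints line 1 here):
grep -n 'root=ZFS=rpool/' "$cfg"
rm -f "$cfg"
```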
I still get an "unable to mount /, it's not empty" error?
I renamed my rpool to rootpool, and my data drives are on rpool. (I did this because, when replicating between cluster members, I couldn't specify the zpool.)
And I see zpool list shows the pools. An exit command gives me another kernel panic.
I just tried that. It shows my ZFS RAID1. However, when I do exit from the initramfs prompt, it then says "kernel panic - attempted to kill init" (I think?).
Gonna reboot and try again. But I'm not sure this is the same issue, as it works fine until I run the latest updates.
I had a node that I was updating to a minor version within 6.x; I forget which. It failed to boot, so I just used my other cluster node to bring the VMs back up.
So I didn't dig into the error, as I had to reinstall anyway.
I've reinstalled using proxmox-ve_6.2-1.iso.
All went fine; I haven't done...
I have done the zfs import/export from a live USB to change the root pool name; however, I forget how to update the boot settings so it will boot from the new ZFS pool name.
Can someone remind me?
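The rough procedure I've used is below. This is a sketch only, assuming the usual PVE 6.x ZFS-root layout (`ROOT/pve-1` dataset) and the `rpool` -> `rootpool` rename from this thread; adjust names and pick the GRUB or systemd-boot step to match how the node boots.

```shell
# Sketch, run from a live environment; all commands commented out
# because they modify the target system.

# 1. Import the renamed pool under an altroot and chroot into it:
#   zpool import -f -R /mnt rootpool
#   mount --rbind /dev /mnt/dev
#   mount --rbind /proc /mnt/proc
#   mount --rbind /sys /mnt/sys
#   chroot /mnt

# 2. Point the boot filesystem at the new pool name:
#   zpool set bootfs=rootpool/ROOT/pve-1 rootpool

# 3a. Legacy/GRUB boot: fix root=ZFS=... in /etc/default/grub
#     (GRUB_CMDLINE_LINUX), then regenerate grub.cfg:
#   update-grub

# 3b. UEFI with systemd-boot: fix root=ZFS=... in
#     /etc/kernel/cmdline, then refresh the ESPs:
#   pve-efiboot-tool refresh
```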
They are all Debian containers. It started up that one time when I manually mounted it, so I would assume the file is there?
I would have no reason to think my files/containers are suddenly dead and lost their data just from a Proxmox upgrade.
Sounds like there is still some rather major bug...
Interesting; I did all that, and I see the 105 container is mounted. However, I get a new error about detecting the OS distribution.
lxc-start 105 20190924192126.899 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 105 lxc pre-start with output: unable to...
root@prox2:~# zfs get all rpool/data/subvol-105-disk-1 |grep -i mount
rpool/data/subvol-105-disk-1 mounted no -
rpool/data/subvol-105-disk-1 mountpoint /rpool/data/subvol-105-disk-1 default
rpool/data/subvol-105-disk-1 canmount...
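For a dataset showing `mounted no` like the output above, the usual manual fix is sketched below. The dataset name is the one from the output; the commands are commented out since they change pool state.

```shell
# Check why the dataset is not mounted, then mount it by hand:
#   zfs get canmount,mountpoint rpool/data/subvol-105-disk-1
#   zfs set canmount=on rpool/data/subvol-105-disk-1   # only if it was off
#   zfs mount rpool/data/subvol-105-disk-1

# If zfs mount fails with "directory is not empty", the mountpoint
# directory already contains files; move them aside before mounting.
```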