I have a 3-node cluster that is hitting this error.
It was installed from the Proxmox ISO, never from Debian, so there are no Linux kernel packages besides PVE's.
The nodes did have a test Ceph install ONCE, but it was never used, and I removed everything related to it.
I did try to add the repos back and do the upgrades again, but that...
The original was ZFS, but the VM is tiny: just a basic headless Fedora with FreeIPA installed. The backup is 1.57GB of data in total.
Below is the pveversion output:
root@prox:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)...
I have restored 2 other VMs and 1 other CT, which are large, and they went fine. However, this one small LXC container of 1.57GB will not restore; it says it ran out of disk space:
recovering backed-up configuration from 'pbs:backup/ct/104/2021-05-10T01:14:32Z'
Logical volume "vm-104-disk-0"...
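In case anyone else hits the same thing: a rough sketch of restoring from the CLI with an explicit target storage and rootfs size, so the allocation isn't left to the defaults (local-lvm and the 8G size here are placeholders, not my actual setup):
# restore the container, picking the target storage and rootfs size by hand
pct restore 104 pbs:backup/ct/104/2021-05-10T01:14:32Z --storage local-lvm --rootfs local-lvm:8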
That cmdline check helped me fix it. I didn't rename all the entries in grub.cfg when I changed the pool name.
I swear I did! But that fixed it, I'm back up. Thanks :)
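For anyone searching for this later, this is roughly what I had to check (assuming the default ROOT/pve-1 dataset layout under the renamed pool, rootpool in my case):
# every kernel line must point at the renamed pool, e.g. root=ZFS=rootpool/ROOT/pve-1
grep "root=ZFS=" /boot/grub/grub.cfg /etc/default/grub
# regenerate grub.cfg so all menu entries pick up the new pool name
update-grub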
I still get an error that it's unable to mount / because it's not empty?
I renamed my rpool to rootpool, and my data drives are on rpool. (I did this because I couldn't specify the zpool when replicating between cluster members.)
And I see zpool list shows the pools. An exit command gives me another kernel panic.
I just tried that. It shows my ZFS RAID1. However, when I run exit from the initramfs prompt, it then says kernel panic, attempt to kill init (I think?).
I'm going to reboot and try again. But I'm not sure this is the same issue, as it works fine until I run the latest updates.
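For reference, these are the checks I've been running from the (initramfs) prompt before typing exit (rootpool is my renamed root pool; -R /root is just a temporary altroot):
# list the pools the initramfs can actually see
zpool import
# import the root pool without mounting anything, under a temporary altroot
zpool import -N -R /root rootpool
# confirm which pool name the kernel command line is asking for
cat /proc/cmdline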
I had a node that I was updating to a minor 6.x version, I forget which. It failed to boot, so I just used another node in the cluster to bring the VMs back up.
So I didn't dig into the error, as I had to reinstall anyway.
I've reinstalled using proxmox-ve_6.2-1.iso
All went fine, I haven't done...
I have done the zfs import/export from a live USB to change the root pool name; however, I forget how to update the boot settings so it will boot from the new ZFS pool name.
Can someone remind me?
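(Answering my own question for posterity, since the grub.cfg fix above is what sorted it. A rough outline, assuming a legacy/GRUB boot with the default ROOT/pve-1 dataset; UEFI systemd-boot installs edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead.)
# point GRUB_CMDLINE_LINUX at the renamed pool
sed -i 's|root=ZFS=rpool/ROOT/pve-1|root=ZFS=rootpool/ROOT/pve-1|' /etc/default/grub
update-grub
# rebuild the initramfs so early boot also knows the new pool name
update-initramfs -u -k all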
They are all Debian containers. It started up that one time when I manually mounted it, so I would assume the file is there?
I would have no reason to think my files/containers are suddenly dead and have lost their data just from a Proxmox upgrade.
Sounds like there is still some rather major bug...
Interesting, I did all that and I see the 105 container is mounted; however, I get a new error about detecting the OS distribution.
lxc-start 105 20190924192126.899 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 105 lxc pre-start with output: unable to...
root@prox2:~# zfs get all rpool/data/subvol-105-disk-1 |grep -i mount
rpool/data/subvol-105-disk-1 mounted no -
rpool/data/subvol-105-disk-1 mountpoint /rpool/data/subvol-105-disk-1 default
rpool/data/subvol-105-disk-1 canmount...
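For context, this is what mounting it by hand looks like (just a sketch; -a tries every dataset that isn't mounted yet):
# mount the one subvol
zfs mount rpool/data/subvol-105-disk-1
# or let zfs try to mount everything it knows about
zfs mount -a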
Aha! I have a cluster, and 107 is just a replication from another node. LXC 105 is actually on this node, and it is not mounted! None of the ones on this node seem to be. Below is what's in /rpool/data/ and also which IDs are on this node:
root@prox2:~# ls //rpool/data/
subvol-107-disk-1...
root@prox2:~# df /rpool/data/subvol-107-disk-1
Filesystem 1K-blocks Used Available Use% Mounted on
rpool/data/subvol-107-disk-1 5242880 2578176 2664704 50% /rpool/data/subvol-107-disk-1
root@prox2:~# zfs get all rpool/data | grep mount
rpool/data mounted...
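A quick way to see the mounted state of every subvol at a glance, instead of running zfs get per dataset:
zfs list -r -o name,mounted,mountpoint rpool/data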
Sorry, I'm following now!
It is not just directories; it is all the files as well.
For example, I see a ton of my .js and .php files from things installed in these containers.
Also, it's a rather large tree, so I can't get it all to you, so I put a small screenshot of part of the tree that's showing during...
It seems it's all the containers' files. Is this normal? If I rm -rf them, what happens?
root@prox2:~# ls /rpool/data/
subvol-107-disk-1 subvol-109-disk-1 subvol-112-disk-0
subvol-108-disk-1 subvol-110-disk-1 subvol-112-disk-2
root@prox2:~#
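Before rm -rf'ing anything, a sanity check I'd do first (a sketch only: df shows whether the path is the real subvol dataset or just leftover files sitting on the parent, and moving them aside instead of deleting means nothing is lost if they turn out to be current; subvol-108-disk-1 is just the example I picked):
# if Filesystem shows rpool/data (the parent) instead of the subvol,
# the directory is only leftover files blocking the real mount
df /rpool/data/subvol-108-disk-1
# move the leftovers aside rather than deleting, then mount the real dataset
mv /rpool/data/subvol-108-disk-1 /rpool/data/subvol-108-disk-1.stale
zfs mount rpool/data/subvol-108-disk-1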
Cache is running, stop is not, and the mount did not succeed; see below.
root@prox2:~# systemctl status -l zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor prese
Active: active...
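In case it's the usual stale-cachefile issue after renaming the pool, this is the fix that gets suggested (a sketch; rpool and rootpool are the pool names from this thread):
# make sure both pools are recorded in the cache file the import service reads
zpool set cachefile=/etc/zfs/zpool.cache rpool
zpool set cachefile=/etc/zfs/zpool.cache rootpool
# rebuild the initramfs so the early-boot copy of the cache matches
update-initramfs -u -k all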