@Stoiko Ivanov What can I say? Good hunch!! :D The message is still there, but after a few seconds the system boots normally. Thank you very much for the help!! Much appreciated! :)
@Stoiko Ivanov That's exactly the problem - nothing follows after this message; it's stuck there :(
cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs
cat /proc/cmdline
initrd=\EFI\proxmox\5.15.17-1-pve\initrd.img-5.15.17-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs...
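For anyone comparing those two files on a UEFI/ZFS-root install: edits to /etc/kernel/cmdline only reach the actual boot entries on the ESP after they are regenerated. A minimal sketch, assuming proxmox-boot-tool manages the ESPs (as it does on a default ZFS-root PVE install):

```shell
# Re-copy kernels, initrds and the kernel command line from
# /etc/kernel/cmdline to all registered ESPs
proxmox-boot-tool refresh

# Sanity-check which ESPs are registered and which kernels are synced
proxmox-boot-tool status
```

After a refresh and reboot, /proc/cmdline should match what /etc/kernel/cmdline specifies.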
@Stoiko Ivanov First of all, thank you very much for the response, much appreciated!! Honestly, on Sunday morning I decided to recreate the whole Proxmox host.
I couldn't find anything else to solve my problem - I stumbled upon the 1st link you sent and also saw a post somewhere about 'quiet', but I don't know...
Dear Community,
I made a simple mistake (I thought I was writing in a container as root, but it turned out it was the Proxmox host) and typed apt update && apt dist-upgrade together with apt autoremove.
Right now, after a host reboot, I am not able to boot; I get the error -> efi stub loaded initrd from command...
@Dunuin I checked and there is nothing more there, only the same error which is created during backup :( I installed proxmox-backup-client in one of the LXCs which couldn't be backed up, and I was able to send a backup to PBS. Currently only the automatic backup option from the PVE GUI cannot be processed... I...
Dear Community,
Another day and I created another problem for myself :eek:
I have a Proxmox Backup Server which is running as a VM on my node. I want to back up three of my current LXC containers to PBS (there is a mounted SSD which is used only by the server). Sadly only 1 of 3 containers is able to...
OK, that's what I also read - not to mix drives in a pool - but at some point I saw everyone doing ZFS mirrors with their HDDs and thought it would somehow work out...
OK, thank you for clearing that up! I'm still new and have moments where I get really confused by what I see everywhere...
Dear Community,
Thanks to your great help and the tutorials I found here, I managed to successfully natively encrypt ZFS running Proxmox (disk no. 1, NVMe), then add my storage (disk no. 2 -> 12 TB) with
parted /dev/sda mklabel gpt
parted -a opt /dev/sda mkpart primary ext4 0% 100%
mkfs.ext4 -L...
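For anyone following along, the rest of that sequence typically looks like the sketch below. The label storage and the mount point are placeholders I'm introducing here, not the ones from the original post:

```shell
# Format the new partition with a filesystem label
# ("storage" is a placeholder label, pick your own)
mkfs.ext4 -L storage /dev/sda1

# Mount it and make the mount persistent across reboots
mkdir -p /mnt/storage
mount /dev/sda1 /mnt/storage
echo '/dev/disk/by-label/storage /mnt/storage ext4 defaults 0 2' >> /etc/fstab
```

Mounting by label (rather than by /dev/sdX) keeps the fstab entry stable if device names shuffle between boots.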
@fabian
Of course you were right! :) Before setting the mountpoint with zfs set mountpoint=/ rpool/ROOT/pve-1 I first had to remove the copied root pool -> zfs destroy -r rpool/copyroot - and then I have only the mountpoint ->
NAME USED AVAIL REFER MOUNTPOINT
rpool...
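So, for anyone in the same spot, the order that worked (these are the exact datasets from my setup above):

```shell
# Remove the temporary copy of the root dataset first
zfs destroy -r rpool/copyroot

# Then point the real root dataset at /
zfs set mountpoint=/ rpool/ROOT/pve-1

# Confirm the resulting mountpoints
zfs list
```

Doing it in the other order fails because two datasets can't both claim / as their mountpoint.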
Kk, thank you!! I will do a secure erase of the NVMe and do a clean install again; hopefully this time everything works out. I'll let you know later :)
EDIT: @fabian
Just to be 100% clear -> following this instruction https://gist.github.com/yvesh/ae77a68414484c8c79da03c4a4f6fd55
Correct commands should be...
@fabian
root@alexpiesel:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16307412k,nr_inodes=4076853,mode=755,inode64)
devpts on /dev/pts type devpts...
@fabian
root@alexpiesel:~# lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- whoami
cmd/lxc_usernsexec.c: 417: main - Operation not permitted - Failed to unshare mount and user namespace
cmd/lxc_usernsexec.c: 462: main - Operation not permitted - Failed to read from pipe file...
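That "Operation not permitted" from the unshare step usually means unprivileged user namespaces are disabled or the ID ranges aren't mapped. A couple of hedged first checks, assuming a Debian-based host like PVE:

```shell
# Debian-specific sysctl: 1 means unprivileged user namespaces are allowed
cat /proc/sys/kernel/unprivileged_userns_clone

# Verify the subordinate UID/GID ranges used by the -m mappings exist
grep root /etc/subuid /etc/subgid
```

If the mappings are missing, the u:0:100000:65536 / g:0:100000:65536 ranges from the command above would need matching root:100000:65536 entries in those files.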