Proxmox VE 6.4 available

Is it correct that

a) file replication is also available within PVE, not PBS
b) some filesystems cannot be mounted when doing file replication?

With some of my VMs it works perfectly, for others I only see the grub partition.
 
Hi,

I have been running Proxmox on an HP Microserver Gen8 since version 6.0 without a single issue; upgrades up to version 6.3 were flawless.
Today I decided to upgrade to 6.4 through the "PVE pve-no-subscription repository provided by proxmox.com", as I did in the past for 6.0->6.1 and so on, and ran into the following issue:
Code:
# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.4.106-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
/dev/disk/by-uuid/0F33-A6E6 contains no grub directory - skipping
/dev/disk/by-uuid/0F34-2277 contains no grub directory - skipping

The system did not boot at first, so I cycled the power and it booted on the second try without any assistance from my side, which is weird.

Anyone had this?
 
Would be nice if the file restore also allowed restoring files from "host" type backups, i.e. direct file backups made with proxmox-backup-client

I know this can be done from the PBS GUI, but not if the backups are encrypted :)
 
I am running proxmox on a HP Microserver Gen8 since version 6.0 without a single issue [...] The system did not boot at first so I cycled the power [...] Anyone had this?
Found the issue - the boot partitions, in my case sda2 and sdb2 (I have 2 drives in a ZFS RAID1), did not have a "grub" folder, nor the "initrd.img-5.4.106-1-pve" and "vmlinuz-5.4.106-1-pve" files. Also, for some reason they were configured for UEFI, which the Gen8 does not have:
Code:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
0F33-A6E6 is configured with: uefi,grub
0F34-2277 is configured with: uefi,grub
Rebuilding the boot partitions (using this guide: Host_Bootloader) fixed the issue.
This is after:
Code:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
284D-3446 is configured with: grub
2CA4-A2F3 is configured with: grub
I had to manually remove the old IDs from /etc/kernel/proxmox-boot-uuids.

I hope this helps other folks that run into the same situation.
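For reference, the repair from the Host_Bootloader page boils down to something like the following. This is only a sketch: /dev/sda2 and /dev/sdb2 are the boot partitions of my setup, and the UUIDs are the ones from my system, so adjust both to yours before running anything.

```shell
# Re-format and re-initialise each boot partition (partition names
# are from my setup -- adjust to your disks):
proxmox-boot-tool format /dev/sda2 --force
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool format /dev/sdb2 --force
proxmox-boot-tool init /dev/sdb2

# Drop the UUIDs of the old, now re-formatted partitions from the
# bookkeeping file (these are the UUIDs from my system):
sed -i '/^0F33-A6E6$/d;/^0F34-2277$/d' /etc/kernel/proxmox-boot-uuids

# Copy the current kernels/initrds onto the freshly initialised partitions:
proxmox-boot-tool refresh
```

After that, `proxmox-boot-tool status` should list only the new UUIDs, each configured with `grub` on a legacy-boot system like the Gen8.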
 
Would be nice if the file restore also allowed restoring files from "host" type backups, i.e. direct file backups made with proxmox-backup-client

I know this can be done from the PBS GUI, but not if the backups are encrypted :)
If CLI is OK, then you can look into the proxmox-file-restore CLI tool, which all of this is based on.

But if CLI is OK, you already have the proxmox-backup-client mount command, which can be used to mount and explore any file-level backup interactively.
A general GUI for the client is on the roadmap, albeit still a bit in the distance.
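To illustrate the mount workflow: a minimal sketch, where the repository, snapshot name, archive name and key path are all placeholders, not values from any real setup.

```shell
# Placeholders throughout: repository, snapshot and key path are examples.
export PBS_REPOSITORY='backup@pbs@pbs.example.com:store1'

# Mount a host-type (file-level) backup and browse it like a normal
# directory; --keyfile is needed when the backup is encrypted:
mkdir -p /mnt/restore
proxmox-backup-client mount host/myhost/2021-04-28T10:00:00Z root.pxar /mnt/restore --keyfile /root/backup.key

# ...copy out the files you need, then unmount:
umount /mnt/restore
```

Snapshots are addressed as `<type>/<id>/<RFC3339 timestamp>`; `proxmox-backup-client snapshot list` shows the exact names available in the repository.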
 
a) file replication is also available within PVE, not PBS
Does file replication mean file restore? If not, can you please explain what you meant here.
And if yes, then yes, file restore is available directly in Proxmox VE. As it happens on the client side and we have a full-blown hypervisor at hand, we can even do a bit more there than in PBS natively, for example file restore for block-level (VM) backups or file restore from encrypted backups.

b) some filesystems cannot be mounted when doing file replication?
Yes, for now we support most single-disk ones, as long as they are on a partition.
But as already mentioned in this thread, full-disk and more complex, possibly multi-disk-spanning filesystems like ZFS, LVM, ... are in the works.
 
I did a raidz2 install of PVE 6.3 ~1 month ago.

Should I do a clean install of 6.4 to gain the ZFS boot improvements?
Do you use EFI to boot? (I.e., does ls /sys/firmware/efi exist on your booted system?)
If so, then there's no need to change anything; we already optimized booting with ZFS under EFI in Proxmox VE 5.4 (2019).

If not, then yes, re-installation would give you the new, improved and safer legacy boot with ZFS as the root filesystem.
But we'll see if we can come up with a short how-to for changing that live on existing installations, as it is in general possible without re-installation.
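The quick check mentioned above, as a copy-pasteable snippet:

```shell
# Detect the firmware boot mode of the running system: the kernel only
# creates /sys/firmware/efi when it was booted via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi
```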
 
I just tried an install of this on an Intel NUC 11 Pro (NUC11TNKi3).

It gets stuck at starting a root shell on TTY3. If I switch to TTY3, there is an Xorg error about not being able to run in framebuffer mode.

All the troubleshooting I can find about this mentions AMD Ryzen and AMD gfx cards :/

Edit: no solution yet. Advice on the forums is: a) move the gfx card to another slot (mine is embedded), b) use a different monitor (?), c) wait 30 mins (?), d) boot Proxmox using nomodeset, e) fiddle around with grub.cfg, f) fiddle around with xorg.conf driver loading, g) fiddle around with xorg.conf resolutions, h) install Debian, i) install Xorg on another host (?)

Kind of crazytown time...
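For completeness, the nomodeset route from (d)/(e) looks roughly like this; a sketch using the standard GRUB files, nothing Proxmox-specific, so adapt as needed.

```shell
# One-off: at the installer's boot menu, press 'e' on the install entry,
# append 'nomodeset' to the line starting with 'linux', then boot with Ctrl-X.

# Permanent, on an already installed system: add it to the default kernel
# command line and regenerate the GRUB config:
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 nomodeset"/' /etc/default/grub
update-grub
```

nomodeset disables kernel mode setting, so the GPU stays in basic framebuffer mode; that often gets a broken graphical installer to start, at the cost of native resolution.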
 
I just tried an install of this on an Intel NUC 11 Pro (NUC11TNKi3). It gets stuck at starting a root shell on TTY3. [...]
or alternatively, use the Debian installer and install PVE on top of Debian Buster
 
or alternatively, use the Debian installer and install PVE on top of Debian Buster
Thanks @fabian! As others have pointed out in many other threads with the same issue, this is not preferable, as the Proxmox installer handily takes care of a few things during install that Debian does not. It is also a less clean method.

This issue seems to be quite prevalent - couldn't we have a non-X-server install method? An unattended install path, for example, would be very handy for headless servers and would be an easy workaround for this issue.
 
Welp, I ended up taking the ssd out and putting it in an older machine to install, then putting it back. Hoping I don't need to ever reinstall proxmox on this node...

Edit: Take that back, networking is all messed up :/
Edit edit: kernel 5.11 fixed the issue. I guess NUC 11 hardware is new enough that the gfx card prevents Proxmox from being installed, and the NIC works just enough to look like it's working, but no traffic routes within the VMs!
 
a text-mode installer is something we'd really like to have, but it requires disentangling some parts of the installer first. it is on our todo list!
 
a text-mode installer is something we'd really like to have, but it requires disentangling some parts of the installer first. it is on our todo list!
Cool!

By the way, did you take a look at encryption as well? Booting from an encrypted partition (ZFS / LUKS)?

So that you actually have to enter the partition's encryption key via a terminal (serial / keyboard / server management) or alternatively via SSH, and only then does the boot process continue?

I know there are some folks out there manually putting this together by using PVE on top of Debian. I think some approaches could be discussed within the community.
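The do-it-yourself variant on top of Debian usually looks something like this: a sketch for a LUKS-encrypted root, assuming the dropbear-initramfs and cryptsetup-initramfs packages; this is not an official Proxmox feature.

```shell
# Install a tiny SSH server into the initramfs (assumption:
# LUKS-encrypted root on a Debian Buster based system):
apt install dropbear-initramfs

# Authorise your SSH public key for the pre-boot environment:
cat /root/.ssh/id_ed25519.pub >> /etc/dropbear-initramfs/authorized_keys

# Bake the changes into the initramfs:
update-initramfs -u

# On the next boot: ssh root@<host-ip> into the initramfs environment and
# run 'cryptroot-unlock' to enter the passphrase; boot then continues.
```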
 
or alternatively, use the Debian installer and install PVE on top of Debian Buster
I'm also having issues with a Z590 based board here :(

Why can the Debian installer boot into graphical installation but the Proxmox one cannot?
Can't you rebase on the upstream Debian installer?

I think Proxmox itself does not recommend installing Proxmox on top of Debian - I am a bit confused now.
Isn't there a way to make it work or to bypass the graphical installer? Or maybe a kernel-5.11-based installer?
 
Why can the Debian installer boot into graphical installation but the Proxmox one cannot?
because it's a completely different project and tool.

Can't you rebase on the upstream Debian installer?
no, there's no relation between the Debian installer and the Proxmox VE one. But as said, a TUI is planned for the PVE one.

I think Proxmox itself does not recommend installing Proxmox on top of Debian - I am a bit confused now.
We recommend preferring the Proxmox VE installer, but if that fails on your HW and ZFS as the root FS is not a hard requirement for your setup, then using the Debian installer is totally fine.
Isn't there a way to make it work or to bypass the graphical installer?
No, currently not. That use case is covered by the Debian installer.

Or maybe a kernel-5.11-based installer?
That sadly does not fix all graphical issues, so IMO going for the planned terminal user interface is the better way to solve this once and for all.
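For anyone taking the Debian route in the meantime, the documented sequence on a minimal Debian Buster install is roughly the following; the repository and key URLs are the standard Proxmox 6.x ones, check the install guide for the current values.

```shell
# Assumption: the machine's hostname resolves to its real IP in /etc/hosts,
# which the PVE stack requires.
echo 'deb http://download.proxmox.com/debian/pve buster pve-no-subscription' \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
```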
 
Hi @t.lamprecht ,

Thanks for your reply, I understand.

From what I read, a 5.11-based installer would help with issues caused by the latest drivers not being available for new hardware. That seems to be the root cause for the NUC and my Z590 issue.
Of course, a non-GUI installation option would solve it more broadly and fundamentally.

I'll go with Debian then as I don't need a ZFS root fs. Thanks for your help.
 
after applying all updates - install the `pve-kernel-5.11` meta-package.
There were updates for the kernel (pve-kernel-5.4.114-1) yesterday, but not for pve-kernel-5.11. You said we could opt in to 5.11 in the changelogs. Why isn't the optional kernel updated?
 
