blank/black/empty console issue since update

Did you enable nesting for those LXCs? systemd in newer guests like Debian 11 may require it. Starting with PVE 7, all LXCs are created with nesting enabled by default, but in PVE 6 the default was nesting disabled. So if you created those LXCs on PVE 6, nesting is probably not enabled.

Also keep in mind that not all LXCs will still run on PVE 7: PVE 7 dropped cgroupv1 support, so the guest must support cgroup2. PVE 6 supported both cgroupv1 and cgroup2.
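If you want to check the points above from the host shell, something like this should work (the VMID 101 is only an example, substitute your own container's ID):

```shell
# Check which cgroup hierarchy the PVE host is running;
# a pure cgroupv2 host (PVE 7+) prints "cgroup2fs":
stat -fc %T /sys/fs/cgroup

# Enable nesting on an existing container, then restart it
# so systemd inside the guest picks up the change:
pct set 101 --features nesting=1
pct stop 101 && pct start 101
```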
 
Any solutions to this? I found the same issue using debian LXC. Ubuntu is fine.
An easy solution is to change the console mode from "tty" to "shell": select the CT ===> Options ===> Console mode ===> change it to "shell" and you'll see a working shell.
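The same change can be made from the host CLI instead of the GUI (again, 101 is a placeholder VMID):

```shell
# Switch the container's console mode from the default "tty"
# to "shell"; the console then attaches a shell directly
# instead of waiting for a getty on a tty device:
pct set 101 --cmode shell
```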
 
I know this is oldish, but for anyone else who gets here by Googling the issue: with the debian-11-standard_11.3-1_amd64.tar.zst CT image, I had this same issue.
Changing the Console mode to /dev/console (I guess a tty isn't available until login kicks off?) showed systemd waiting on "A start job is running for Raise network interfaces" for 5m1s before getting to the login prompt.
Following the guidance here of avoiding DHCP on IPv6 and/or changing the timeout allowed for a timely start.
 
Thanks. I've just tested this and it seems to be the issue. I created a Debian 11 CT and set the IPv6 to 'static', started the container, and got a login prompt almost immediately. Then I stopped the CT and changed the IPv6 to DHCP, started the CT, and it hung there for a few minutes before giving me a login prompt. Changed it back to static and it worked fine again.
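For reference, the equivalent of the GUI's "static" setting from the host CLI would look something like this (VMID, bridge name, and the IPv6 address are placeholders; keep your own existing values for the other fields):

```shell
# Give net0 a static IPv6 address instead of DHCPv6, so the
# container's boot no longer waits on a DHCPv6 reply:
pct set 101 --net0 name=eth0,bridge=vmbr0,ip=dhcp,ip6=fd00::101/64
```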
 
Thank you, this also solved the same issue I was experiencing.
 
I had a very similar issue when upgrading from 6.4 through to 8.1. None of my containers displayed anything on the console, but they booted up and I could ssh in. The above fix of switching the console mode to /dev/console worked as stated (and many thanks for the posted solution :)). However, I wanted to know why existing containers had this issue while new ones created from the exact same template didn't.

Basically, systemd wasn't starting any agetty processes for the default two ttys. For some reason the relevant directory and symlink were missing in the container. To reinstate the setup as in a newly created container, as root I ran:

Code:
# recreate the directory systemd uses to pull getty units into getty.target
cd /etc/systemd/system
mkdir getty.target.wants
cd getty.target.wants
ln -s /lib/systemd/system/serial-getty@.service serial-getty@.service
reboot

The containers in question were Ubuntu. Granted I may have done something weird with those containers by toggling settings on the PVE panel way back when, but they are currently using the default settings. Anyway just in case it helps others...
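To verify that the fix took effect after the reboot, running something like this inside the container should show the symlink in place, getty.target active, and agetty processes running:

```shell
# Confirm the wants directory and symlink now exist:
ls -l /etc/systemd/system/getty.target.wants/

# getty.target should report as active after boot:
systemctl status getty.target

# And agetty processes should be running for the ttys:
ps aux | grep '[a]getty'
```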
 
It seems that just creating the getty.target.wants directory is enough.
 