Accessing Proxmox interface directly on host?

bfred

New Member
Oct 19, 2020
Hi!

I am helping a friend who is running Proxmox with a Windows guest. There are two RAID 1 disks and one 4TB disk for backups. For some reason the Proxmox interface doesn't seem to start, and I have tried a few things:
  • Booting from the USB install disk and using rescue mode lets me see those disks and mount them, though I am not sure that is very useful, except maybe to find out what happened?
  • Booting normally brings up Windows automatically, which goes into rescue mode, but none of the options available do anything. In fact, using the shell mode I can see that Windows cannot see the disks anymore and thus cannot start.
  • When booting normally I cannot seem to access the Proxmox web GUI, as the machine does not seem to ask for an IP address, nor does it have a fixed IP. Using the rescue mode from the USB install key I can get an IP address, but then I can't do much with it.
With this in mind, is there a way to access the Proxmox admin interface from the host machine to manage the VMs and either create a new VM or recover one of the backups on the 4TB drive? I'm fine with terminal commands, I just could not find out how to do that.

And if anyone has another suggestion, that would be very nice too.

Thank you very much.

Fred
 
I used to use a container in combination with directvnc to be able to run a graphical browser from the host console to the Proxmox web GUI.
However, if the problem is the host not getting an IP, such a (rather complicated) setup might not be necessary. Maybe we can debug your network configuration instead?
Can you please post the contents of your /etc/network/interfaces file? And the vmid.conf of the VM (located in /etc/pve/qemu-server/)?
 
Hi again!

Thank you very much for your prompt response. I struggled a bit to actually mount the ZFS pool (I'm not familiar with it at all). After mounting rpool/ROOT/pve-1 I found /etc/network/interfaces, but the /etc/pve folder is empty.
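For reference, this is roughly what I ran from the install USB's rescue shell to get at the old root (I'm not sure every step was strictly needed, and the pool/dataset names may differ on other setups):
Code:
# import the root pool without mounting, relocated under /mnt
zpool import -f -N -R /mnt rpool
# mount the root dataset, then any remaining datasets
zfs mount rpool/ROOT/pve-1
zfs mount -a
# the old root filesystem is now visible under /mnt
ls /mnt/etc/network/interfaces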

So here is interfaces (fixed IP at 192.168.31.100 and gw is correct):
Code:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.31.100
    netmask 255.255.255.0
    gateway 192.168.31.1
    bridge_ports eno2
    bridge_stp off
    bridge_fd 0

So I looked at /var/log/messages and the only problems I found are:
Code:
Oct 20 09:43:00 phdserver1 kernel: [    9.704809] audit: type=1400 audit(1603161780.831:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1822 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.705135] audit: type=1400 audit(1603161780.831:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=1823 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.705138] audit: type=1400 audit(1603161780.831:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=1823 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.705454] audit: type=1400 audit(1603161780.831:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=1819 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.705457] audit: type=1400 audit(1603161780.831:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=1819 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.705459] audit: type=1400 audit(1603161780.831:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=1819 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.706636] audit: type=1400 audit(1603161780.831:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1818 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.706640] audit: type=1400 audit(1603161780.831:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=1818 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.706644] audit: type=1400 audit(1603161780.831:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=1818 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.706647] audit: type=1400 audit(1603161780.831:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=1818 comm="apparmor_parser"
Oct 20 09:43:00 phdserver1 kernel: [    9.726336] new mount options do not match the existing superblock, will be ignored
Oct 20 09:43:00 phdserver1 kernel: [    9.728776] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
[...]
Oct 20 09:43:04 phdserver1 kernel: [   13.448297] e1000e: eno2 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
Oct 20 09:43:04 phdserver1 kernel: [   13.448300] e1000e 0000:00:1f.6 eno2: 10/100 speed: disabling TSO
Oct 20 09:43:04 phdserver1 kernel: [   13.448350] vmbr0: port 1(eno2) entered blocking state
Oct 20 09:43:04 phdserver1 kernel: [   13.448351] vmbr0: port 1(eno2) entered forwarding state
Oct 20 09:43:04 phdserver1 kernel: [   13.448409] IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
Oct 20 09:43:05 phdserver1 kernel: [   14.035081] xhci_hcd 0000:00:14.0: remove, state 4
[...]
Oct 20 09:43:06 phdserver1 kernel: [   15.197018] device tap101i0 entered promiscuous mode
Oct 20 09:43:06 phdserver1 kernel: [   15.213175] fwbr101i0: port 1(fwln101i0) entered blocking state
Oct 20 09:43:06 phdserver1 kernel: [   15.213177] fwbr101i0: port 1(fwln101i0) entered disabled state
Oct 20 09:43:06 phdserver1 kernel: [   15.213206] device fwln101i0 entered promiscuous mode
Oct 20 09:43:06 phdserver1 kernel: [   15.213224] fwbr101i0: port 1(fwln101i0) entered blocking state
Oct 20 09:43:06 phdserver1 kernel: [   15.213225] fwbr101i0: port 1(fwln101i0) entered forwarding state
Oct 20 09:43:06 phdserver1 kernel: [   15.214752] vmbr0: port 2(fwpr101p0) entered blocking state
Oct 20 09:43:06 phdserver1 kernel: [   15.214754] vmbr0: port 2(fwpr101p0) entered disabled state
Oct 20 09:43:06 phdserver1 kernel: [   15.214828] device fwpr101p0 entered promiscuous mode
Oct 20 09:43:06 phdserver1 kernel: [   15.214863] vmbr0: port 2(fwpr101p0) entered blocking state
Oct 20 09:43:06 phdserver1 kernel: [   15.214864] vmbr0: port 2(fwpr101p0) entered forwarding state
Oct 20 09:43:06 phdserver1 kernel: [   15.216427] fwbr101i0: port 2(tap101i0) entered blocking state
Oct 20 09:43:06 phdserver1 kernel: [   15.216429] fwbr101i0: port 2(tap101i0) entered disabled state
Oct 20 09:43:06 phdserver1 kernel: [   15.216481] fwbr101i0: port 2(tap101i0) entered blocking state
Oct 20 09:43:06 phdserver1 kernel: [   15.216483] fwbr101i0: port 2(tap101i0) entered forwarding state
Oct 20 09:43:10 phdserver1 kernel: [   19.798596] vfio-pci 0000:01:00.0: enabling device (0000 -> 0003)
Oct 20 09:43:11 phdserver1 kernel: [   19.902999] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Oct 20 09:43:11 phdserver1 kernel: [   19.903015] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Oct 20 09:43:11 phdserver1 kernel: [   19.926927] vfio-pci 0000:01:00.1: enabling device (0000 -> 0002)
Oct 20 09:43:11 phdserver1 kernel: [   19.970919] vfio-pci 0000:01:00.3: enabling device (0000 -> 0002)
Oct 20 09:43:38 phdserver1 kernel: [   46.611096] fwbr101i0: port 2(tap101i0) entered disabled state
Oct 20 09:43:38 phdserver1 kernel: [   46.611286] fwbr101i0: port 2(tap101i0) entered disabled state
Oct 20 09:43:39 phdserver1 kernel: [   46.962209] fwbr101i0: port 1(fwln101i0) entered disabled state
Oct 20 09:43:39 phdserver1 kernel: [   46.962234] vmbr0: port 2(fwpr101p0) entered disabled state
Oct 20 09:43:39 phdserver1 kernel: [   46.962368] device fwln101i0 left promiscuous mode
Oct 20 09:43:39 phdserver1 kernel: [   46.962370] fwbr101i0: port 1(fwln101i0) entered disabled state
Oct 20 09:43:39 phdserver1 kernel: [   46.979065] device fwpr101p0 left promiscuous mode
Oct 20 09:43:39 phdserver1 kernel: [   46.979067] vmbr0: port 2(fwpr101p0) entered disabled state

So what else do you think I could do now?

Thank you.
 
Any running VM that has a network connection to your vmbr0 should be able to reach the web GUI at https://192.168.31.100:8006/, even when the external connection is not plugged in.
If eno2 is indeed your connected network port, the system should be externally reachable via that same address, unless you locked it out via the built-in firewall.
The /etc/pve/ directory only contains files when you boot the Proxmox host. Also, firewall settings and such can only be changed when this directory contains files, AFAIK.

I don't know how to troubleshoot this issue without being able to boot the Proxmox host and being able to login to the console (or SSH or web GUI) to see and edit files in /etc/pve/.
If you have no console because the Windows VM automatically starts and takes the GPU, you could try changing the kernel boot parameters and removing iommu=on or something?
 
I see.
When booting the machine there is no ping response from 192.168.31.100, and the web connection times out.
I've just left the machine for a stop at home. I'll return in probably an hour and try changing the kernel boot parameters then.

Thank you.
 
I have the Proxmox firewall enabled and I also don't get a ping reply, so this could be normal.
Could you access the web GUI or SSH or login to the console before? What did you change before it stopped working?
 
So yes, before it was broken, it worked ;) Though since it is not my machine at all, I have only very unclear details about how it was working.
I was wondering which GRUB file I should edit: is it the one in the EFI partition (loader/entries/proxmox-5.4.41-1-pve.conf) or the one in /etc/default/grub? The other thing is, since I am editing it from the USB install OS, running update-grub will do nothing to /boot/grub/grub.cfg.

Thank you.
 
During boot, you can edit the entries when the GRUB menu appears. See this Ubuntu example: quickly press e when on the boot menu and change amd_iommu=on or intel_iommu=on to off.
This will prevent VMs from starting, and hopefully you can log in to the Proxmox console and look at the VM and host configuration files in /etc/pve/.
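For illustration only, the kernel line in the GRUB editor looks something like this (the real paths and parameters on your system will differ); change just the iommu parameter for this one boot and then press Ctrl+X or F10:
Code:
# before (example line)
linux /boot/vmlinuz-5.4.41-1-pve root=ZFS=rpool/ROOT/pve-1 ro quiet intel_iommu=on
# after
linux /boot/vmlinuz-5.4.41-1-pve root=ZFS=rpool/ROOT/pve-1 ro quiet intel_iommu=off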

If you have no display when booting Proxmox, changing /boot/grub/grub.cfg has the best chance of working. There might be other kernel parameters that prevent Linux from using the console for output and login; you will need to (temporarily) remove those as well. It is also possible that settings in /etc/modprobe.d/, like blacklisting drivers or binding devices to vfio-pci, are preventing output on the console. In that case you will need to change them and run update-initramfs -u, which, like update-grub, will require you to chroot into the Proxmox system after booting from USB. See this Ubuntu example on how to chroot.
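A minimal sketch of such a chroot from the install USB's rescue shell, assuming the ZFS root pool is called rpool and is already mounted under /mnt as described earlier (adjust names and the bootloader commands to your system):
Code:
# bind the virtual filesystems into the mounted root
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
# enter the installed system
chroot /mnt /bin/bash
# regenerate boot configuration from inside the chroot
update-grub
update-initramfs -u
exit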
 
So I changed intel_iommu=on to off (my prompt offers proxmox-5.4.41-1 or proxmox-5.3.18-3). Turning it off didn't get Proxmox to start any better, so the problem is still here.

I am not sure how a reinstall would actually work. Do you think the new install would automatically find the backups on the 4TB disk, so I wouldn't have to worry about losing everything? As I may have said, I am not especially familiar with Proxmox and don't want to lose my friend's data.

Thank you.
 
A new install will WIPE the disks, so you would need to first copy everything you want to keep! Or remove the disk(s), install Proxmox to a new disk, and then add the original disk(s).

I thought the problem was the Windows VM starting and you did not have a Proxmox console to login? And I hoped that not starting the VM would resolve this.
Can you show the error you get when starting Proxmox? Does it show anything?
 
Actually there is no error; the screen just goes blank. The machine has two video cards: the onboard one and an Nvidia card. Windows outputs on the Nvidia and Proxmox on the onboard card. Because of this problem we have now connected two monitors. Basically the mainboard BIOS shows up, prompting with the usual questions (enter the BIOS, choose a boot option, or do nothing and start). Then GRUB kicks in with a nice Proxmox logo and offers to boot proxmox 5.4 or 5.3; by default it starts proxmox 5.4.
Then... it starts and the screen goes blank. On the other monitor Windows used to come up. Now we can access the Windows rescue screen by pressing some keys at boot time, and we discover that Windows doesn't see any disk.

So in theory we can lose that Windows system, because Proxmox was supposed to make images on the 4TB disk which is also in that machine. Would installing a new Proxmox system provide access to those backups and allow us to restore the latest one? Is there a way to check that they are there?

Thank you.
 
Do you see messages scrolling on the screen connected to the motherboard between GRUB and the screen going blank? The Proxmox console blanks after a minute by default; pressing Enter on the keyboard will give you a login prompt again. This prompt can get lost in all the messages from starting the VMs, so you may need to press Enter once or twice when the messages stop.
But this requires that a keyboard is connected to the host, and it sounds like the keyboard is passed through to the Windows VM. Maybe try (all) other USB ports?
It also requires that the display (and iGPU) of the motherboard is not disabled via kernel parameters (press e in GRUB to look for them) or via files in /etc/modprobe.d/ (which are there when booting from USB). Can you show us all those settings?

Removing the Windows VM will not really help if you see no messages at all on screen between GRUB and it starting. Not starting the VM automatically would help a little, but we need access to /etc/pve/, which requires a console login, SSH, or the web GUI.

If you can mount the disks that are supposed to contain backups, you can check whether there are files somewhere whose names start with vzdump; those are the backups.
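A quick way to check, assuming the backup disk is mounted somewhere under /mnt (the path is just an example):
Code:
# list any vzdump backup archives and their sizes
find /mnt -type f -name 'vzdump-*' -exec ls -lh {} \;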

If you can remove all physical disks and install a new Proxmox on a new disk, we should be able to add the original disks with the VM storage and backups, but we still won't see the original /etc/pve/. You would need to recreate the VM configurations from memory.

Unless we get a login prompt and can inspect and change /etc/pve/, I don't know how to really debug the /etc/pve/storage.cfg settings or the VM configuration. At the moment it sounds like an error in the storage configuration, or a disk that has gone bad (because the Windows VM reports a missing disk). But maybe someone else has a better idea?
 
Unfortunately there is no message scrolling at all. I can mount the disks (well, only the two mirrored 1TB ones so far). I will go and try to mount the 4TB one with the backups.
We're looking for an extra drive to install Proxmox on and see what we can find. That is a good (and very safe) idea!
Really a lot of thanks for your help.

I'll update you once all is done. I'm home now and need to go to my friend's place to continue (had a party last night... a bit tired today :oops:)
 
I just want to interrupt and add a couple of notes:

- Proxmox only wipes the disks that are selected for installation during setup.
If you do not select your 4TB backup drive, it will not wipe the data.

- All the data and the VM configuration are supposed to be in the backup files (the vzdump files avw mentioned). Once you have reinstalled your system, you can configure the backup storage in the web GUI again: click on the storage under the node and select "Content". You'll be able to click on your vzdump file and restore it. Please note that you need to configure a VM disk storage first, which you select during the restore (see the sketch below).
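Purely as an illustration of that restore step, the command-line equivalent looks roughly like this (the storage name, VM ID, and archive name are made up for the example; the real archive name comes from whatever is in your dump folder):
Code:
# restore a vzdump archive as VM 101, placing its disks on the storage "local-zfs"
qmrestore /mnt/backup4tb/dump/vzdump-qemu-101-2020_10_15-03_00_00.vma.lzo 101 --storage local-zfs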
 
OK, so Proxmox 6.2 is installed, and under the node I can see the disk in the Disks menu.

So I imported the backup disk from the terminal, and now I can see it under the node / Disks / ZFS. How can I access those backups and restore them to the original disks?
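For reference, the import was roughly this (the pool name here is an example; use whatever zpool import reports):
Code:
# show pools that can be imported from the attached disks
zpool import
# import the backup pool so it shows up under Disks / ZFS
zpool import -f backup4tb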

Thank you.
 
Using the web GUI, select Datacenter, Storage, Add, Directory and select the directory that you used for backups. Choose the one with a dump subfolder (not the dump folder itself) that contains files beginning with vzdump.
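The command-line equivalent would be roughly this, assuming the imported pool is mounted at /backup4tb (the storage name and path are examples; adjust them to your system):
Code:
# register the directory as a backup storage called "backup4tb"
pvesm add dir backup4tb --path /backup4tb --content backup
# verify that the vzdump archives are visible to Proxmox
pvesm list backup4tb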
 
OK, thank you. I don't think I will have access to the machine for the upcoming week, as my friend is travelling. I'll follow up when he is back. As a side question, is there a way to recover the Proxmox settings from the previous installation? I have not touched the disk yet.
 
I think not. Most of those settings are in /etc/pve/, which is empty unless you get the original Proxmox to boot and (as I said before) can log in, use SSH, or access the web GUI.
Maybe someone else knows how to inspect the database behind /etc/pve/?
 
Or you could also boot an Ubuntu live system or something and try to mount the original Proxmox installation from the disk. Note that /etc/pve itself is only populated at runtime, so on the mounted disk it will be empty; the underlying data lives in a database file instead. But there shouldn't be a lot of relevant config in your case.
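If someone wants to go down that road, the data behind /etc/pve/ is stored in an SQLite database on the root filesystem, and something like this should dump the stored configuration files (a sketch only; the table name is from memory, so verify it before relying on this):
Code:
# copy the cluster configuration database from the old root mounted at /mnt
cp /mnt/var/lib/pve-cluster/config.db /tmp/
# dump the file names and contents stored inside it
sqlite3 /tmp/config.db "SELECT name, data FROM tree;"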
 
