Install PVE/PBS from ISO on Hetzner without KVM (Tutorial)

Jul 26, 2021
Hi guys,

I'm new to the forum and this is my first post. I want to share my working setup for installing PVE/PBS from ISO on Hetzner dedicated servers without the need to order a KVM console. I know there are already some guides, but many of them are incomplete or don't work.

Short explanation: One advantage of installing from ISO is, for example, being able to set up ZFS RAID on the boot drive. Many of the Hetzner servers have only two disk drives. When installing Debian 10 with Hetzner's installimage, it's only possible to use Linux software RAID, but in my opinion the built-in ZFS is more powerful. On the other hand, with this setup it's possible to install PVE7/PBS2 directly, without installing Debian 10 first, then doing a dist-upgrade to Debian 11 and finally installing PVE7/PBS2. I personally don't like that kind of installation and prefer the "clean way" with the ISO. So those are the reasons I'm using this setup.

The example below describes the procedure for PVE7 on a Hetzner AX machine. For other versions, please replace the name of the ISO file so it fits the product you want to install. I've tested PBS2 as well; it should also work with PVE6 and PBS1.

Install PVE7:

- boot your server into the Hetzner rescue system (in the Robot console, activate rescue + reset the server)
- connect via ssh to the rescue system
- if your server was already in use, I recommend starting clean and wiping all disks (see the example below)
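
A minimal sketch of wiping the drives, assuming the two NVMe device names used in this example (adjust them to your system):

Code:
wipefs -a /dev/nvme0n1
wipefs -a /dev/nvme1n1
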
- download PVE7 ISO to the rescue system:

Code:
wget http://download.proxmox.com/iso/proxmox-ve_7.0-1.iso

- right after connecting to the rescue system, information about the attached drives and their device names is displayed
- replace the device names in the example below with the device names of your system. My server has two NVMe disks.
- start a VM that boots from the PVE ISO file and map the physical drives of your server into it:

Code:
qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 -boot d -cdrom ./proxmox-ve_7.0-1.iso -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio -vnc 127.0.0.1:1

- for security reasons the VNC output of the VM is bound to the localhost IP, so you have to establish an SSH tunnel in order to access the machine
- if you are using PuTTY, go to Connection->SSH->Tunnels and add a tunnel with source port 9501 + destination 127.0.0.1:5901 (the plain OpenSSH equivalent is shown below)
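
If you are connecting from a Linux or macOS shell instead of PuTTY, the same tunnel can be opened with plain OpenSSH (the server IP is a placeholder):

Code:
ssh -L 9501:127.0.0.1:5901 root@<your-server-ip>
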
- open the tunnel in a second SSH session; once it is established, connect your VNC client to 127.0.0.1:9501
- you will now see the PVE setup screen. Make your decisions on all the setup options (for example, create a ZFS RAID1) and finish the setup procedure
- once the installation is finished, the VM will boot into the PVE installer again
- press Ctrl+C in the first SSH session to terminate the VM (this can be done without damage at this point)
- if you started the physical server at this time, it would not be reachable; one additional step has to be done first
- so start the VM a second time, but without booting the ISO image:

Code:
qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio -vnc 127.0.0.1:1

- the freshly installed PVE now boots inside the VM for the first time
- because the network settings inside the VM differ from those of the physical machine, networking will not work yet, so we have to edit the configuration
- in the VNC session, log in to PVE with the password you chose during the installation
- open the network configuration file:

Code:
nano /etc/network/interfaces

- for a routed network setup the default vmbr0 bridge should be removed (I think most people use PVE on Hetzner with a routed setup and subnets)
- also, the network adapter name inside the VM differs from the one on the physical machine. In my case the adapter was named ens3 in the VM but must be named enp41s0 on the physical host
- determine the correct adapter name for your machine and edit the configuration so it looks something like this:

Code:
auto lo
iface lo inet loopback

auto enp41s0
iface enp41s0 inet static
        address x.x.x.x/xx
        gateway x.x.x.x

- it's probably a good idea to add at least one bridge (e.g. vmbr0) for the VM network right away; a minimal example follows below
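
A minimal sketch of such a bridge (the 10.0.0.0/24 subnet is just an example; pick whatever private range you want):

Code:
auto vmbr0
iface vmbr0 inet static
        address 10.0.0.254/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
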
- save the network config, do a clean shutdown of the system from the VNC shell, and wait until the VM has stopped
- reboot the physical server from the rescue shell
- after a short while you should be able to access the web GUI via the public IP (https://x.x.x.x:8006) and finish your setup
- don't forget to subscribe to the fantastic Proxmox products to take advantage of the stable enterprise repositories :-)

If you think there is something that could be optimized or done in a better way, please feel free to let me know your suggestions.

Please also let me know if you are interested in a detailed look at my network configuration. I'm using a routed setup with a floating subnet and internal + external networks. On the external network I'm distributing the public IPs via DHCP to VMs that should have direct public access.

This is, by the way, one piece of a complete cloud workplace solution for small businesses, based on Proxmox and the power of several other Linux products. In my case, it needs only one commercial license for a Windows terminal server that runs the applications. All the other parts, like the domain controller, file server, firewall and VPN, are built on open source and easy to administrate. So if anyone needs to build something similar, don't hesitate to get in touch with me. I would be happy to share my work :-)

Best Regards
Martin
 
I do a similar thing to get Proxmox onto servers in datacentres, installing from the rescue OS in QEMU.

My suggestion is regarding the network interfaces: the old naming scheme is consistent, you know the first NIC will always be eth0, the second eth1, etc.

So I tend to add net.ifnames=0 biosdevname=0 to the kernel cmdline, and then change the network device in the interfaces file accordingly (usually eth0). A rough example follows below.
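
A sketch of how that could be done on a GRUB-booted install (on ZFS/UEFI installs managed by proxmox-boot-tool, the cmdline lives in /etc/kernel/cmdline and is applied with proxmox-boot-tool refresh instead):

Code:
# add the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g.
# GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
nano /etc/default/grub
update-grub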

You can also avoid the second boot: do a debug install, and when you select reboot it will instead just dump you back to the shell; import the pool, do any edits you want, then export it when you're done (roughly as sketched below).
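
Roughly, the pool dance from the installer's debug shell could look like this (a sketch; rpool is the default pool name the PVE installer creates):

Code:
zpool import -f -R /mnt rpool
nano /mnt/etc/network/interfaces   # or vi, whichever editor is available
zpool export rpool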
 
Many thanks for your suggestions. I tried the old naming scheme in one of my first attempts, but for some reason it didn't work for me. So if this is working for you, I'm pretty sure it was my mistake.
 
Thank you for this idea, it worked very well. Just playing with net.ifnames=0 biosdevname=0 also required a KVM console to connect in my case. But I am very happy that it is now running in this nice setup with ZFS.
 
Hi there,

thanks for your reply. Just a little update to my HowTo in the first Post:

For a couple of weeks I have been experiencing trouble during the setup procedure - sometimes the installer crashed and rebooted while choosing the disk setup. After some testing I figured out how to fix this issue and modified the qemu command with a different machine type:

Code:
qemu-system-x86_64 -machine pc-q35-5.2 -enable-kvm -smp 4 -m 4096 -boot d -cdrom ./proxmox-ve_7.0-1.iso -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio -vnc 127.0.0.1:1

This works very well for me.
 
Same problem with proxmox-ve_7.2-1.iso.
I solved the issue by installing from proxmox-ve_7.1-2.iso instead.
 
The same happens to me; even if I use the 7.1 version instead, the installation still randomly crashes.
 
Try it as @MaLe mentioned:
Bash:
qemu-system-x86_64 -machine pc-q35-5.2 -enable-kvm -smp 4 -m 4096 -boot d -cdrom ./proxmox-ve_7.0-1.iso -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio
I think the crashes come from the -hda/-hdb options.
Try to use -machine and -drive instead of -hda.
 
Thanks, but I actually used his suggestion. Same thing happens.
 
Yes, I can truly confirm. We are running more than 30 machines with PVE on Hetzner servers (AMD & Intel series) without any issues for about 3 years. All these machines were installed with this procedure.

BTW: We are currently experiencing no issues on AMD machines during setup.
 
  • Like
Reactions: Jpppb
First off, thanks a lot! I've been pulling my hair out over this but can finally confirm your install method works. Although I had a small issue with VNC in the second QEMU run (without the CD attached). The solution provided by the GitHub project Ariadata/proxmox-hetzner for that second run worked in my case (SSH instead of VNC; I think I had used the wrong password, been at it for 13 hours now haha).

Final question: I cannot ping any public IP or domain (so it's probably DNS, as I've gathered from the forums). Would you mind sharing your (anonymized) PVE host config? I've tested the config without a bridge as described above, and with a bridge as mentioned in the GitHub link, but cannot seem to get a connection out.

Many thanks for the help you already provided

Edit: I will move this to a separate post or to a relevant forum if requested. I didn't want to provide too much info and take over the thread.
 
No concerns from my side; I think we are still on topic in this thread. I have various network configurations for Hetzner. This one works without the need for an additional single IP or IP net: it routes the outgoing connections from the VMs over the host IP, and with port forwarding you can make VMs reachable from the internet. Please make sure you secure it with some PVE firewall rules.

First you have to figure out the altname of the network adapter in the Hetzner rescue console, because during the installation PVE used the adapter name from the QEMU/KVM environment. You can check it with ip a. In my case it's enp35s0.

Then activate IP forwarding: in /etc/sysctl.conf set net.ipv4.ip_forward=1 and activate it with sysctl -p.
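
For example:

Code:
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p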

Finally, this is an example of the /etc/network/interfaces:

Code:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto enp35s0
iface enp35s0 inet static
        address 1.2.3.4/26
        gateway 5.6.7.8
#WAN Server

# The VM bridge
auto vmbr0
iface vmbr0 inet static
        address 10.0.0.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
#LAN VM Net

        # NAT enable
        post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o enp35s0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o enp35s0 -j MASQUERADE

        # conntrack zones for outgoing connections
        post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

        # Example for port forwarding: HTTP from host port 80 to VM 10.0.0.250 on port 80
        post-up iptables -t nat -A PREROUTING -i enp35s0 -p tcp --dport 80 -j DNAT --to 10.0.0.250:80
        post-down iptables -t nat -D PREROUTING -i enp35s0 -p tcp --dport 80 -j DNAT --to 10.0.0.250:80

You can use isc-dhcp-server on bridge vmbr0 to provide IP addresses from the 10.0.0.0 net to the VMs. Please use 10.0.0.254 as the gateway. Let me know if you need help with the configuration.
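
A minimal sketch of what that isc-dhcp-server configuration could look like (the address range and DNS server are just placeholders):

Code:
# /etc/dhcp/dhcpd.conf
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    option routers 10.0.0.254;
    option domain-name-servers 1.1.1.1;
}

# /etc/default/isc-dhcp-server
INTERFACESv4="vmbr0"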
 
Has anyone tried this method with Proxmox 8? I am unable to get a new AX-51 to boot after installing and changing the network configuration.

The server goes straight into the BIOS and seems unable to find the bootloader on the NVMes.

Installing Proxmox 8 by using a KVM is also unsuccessful. The installer throws an "Unable to unmount ZFS" error after starting the installation.
 
But you can get a free physical KVM console for 3 hours...

That means: simply mount the ISO and install PVE8 how you want...
Then go into the PVE GUI and install OPNsense or pfSense.
Don't forget to pass through a vmbr virtio NIC.
Configure that NIC as LAN, with the network/IP you want...
Enable SSH access inside OPNsense...

Shut down the VM, pass through your physical NIC and enable autostart.

Via the KVM, blacklist the driver of your physical NIC (see the sketch below) and reboot the host.
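
A rough sketch of the blacklisting, assuming the NIC uses the igb driver as an example (check the actual module with lspci -k first):

Code:
echo "blacklist igb" > /etc/modprobe.d/blacklist-nic.conf
update-initramfs -u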

When it has started again, OPNsense will autostart; then you simply SSH into your OPNsense.

In the CLI, configure the physical NIC as WAN and set your correct IP, or simply enable DHCP, since Hetzner provides an IPv4 DHCP server on WAN...

Then enter the OPNsense shell and disable the firewall with "pfctl -d", if I remember correctly.

Then you can simply access your OPN/pfSense normally via the web GUI and configure everything to your liking.
Like setting up the WAN interface properly, IPv4 and IPv6 as DHCP/DHCPv6...

On the LAN interface, your desired LAN network, whatever RFC1918 range you want...
On the LAN interface, IPv6 as static with your Hetzner IPv6 but with ::1 at the end and a /64 network...
Edit the dhcpv6_wan gateway in the Gateways tab of OPNsense to fe80::1.

Enable the DHCPv4 server and configure it to your liking...
Enable the DHCPv6 server + router advertisements, or only router advertisements as stateless, whatever you like and want to configure.

Don't forget to edit your /etc/network/interfaces on Proxmox, while you still have KVM access, to the correct IP/subnet and gateway...
And set "bridge-ports none", as you have no physical port... or rather, the physical port is passed through to OPNsense (roughly as sketched below)...
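
Roughly, the host side could then look like this (addresses are placeholders; the gateway is the OPNsense LAN IP):

Code:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports none
        bridge-stp off
        bridge-fd 0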

And reboot the host once your opnsense or pfsense is halfway configured to apply the host network settings...

Set up a WireGuard connection in OPNsense, VPN into it, and you can access Proxmox and every VM normally, as if you were in the "LAN" of your Hetzner server...

You can even block GUI access on WAN, because you can access OPNsense through the VPN...
But I usually only set up 2FA and leave the GUI accessible, in case something happens and I don't have VPN...
2FA with a good password is secure enough.

-----
You don't need any weird iptables modifications with an RFC1918 address on the WAN interface inside OPN/pfSense...
I don't even understand why people do that on Hetzner servers, where you get free physical KVM access anyway...

You get the KVM access if you click on your server inside the Hetzner Robot; in the support section there is an option to request KVM access for 1/2/3 hours.

Cheers
 
By the way, it can happen that you get a clunky physical KVM which only supports ISO mounting via Samba.

But you can tell them to make a USB stick with PVE8 and plug it into your server.
They do that for free as well.

You can even add a free 50 GB backup storage, where you can upload the ISO and activate Samba access.
And you can use that with their Samba-only KVM.

But it can also happen that you get the nicer KVM device, where you can simply select the ISO from your computer to mount on the server.

However, if not, tell them to make a stick.

Have fun
 
Like I said, I tried installing Proxmox 8 by using a KVM/IPMI. The installer throws an "Unable to unmount ZFS" error after starting the installation.
Ah, I thought you meant that KVM boot option, which is actually like booting the server with QEMU and passed-through disks.

Yeah, if you have issues with a physical KVM then it's weird.
This looks like there is probably already a zpool on your disks?
Though even then it's extremely weird.

You could boot into some live Linux distribution and wipe the disks? Something like the sketch below should do it.
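
A rough sketch from a rescue/live system (device and partition names are just examples; the p3 partitions assume a previous PVE ZFS layout):

Code:
# clear old ZFS labels from the former pool partitions, if present
zpool labelclear -f /dev/nvme0n1p3
zpool labelclear -f /dev/nvme1n1p3
# then wipe the remaining partition-table and filesystem signatures
wipefs -a /dev/nvme0n1 /dev/nvme1n1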

Did you try a 7.4 installer and then simply update to 8.0?
 
@ronaldst

I've successfully installed Proxmox 8 on an older AX41, so it should work in general. In my experience, you have to use UEFI boot on some newer machines.

Please try the UEFI variant:

Code:
wget http://download.proxmox.com/iso/proxmox-ve_8.0-2.iso
wget -qO- http://www.danpros.com/content/files/uefi.tar.gz | tar -xvz -C /root
qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 -boot once=d -cdrom ./proxmox-ve_8.0-2.iso -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio -vnc 127.0.0.1:1 -bios /root/uefi.bin
 

I can confirm this method works. Thank you!
 
