Proxmox VE on Debian Jessie with zfs - Hetzner

I was not able to use LARA as my Hetzner server is pretty old, but I got Proxmox with ZFS (RAID1) running with the following workflow:

1) Boot the 64-bit Linux rescue system
2) Install qemu via apt-get
3) Download the Proxmox ISO to /proxmox.iso
4) Run qemu-system-x86_64 -m 1024 -hda /dev/sda -hdb /dev/sdb -cdrom /proxmox.iso -boot d -vnc :0
5) Connect via VNC to port 5900 on your server and run the installation
6) Reboot out of the rescue system
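A note on the VNC step: QEMU's "-vnc :N" option listens on TCP port 5900+N, so ":0" means port 5900. A quick sanity check of the arithmetic:

```shell
# QEMU maps VNC display :N to TCP port 5900+N
display=0
port=$((5900 + display))
echo "connect your VNC viewer to <server-ip>:$port"
```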
 
Which VNC viewer did you use (just out of curiosity)?
 
Hi guys!
That night, following your instructions, I started installing Proxmox with ZFS from the installation ISO. It took me 6 hours; there was a problem with ZFS.
Here is how I solved it, and my instructions.
My server: PX61-NVMe

1. Boot the 64-bit Linux rescue system
2. Check the interface name: udevadm test-builtin net_id /sys/class/net/eth0 | grep '^ID_NET_NAME_PATH'
3. Download the Proxmox ISO: wget -O /proxmox.iso http://download.proxmox.com/iso/ (or another version)
4. Run qemu-system-x86_64 -enable-kvm -m 10240 -hda /dev/nvme0n1 -hdb /dev/nvme1n1 -cdrom /proxmox.iso -boot d -vnc :0 (qemu is now installed by default in the rescue system)
5. Connect via VNC to your host address:5900
6. Install from the ISO with ZFS RAID 0/1/5/6/10 etc.
7. Set up your Hetzner IP!!!!
8. Rename the eth interface
9. Reboot, and voilà

Sorry for my English, I am from Ukraine.
Thanks to all!
 
Here is some more useful information:

When you need to access your disks with ZFS, you cannot use the Linux rescue system.
Simply boot the FreeBSD rescue system and import the Proxmox pool:

Code:
mkdir /mnt/root
zpool import -f -R /mnt/root rpool

Afterwards, do not forget to export the pool, otherwise Proxmox will not be able to import it again:

Code:
zpool export rpool


You can test your installation by booting the 64-bit Linux rescue system and connecting via VNC:

Code:
qemu-system-x86_64 -vnc :0 -boot c -drive file=/dev/sda,cache=none -drive file=/dev/sdb,cache=none -m 4G


There is a FAQ about networking Proxmox at Hetzner, but it only covers either a bridged or a routed network, and IPv6 must be split into several networks.

Here is my much simpler network setup, which allows you to:
- Run a Linux guest with its own IPv4 (routed) and its own IPv6 (bridged) from the same IPv6 network.
- Run a Mikrotik RouterOS guest with DHCP (bridged; you need to request a MAC address via the option "Separate MAC anfordern").

The Proxmox server:
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth0 inet6 manual

auto vmbr0
iface vmbr0 inet static
        address  <Server-IP>
        netmask  255.255.255.255
        gateway  <Server-Gateway>
        pointopoint <Server-Gateway>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        up ip route add <Add-On IP1>/32 dev vmbr0
        up ip route add <Add-On IP2>/32 dev vmbr0
        up ip route add <Add-On IP3>/32 dev vmbr0

iface vmbr0 inet6 static
        address  <IPv6-Network>::2
        netmask  64
        gateway  fe80::1

A Linux client:
Code:
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens18
iface ens18 inet static
    address <Add-On IP>
    netmask 255.255.255.255
    gateway <Server-IP>
    pointopoint <Server-IP>
    dns-nameservers 213.133.98.98 213.133.99.99 213.133.100.100
    dns-search <My-Domain>

iface ens18 inet6 static
    address <IPv6-Network>::4
    netmask 64
    gateway fe80::1
#   dns-nameservers 2a01:4f8:0:a0a1::add:1010 2a01:4f8:0:a102::add:9999 2a01:4f8:0:a111::add:9898
#   dns-search <My-Domain>
 
We are talking about how to install Proxmox on native ZFS (from the ISO). Hetzner has an image setup with Proxmox, but only with mdadm and LVM.
 
Has anyone tried to install 5.0 with this?
I'm running into an issue:

command 'chroot /rpool/ROOT/pve-1 dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxmoxinstall line 385

Edit:
Installing 4.4 on the same machine and upgrading to 5.0 afterwards works with no problems...

I used the current .iso from: http://download.proxmox.com/iso/
proxmox-ve_4.4-eb2d6f1e-2.iso 2016-12-15 10:22 522M
proxmox-ve_5.0-af4267bf-4.iso 2017-07-04 12:13 556M
 
No, I haven't tried it, because I installed 4.4 and just upgraded to 5, but only on one node. Proxmox 5 still has problems.
 
Hi all,

I have managed to install 4.4 (5 didn't go through, with the same results as described above by "Drag_and_Drop"), however there is no network connectivity (server: SB41).
LARA showed that the server boots just fine, but it has no connectivity even from inside the server. Can anyone share an interfaces file with me, please? (I also have an old Hetzner image install of Proxmox; it only has eth0 there, while in the 4.4 setup I see vmbr0 used rather than eth0. Maybe this is the place to play around?)

My current settings after install (IPs covered with xxx just not to expose them here during install :) ):
Code:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
   address 5.x.xxx.17
   netmask 255.255.255.255
   gateway 5.x.xxx.1
   bridge_ports eth0
   bridge_stp off
   bridge_fd 0

The old Proxmox install on the old working server (Hetzner template):
Code:
auto  eth0
iface eth0 inet static
  address   176.xx.xxx.140
  broadcast 176.xx.xxx.159
  netmask   255.255.255.224
  gateway   176.xx.xxx.129
  # default route to access subnet
  up route add -net 176.xx.xxx.128 netmask 255.255.255.224 gw 176.xx.xxx.129 eth0

iface eth0 inet6 static
  address 2a01:xxx:xxx:xxx::2
  netmask 64
  gateway fe80::1
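
A note on the old-style config above (an illustration, not part of the original post): netmask 255.255.255.224 is a /27, i.e. 32 addresses per subnet, which is why the network is .128, the gateway .129 and the broadcast .159:

```shell
# 255.255.255.224 corresponds to a /27 prefix: 2^(32-27) = 32 addresses
prefix=27
size=$((1 << (32 - prefix)))
echo "addresses per /27 subnet: $size"   # e.g. .128 through .159 is one such block
```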

PS: I'm planning to use ONLY OpenVZ.

Thanks
 
OK, I have figured it out: using just the Hetzner setup on eth0 works fine:
Code:
# /etc/network/interfaces
### Hetzner Online GmbH - installimage
# Loopback device:
auto lo
iface lo inet loopback
#
# device: eth0
auto  eth0
iface eth0 inet static
      address   <Main IP>
      netmask   255.255.255.255
      pointopoint   <Gateway>
      gateway   <Gateway>
 
Same error "command 'chroot /rpool/ROOT/pve-1 dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxmoxinstall line 385" with the latest Proxmox 5.0. Any news?
 
command 'chroot /rpool/ROOT/pve-1 dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxmoxinstall line 385
Any updates on this error?
 
Please provide the version of the ISO you used and a debug log (available in /tmp/install.log when booted in debug mode).
 
You need to use the -enable-kvm flag.
 
Just a quick note:
After I installed the 5.0 ISO, I noticed that the naming of network adapters changed to "Predictable Interface Names".
I don't know if this is normal for the latest ISO, but your interface may be named like mine: "enp2s0"

Code:
root@hostname:/etc/apt# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether d4:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

It's just a new naming scheme:

en -- Ethernet
p2 -- bus number (2)
s0 -- slot number (0)


https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
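As a quick illustration (a sketch; the name "enp2s0" is taken from the post above), the bus and slot numbers can be picked out of such a name with plain shell string operations:

```shell
# split a predictable interface name like "enp2s0" into its parts
ifname="enp2s0"
rest="${ifname#enp}"   # strip the "enp" prefix -> "2s0"
slot="${rest##*s}"     # everything after the last "s" -> "0"
bus="${rest%%s*}"      # everything before the first "s" -> "2"
echo "bus=$bus slot=$slot"
```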
 
To switch back to the old ethX naming:
nano /etc/default/grub
Add net.ifnames=0 biosdevname=0 to the GRUB_CMDLINE_LINUX line
Save and exit
grub-mkconfig -o /boot/grub/grub.cfg
Edit /etc/network/interfaces and change ensX to ethX
Reboot.
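For reference, the relevant line in /etc/default/grub would then look like this (a sketch; if the variable already carries other options, append the two parameters rather than replacing them):

```shell
# /etc/default/grub (excerpt) -- disable predictable interface names
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
```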
 
Actually, I like the new format :) It's the first time I've heard of this, but the pro arguments are much bigger than the cons ^^
Admins need to be forced sometimes ;-)
 