[SOLVED] Issues with PVE 5.0 installer when there is a Digital Devices Cine S2 v6.5 DVBS card

m00n

New Member
Apr 8, 2015
Hi,

maybe the PVE 5.0 installer has a showstopper bug when installing to a ZFS RAID-1 mirror set with 2 SSDs. The installer stops with an error message while creating the partitions.
The error message is: unable to create swap space.
I have no clue what's wrong. Before I file a bug report, I wanted to ask here first.

With a PVE 4.4 installation I had no problems; the installer created the mirror set without any issues.

Step by step:

HW: Supermicro X10SLH-F, Xeon 1275L v3 (socket 1150), 32GB ECC, 2x Samsung 850 PRO SSD (256GB each).

Proxmox ISO-file: 5.0-af4267bf-4.iso
ISO written via dd to a SanDisk USB stick (16GB). Booting works without problems; after booting from USB, the Proxmox installer showed up.

1) Under options I chose "filesystem zfs RAID 1"
2) The installer showed 3 entries:
  1. Harddisk 0 /dev/sda (238GB Samsung SSD)
  2. Harddisk 1 /dev/sdb (238GB Samsung SSD)
  3. Harddisk 2 /dev/sdc (14GB, Extreme)
3) I changed Harddisk 2 to: -- do not use --

(These settings worked with Proxmox VE 4.4 without any problems.)

4) I did not change anything under Advanced options.
5) Then I followed the installer, entering the password, IP settings, etc.
6) When the installer began to create the partitions, it stopped with "Installation failed"
7) A dialog box opened: "unable to create swap space"
8) Switching to console with CTRL+ALT+F2: see picture

In another attempt, I tried removing all partitions on both SSDs before setup, but got the same result.

I don't want to set up everything manually, so I switched back to PVE 4.4.

Does anyone have an idea? Maybe a bug in the installer?

Thx, kai


Console output after the installer stopped with an error:
Screenshot_20170724_232032.png
 
There was a problem with using disks which had been used before somehow (and even cleaning with dd did not help), but I thought this had been fixed in PVE 5 "final" already...
 
please boot in debug mode, attempt the install and save the complete debug log (available in /tmp/install.log in the debug shell).
 
@Rhinox
I only removed all partitions and created a new GPT partition table via gparted on both drives, but these steps didn't solve the issue. I didn't try overwriting with dd.

@fabian
I followed your instructions and uploaded the install.log saved from the debug-mode setup. Before booting the Proxmox ISO I manually deleted all partitions from both SSDs, so both drives were empty before starting the installer.
Setup options were again: filesystem zfs RAID 1
 

Attachments

  • install.log
    2.4 KB
looks like the device node for the zvol is not created (fast enough?). could you check whether it exists in the debug shell?

Code:
zfs list rpool/swap
ls -lh /dev/zvol/rpool/swap

if it does not, could you execute the following and then check again?

Code:
udevadm trigger --subsystem-match block
udevadm settle --timeout 10
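The two checks plus the udev re-trigger can be wrapped in a small poll loop, so you can see whether the node shows up late rather than never. A sketch (the `wait_for_node` helper name and the retry budget are my own, not part of the installer):

```shell
#!/bin/sh
# Poll for a device node to appear after re-triggering udev.
# wait_for_node <path> <seconds>  -> 0 as soon as <path> exists, 1 on timeout.
wait_for_node() {
    path=$1
    budget=${2:-10}
    i=0
    while [ "$i" -lt "$budget" ]; do
        [ -e "$path" ] && return 0
        sleep 1
        i=$((i + 1))
    done
    [ -e "$path" ]   # one last check after the final sleep
}

# As in the thread: re-trigger block-device events first, then wait.
#   udevadm trigger --subsystem-match block
#   udevadm settle --timeout 10
#   wait_for_node /dev/zvol/rpool/swap 10 || echo "node still missing"
```

If the node appears only after a long wait, that points at a udev race; if it never appears, the zvol links are simply not being created at all.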
 
Screenshot with output attached. There is no /dev/zvol.
 

Attachments

  • Screenshot_20170725_134116.png
    117.2 KB
Try to repeat all those installer commands manually in the debug console (everything that starts with "#" in your install.log), or at least try the "zpool create -f ..." command. Maybe some error shows up. You might also try to increase that udevadm timeout...
 
Does it make sense to repeat all installer commands from the beginning? As far as I can see, the rpool was created correctly and there is a RAID-1 mirror. An 8.5G swap zvol rpool/swap was also created, but there is no device node under /dev/zvol, so mkswap failed when executed because the swap device node was missing. I tried to increase the timeout, but nothing changes: still no device node. I have no clue what's going on.
 
@Rhinox
Ok, I manually erased all partitions and booted into debug mode. I switched over to the console (CTRL+ALT+F1) and executed all installer commands from the install.log manually (the lines beginning with #). I increased the udevadm timeout to 60, with no effect. All commands worked without errors; only the last one, "mkswap -f /dev/zvol/rpool/swap", failed because of the missing device node under /dev/zvol. So we need another approach to solve the problem.
 
...Only the last "mkswap -f /dev/zvol/rpool/swap" command failed, because of the missing device-node /dev/zvol. So we need another approach to solve the problem.
/dev/zvol is a directory; the device node is elsewhere. But may I suggest you try to create it manually, with "mknod"?

FYI in my installation:
/dev/zvol is drwxr-xr-x root:root
/dev/zvol/rpool is drwxr-xr-x root:root

and in /dev/zvol/rpool there is link:
lrwxrwxrwx root:root swap -> ../../zd0

device-node /dev/zd0 is:
brw-rw---- root:disk 230,0

so you'd have to do something like:
mknod /dev/zd0 b 230 0
(and then use chgrp & chmod to fix group and access rights)
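If /dev/zd0 itself were ever missing, the major:minor pair can also be read from sysfs instead of hard-coding 230,0. A sketch (the helper names are mine; it assumes the kernel has already registered the zvol under /sys/class/block):

```shell
#!/bin/sh
# Recreate a missing zvol device node from its sysfs entry.

# parse_devnums "MAJ:MIN" -> "MAJ MIN" (the format of /sys/class/block/*/dev)
parse_devnums() {
    echo "${1%%:*} ${1##*:}"
}

# zvol_mknod <name>, e.g. "zvol_mknod zd0"; needs root.
zvol_mknod() {
    name=$1
    set -- $(parse_devnums "$(cat "/sys/class/block/$name/dev")")
    mknod "/dev/$name" b "$1" "$2"   # block device with the numbers from sysfs
    chgrp disk "/dev/$name"          # match brw-rw---- root:disk
    chmod 660 "/dev/$name"
}
```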
 

Of course, you're right, /dev/zvol and /dev/zvol/rpool are directories.

mknod is not necessary, because the node /dev/zd0 already exists.

This is what I have done now:

1) After the installation aborted, I switched to the console and confirmed that the device node /dev/zd0 had been created.

2) Creating 2 directories manually:
Code:
mkdir -p /dev/zvol/rpool

3) Creating the swap symlink in /dev/zvol/rpool:
Code:
ln -s /dev/zd0 /dev/zvol/rpool/swap

4) Setting up swap area:
Code:
mkswap -f /dev/zvol/rpool/swap

With the steps above, the mkswap command from the install.log worked this time. But now I can't continue, because the installation had already aborted.
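Put together, the steps above amount to the following sketch. The `link_swap` helper with a root argument is my own wrapping (so the directory/symlink part can be tried outside the debug console); on the live system the root prefix is simply empty:

```shell
#!/bin/sh
# Manual workaround for the missing /dev/zvol hierarchy after the
# PVE 5.0 installer aborts with "unable to create swap space".

# link_swap <root>: recreate the directories and the swap symlink
# under <root> (use "" on the live system).
link_swap() {
    root=$1
    mkdir -p "$root/dev/zvol/rpool"                     # directories udev failed to create
    ln -sf "$root/dev/zd0" "$root/dev/zvol/rpool/swap"  # symlink as on a working install
}

# In the debug console (requires /dev/zd0 to exist):
#   link_swap ""
#   mkswap -f /dev/zvol/rpool/swap
```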

The ISO installer has to be fixed. I will go and file a bug report now. Thx.

Curious that no bug report exists already, and that nobody noticed this issue with Proxmox 5 until today.
 
