Proxmox installation stuck on 3% "Creating LV's" PLEASE HELP GUYS

I also ran into this issue on my Beelink SER5 Max with the included Phison NVMe. I started the installer with debug logging enabled, switched to a terminal, and noticed it was running this command while stuck at 3% creating the LVM volumes.

Code:
/sbin/lvconvert --yes --type thin-pool --poolmetadatasize 8732672K pve/data

I attached strace to its PID and saw this:

Code:
ioctl(3, BLKZEROOUT, [7157579776, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7158628352, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7159676928, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7160725504, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7161774080, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7162822656, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7163871232, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7164919808, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7165968384, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7167016960, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7168065536, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7169114112, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7170162688, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7171211264, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7172259840, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7173308416, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7174356992, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7175405568, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7176454144, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7177502720, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7178551296, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7179599872, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7180648448, 1048576]) = 0
ioctl(3, BLKZEROOUT, [7181697024, 1048576] <detached ...>

So it looks like it is zeroing out the NVMe drive. The process did eventually finish, and it was a relief to see in the strace that the installer was actually doing something. I hope this helps.
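
If you want to watch this on your own machine, something along these lines should work from the installer's debug shell (a rough sketch, not the exact commands I used). For context, BLKZEROOUT is the block-layer ioctl that zeroes a byte range given as [offset, length] in bytes, so each of the calls above zeroes 1 MiB.

Code:
# find the PID of the running lvconvert
pgrep -a lvconvert
# attach to it and watch the ioctl stream (replace <PID> with the number from above)
strace -p <PID> -e trace=ioctl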
 
I just found this thread by Googling "proxmox create lvs stuck". I see it was started a couple of years ago, but there have since been a few posts, all regarding the N100.

I've installed Proxmox dozens of times and never seen this, so I wonder why it's an issue only on some of these N100 mini PCs.

Thanks for digging in @chill9!
 
Same here, installing PVE 8.2 on a 2TB NVMe SSD (LVM/ext4). The HDD LED flashes constantly, so the behavior matches what chill9 described above - the installer seems to be zeroing out the disk. This also means that the size of your disk and its write performance will influence how long this step takes.
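
To put rough numbers on that, here is a back-of-envelope sketch - both the amount of data zeroed and the sustained write speed are assumptions, not measurements:

Code:
# assume ~2 TB gets zeroed at a sustained ~200 MB/s
echo $(( 2 * 1000 * 1000 / 200 / 60 ))   # => 166 minutes, roughly 2.8 hours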

My previous installation was on a 500GB SATA SSD, where this step was instantaneous - for some reason there was no zeroing out.

Anyway, the installer should at least inform the user about this behavior.
 
Just wait. Installing on a Dell PowerEdge R540 here: 40 minutes on "creating LV's...", then it successfully went on with the installation.
 
Just for future reference, I had this on my machine. It has a 2TB SSD, and it was stuck for maybe 15 to 20 minutes. It was a repurposed drive.
 
I'm reusing an M.2 NVMe and ran `wipefs -a /dev/...` beforehand. It was also stuck running `lvconvert` for about 5 minutes.
 
For people affected by this with NVMe drives - could you please share the make and model?
(`lspci -nnk` should provide enough information.)
Additionally, the output of `cat /sys/block/nvmeXn1/queue/write_zeroes_max_bytes` would help (replace nvmeXn1 with your NVMe drive).
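
If you have more than one NVMe device, a small loop like this prints the limit for each namespace (a quick sketch, untested):

Code:
for f in /sys/block/nvme*n*/queue/write_zeroes_max_bytes; do
    echo "$f: $(cat "$f")"
done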

Thanks!
 
NVMe:
Phison Electronics Corporation PS5015-E15 PCIe3 (DRAM-less) [1987:5015]
512GB (477GB)

/sys/block/nvmeXn1/queue/write_zeroes_max_bytes:
131072
 
In my case, it was stuck at 3% for 10 minutes and then the installation resumed.

Code:
02:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P2 NVMe PCIe SSD [c0a9:540a] (rev 01)
	Subsystem: Micron/Crucial Technology P2 [Nick P2] / P3 / P3 Plus NVMe PCIe SSD (DRAM-less) [c0a9:540a]
	Kernel driver in use: nvme
	Kernel modules: nvme

Code:
cat /sys/block/nvme0n1/queue/write_zeroes_max_bytes
131072
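
Worth noting (my own reading of these numbers, not anything from the installer code): both DRAM-less drives reported so far cap write_zeroes_max_bytes at 131072 bytes (128 KiB), so the kernel has to split each 1 MiB BLKZEROOUT region from the strace above into several Write Zeroes commands:

Code:
echo $(( 1048576 / 131072 ))   # => 8 Write Zeroes commands per 1 MiB BLKZEROOUT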
 
Same here. I have an R730 with ~7TB of RAID 10 2.5" SAS SSDs. The activity lights were flashing on them for the roughly 5 minutes it hung.