Partition error on new install

CharlesErickT

Hello,

I've installed Proxmox 5.1 from the ISO on 2 SSDs in ZFS RAID 1. Here's my pveversion:

Code:
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
pve-kernel-4.13.4-1-pve: 4.13.4-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90

I've installed 5.1 successfully on other hosts, but the host in question is giving me trouble. The difference is that this host uses all SSDs (Samsung SM1625). After a fresh install, here's the output of fdisk (both sda and sdb show the same errors):

Code:
Disk /dev/sda: 372.6 GiB, 400088457216 bytes, 781422768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: gpt
Disk identifier: 9F05F316-688B-49DB-A835-B1539FEFDEEF

Device         Start       End   Sectors   Size Type
/dev/sda1         34      2047      2014  1007K BIOS boot
/dev/sda2       2048 781406349 781404302 372.6G Solaris /usr & Apple ZFS
/dev/sda9  781406350 781422734     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.

Is there a way to correct the Partition 1 and 9 errors?

Thanks
 
1. It is not listed as an error, just pure information.
2. I do not think it is necessary to play with it. The only partition that matters is /dev/sda2 (used for ZFS), and it is aligned correctly (you can verify both points yourself; see the commands right below).
3. I doubt you can fix it now. You could try to re-install and set up the partitions/ashift manually, but it is not worth the effort (see point 2).
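
For reference, a quick way to check both points from the shell (assuming the installer's default pool name rpool; adjust if yours differs):

Code:
# fdisk flags a partition whose start is not a multiple of the physical
# sector size: 8192 B / 512 B = 16 sectors here
echo $(( 34 % 16 ))    # sda1 -> 2, not aligned (hence the message)
echo $(( 2048 % 16 ))  # sda2 -> 0, the ZFS partition is aligned
# the ashift the pool was actually created with (12 = 4K, 13 = 8K sectors)
zpool get ashift rpool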
 
Hi,

@Rinox told you the truth. Beyond what he said, in many cases SSDs do not tell you the truth: an SSD may report one value for the sector size (minimum/optimal) while the firmware actually uses different values internally.
I can also say that your output is the first time I have seen a sector size of 8k. My guess is that your SSDs are designed for desktop use, where a big sector size (>4k) is useful.
 
1. It is not listed as an error, just pure information.
2. I do not think it is necessary to play with it. The only partition that matters is /dev/sda2 (used for ZFS), and it is aligned correctly.
3. I doubt you can fix it now. You could try to re-install and set up the partitions/ashift manually, but it is not worth the effort (see point 2).


That would make sense. Would just changing the ashift make a difference during the install?

Hi,

@Rinox told you the truth. Beyond what he said, in many cases SSDs do not tell you the truth: an SSD may report one value for the sector size (minimum/optimal) while the firmware actually uses different values internally.
I can also say that your output is the first time I have seen a sector size of 8k. My guess is that your SSDs are designed for desktop use, where a big sector size (>4k) is useful.

Thanks for your input. The drives are datacenter SSDs, not desktop ones. How would I go about setting the sector size? During setup?
 
Thanks for your input. The drives are datacenter SSDs, not desktop ones. How would I go about setting the sector size? During setup?


Very interesting :) During setup you cannot. But as @Rinox said, it is not important. What matters is only that the ZFS partition is correctly aligned (and it is, as I can see from your output).
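
If you really wanted to pick the sector size, the only knob is ashift at pool-creation time, which means setting the pool up by hand instead of from the installer. A rough sketch with placeholder device names (not a recommendation, per the points above):

Code:
# sketch only: create a mirrored pool with 8K sectors (ashift=13 -> 2^13 bytes)
# /dev/disk/by-id/SSD-A and /dev/disk/by-id/SSD-B are placeholders
zpool create -f -o ashift=13 rpool mirror /dev/disk/by-id/SSD-A /dev/disk/by-id/SSD-B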
 
Hello,

I've stumbled upon the same issue when installing Proxmox 6.1 from the ISO on 2 SSDs in ZFS RAID 1.
My pveversion is: pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-2-pve)


It seems I have a similar situation to the one described above.

I have two SSD disks in a ZFS mirror and one HW RAID10 volume used for LVM, presented as /dev/sda:

Code:
sda                               8:0    0   3.7T  0 disk
└─sda1                            8:1    0   3.7T  0 part
  ├─vmdata-main_tmeta           253:0    0   3.5G  0 lvm
  │ └─vmdata-main-tpool         253:2    0   3.5T  0 lvm
  │   ├─vmdata-main             253:3    0   3.5T  0 lvm
  │   ├─vmdata-vm--101--disk--0 253:4    0   500G  0 lvm
  │   ├─vmdata-vm--102--disk--0 253:5    0   300G  0 lvm
  │   └─vmdata-vm--103--disk--0 253:6    0   150G  0 lvm
  └─vmdata-main_tdata           253:1    0   3.5T  0 lvm
    └─vmdata-main-tpool         253:2    0   3.5T  0 lvm
      ├─vmdata-main             253:3    0   3.5T  0 lvm
      ├─vmdata-vm--101--disk--0 253:4    0   500G  0 lvm
      ├─vmdata-vm--102--disk--0 253:5    0   300G  0 lvm
      └─vmdata-vm--103--disk--0 253:6    0   150G  0 lvm
sdb                               8:16   0   118G  0 disk
├─sdb1                            8:17   0  1007K  0 part
├─sdb2                            8:18   0   512M  0 part
└─sdb3                            8:19   0 117.5G  0 part
sdc                               8:32   0   118G  0 disk
├─sdc1                            8:33   0  1007K  0 part
├─sdc2                            8:34   0   512M  0 part

I've created VG vmdata and LV main with most of the available space. The LV has been converted into a thin pool, on which three VMs have been created.
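
A rough sketch of the equivalent commands (not the exact ones I ran; the VM volumes themselves were created by Proxmox on top of the pool):

Code:
pvcreate /dev/sda1
vgcreate vmdata /dev/sda1
lvcreate -l 95%FREE -n main vmdata     # leave some room for pool metadata
lvconvert --type thin-pool vmdata/main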

Code:
root@at-automation1:~# lvs
  LV            VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  main          vmdata twi-aotz--  <3.46t             0.51   1.11
  vm-101-disk-0 vmdata Vwi-aotz-- 500.00g main        1.56
  vm-102-disk-0 vmdata Vwi-aotz-- 300.00g main        1.28
  vm-103-disk-0 vmdata Vwi-aotz-- 150.00g main        4.32

After executing the fdisk -l command, the following warning appeared for each and every VM:

Code:
Disk /dev/mapper/vmdata-vm--101--disk--0: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: dos
Disk identifier: 0xb701ff79

Device                                    Boot      Start        End    Sectors  Size Id Type
/dev/mapper/vmdata-vm--101--disk--0-part1 *          2048   981467135  981465088  468G 83 Linux
/dev/mapper/vmdata-vm--101--disk--0-part2        981469182  1048573951   67104770   32G  5 Extended
/dev/mapper/vmdata-vm--101--disk--0-part5        981469184  1048573951   67104768   32G 82 Linux swap / Solaris
Partition 2 does not start on physical sector boundary.
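
As far as I can tell, fdisk checks the partition start against max(physical sector size, minimum I/O size), which here is 262144 B = 512 sectors, not against the 512 B physical sector alone:

Code:
echo $(( 981469182 % 512 ))  # part2 start -> 510, not on a 256 KiB boundary
echo $(( 2048 % 512 ))       # part1 start -> 0, aligned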

Part2 and part5 are exactly the size I've dedicated to swap (matching the RAM) on this VM?

Is there a way to correct this issue?

Can I expect a performance impact due to the described issue?
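
In case it helps anyone checking their own layout, parted can test a partition's alignment directly (the device path is the one from my output above; 'opt' tests against the device's reported optimal I/O size):

Code:
parted /dev/mapper/vmdata-vm--101--disk--0 align-check opt 2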
 
