[SOLVED] logical/physical block size on 860 PRO reports 512 (4k expected)

Jul 14, 2020
Hi all,

I'm testing a new server for Proxmox (ZFS) and have 4x Samsung 860 PRO 512GB SSDs in it.
All the documentation I found seems to say those disks have a 4K physical block size.

Yet all the tools (fdisk, hdparm, parted, stat, etc.) report 512 for both the physical and the logical sector size.
Maybe someone has experience with the same disks and can confirm whether they are 4K native or not?
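For reference, this is roughly how I've been checking it (sda here stands for one of the four disks, just as an example):

Code:
# what the kernel reports for one disk
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size

# same info for all disks at once
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC

# smartctl shows it in its info section as well
smartctl -i /dev/sda | grep -i 'sector size'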

And if it is indeed 512, do you have any recommendations for volblocksize/ashift/compression on the pool? And if ZFS ends up using the smallest block the disk reports regardless of what you set at pool creation, is it even possible to force a zvol (or dataset) to use a 4K minimum?
(The initial idea for the server is GitLab runners, both LXCs and VMs, so CPU-heavy and disk-write-heavy.)
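Just to make clear which knobs I mean, roughly (the pool, zvol and device names below are placeholders, not my actual setup):

Code:
# ashift is per-vdev and fixed at pool creation; 12 means 2^12 = 4096-byte allocations
zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY

# volblocksize is per-zvol and also fixed at creation time
zfs create -V 32G -o volblocksize=16K tank/vm-101-disk-0

# for datasets the equivalent knob is recordsize (can be changed later, affects new writes only)
zfs set recordsize=16K tank/data

# compression is set per pool/dataset and inherited
zfs set compression=lz4 tank

If I understand it correctly, the Proxmox zfspool storage also has a blocksize option that controls the volblocksize of newly created VM disks.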

P.S. Right now I just have the defaults:
ashift=12
volblocksize for the VMs at the default 8K (which, from what I've seen, is not the best choice).

Thank you very much!

Code:
root@revolver:~# pveversion -v

proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

root@revolver:~# zpool get ashift
NAME   PROPERTY  VALUE   SOURCE
rpool  ashift    12      local

root@revolver:~# zfs get volblocksize
NAME                      PROPERTY      VALUE     SOURCE
rpool                     volblocksize  -         -
rpool/ROOT                volblocksize  -         -
rpool/ROOT/pve-1          volblocksize  -         -
rpool/data                volblocksize  -         -
rpool/data/vm-100-disk-0  volblocksize  8K        default
 
Hi,

I do use 2 of them, only for SLOG and L2ARC. After 1-2 weeks each of them had lost 2 or 4% of its life, so I think this SSD cannot be used for ZFS.
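You can see the wear in SMART; on Samsung SATA drives it is attribute 177 (Wear_Leveling_Count). sdX below is a placeholder:

Code:
smartctl -A /dev/sdX | grep -i wear_leveling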

Good luck / Bafta !
 
Interesting... they are running fine on another production server and I have not seen the same degradation levels as you have, unless your usage is really heavy.
But for the ones you use for SLOG and L2ARC, are they also showing 512 as the sector size?
 
I haven't tried this yet, but in theory you can watch your SMART LBAs-written counter, write one bit, and check the counter again. The difference should be in the ballpark of your sector size; if not, you have write amplification. Most modern SSDs have an internal block size of at least 8K, often more, and the firmware hides those details from you.
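A rough sketch of that test, assuming a Samsung SATA drive where the counter is attribute 241 (Total_LBAs_Written) and sda is the disk in question; run it on an otherwise idle box, since any background write will skew the delta:

Code:
# read the counter, write a single byte with a forced flush, read it again
before=$(smartctl -A /dev/sda | awk '/Total_LBAs_Written/ {print $NF}')
dd if=/dev/urandom of=/root/testfile bs=1 count=1 conv=fsync
after=$(smartctl -A /dev/sda | awk '/Total_LBAs_Written/ {print $NF}')

# LBAs are normally counted in 512-byte units, so delta * 512 is roughly
# how many bytes the drive actually committed for a 1-byte write
echo "delta: $((after - before)) LBAs"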
 
Got confirmation from Samsung that the 860 PRO uses 512 bytes as its physical sector size,
and they'll send me a list of disks that are actually 4Kn.

I don't know how much it will actually affect my users' experience, but since I have time to optimize the new Proxmox server as much as possible, every little bit helps :)
Thanks everyone for your answers!