pvetest updates and RC ISO installer

martin

Proxmox Staff Member
Hi all,

We just updated the pvetest repository and uploaded many of the latest packages required to support ZFS on Linux.

Also note that we have downgraded pve-qemu-kvm from 2.2 to 2.1, because live migration was unstable on some hosts. So please downgrade that package manually (using wget .. and dpkg -i ..) if you already use the 2.2 version from pvetest.
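The manual downgrade is roughly two commands; this is only a sketch, the exact download URL and package filename have to be taken from the pvetest repository:
Code:
# placeholder URL/filename -- use the actual pve-qemu-kvm 2.1 package from pvetest
wget http://download.proxmox.com/.../pve-qemu-kvm_2.1-*_amd64.deb
dpkg -i pve-qemu-kvm_2.1-*_amd64.deb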

The repository includes the latest grub packages (required for ZFS), so the upgrade will ask you to re-install the grub boot sector on your disks. Please report any problems with that.
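If you ever need to redo that step by hand, it is roughly the following (a sketch only; /dev/sdX stands for each disk that should be bootable):
Code:
grub-install /dev/sdX   # repeat for every boot disk
update-grub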

And we have a brand new release candidate of the ISO installer, supporting ZFS RAID with just a few clicks.
http://download.proxmox.com/iso/

Documentation
http://pve.proxmox.com/wiki/ZFS

All feedback is very welcome!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
No, 12 is the bit shift: 2^12 = 4096.
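So ashift=9 corresponds to 512-byte sectors and ashift=12 to 4K sectors. It is set at pool creation time, for example (just a sketch; the pool name and devices are examples):
Code:
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc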
 
ashift=12 should only be used on Advanced Format disks. Most enterprise disks and all SSDs I know of still use 512b, which means ashift=9. The only situation where you should need to set ashift manually is for disks announcing 512e or 512/4096.
 
ashift=12 should only be used on Advanced Format disks. Most enterprise disks and all SSDs I know of still use 512b, which means ashift=9. The only situation where you should need to set ashift manually is for disks announcing 512e or 512/4096.

AFAIK most disks use some kind of emulation for 512b sectors, so using ashift=12 should not really harm?
 
AFAIK most disks use some kind of emulation for 512b sectors, so using ashift=12 should not really harm?
This is only true for new consumer SATA disks. For SAS disks this is mostly wrong, and for SSDs it is clearly wrong. If you apply ashift=12 to a disk that only uses 512b sectors, it means a big performance penalty. ashift refers to the physical sector layout, not the logical sector layout, so you should only set ashift manually if the disk reports different values for its logical and physical sector layout, and in that case ashift must reflect the physical sector layout.

As written before: only set ashift manually if you are absolutely sure that you know better.
 
Also another thing: if you use disks with different sector layouts in a pool, ashift must follow the lowest common denominator, since ashift is applied to the pool and not to the individual disks. Once the pool is created, the ashift value cannot be changed, so choose it wisely.
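You can check what ashift an existing pool was created with via zdb (a sketch; 'tank' is just an example pool name, and on some setups you may have to point zdb at your zpool.cache file):
Code:
zdb -C tank | grep ashift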
 
This article explains it very well, includes benchmarks, and shows how to find the physical sector size of a disk: http://www.ibm.com/developerworks/library/l-linux-on-4kb-sector-disks/index.html

Some findings here:
Old SATA 1TB disk:
Code:
$ sudo parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s                                                           
(parted) print                                                            
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sda: 1953525168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
New SATA 2TB disk:
Code:
sudo parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s                                                           
(parted) print                                                            
Model: ATA ST2000DM001-1CH1 (scsi)
Disk /dev/sdb: 3907029168s
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
New SSD disk:
Code:
sudo parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s                                                           
(parted) print                                                            
Model: ATA INTEL SSDSC2CT12 (scsi)
Disk /dev/sdc: 234441648s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
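As a cross-check, the kernel exposes the same numbers via sysfs and blockdev (a sketch; /dev/sda is just an example device):
Code:
cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size
blockdev --getss --getpbsz /dev/sda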
 
This is only true for new consumer SATA disks. For SAS disks this is mostly wrong, and for SSDs it is clearly wrong.

AFAIK SSDs use even bigger sector sizes internally.

If you apply ashift=12 to a disk that only uses 512b sectors, it means a big performance penalty. ashift refers to the physical sector layout, not the logical sector layout, so you should only set ashift manually if the disk reports different values for its logical and physical sector layout, and in that case ashift must reflect the physical sector layout.

As written before: only set ashift manually if you are absolutely sure that you know better.

OK, so I should change that in the installer. Does anybody sell real 4K disks currently? If so, we need to make it configurable.

I do get the best performance with 4K sectors in my tests, though. What benchmarks do you use to show that 512b sectors are faster?
 
Does anybody sell real 4K disks currently? If so, we need to make it configurable.
Hi Dietmar,
some of my HGST 4TB drives show 4k sectors. The funny thing is that the new models don't show it.

The "old" one
Code:
scsi 0:0:6:0: Direct-Access     ATA      HGST HUS724040AL A580 PQ: 0 ANSI: 6

root@ceph-02:/home/ceph# parted /dev/sdg
GNU Parted 2.3
Using /dev/sdg
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: ATA HGST HUS724040AL (scsi)
Disk /dev/sdg: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name       Flags
 1      1049kB  4001GB  4001GB  ext4         ceph data
The new one:
Code:
scsi 0:0:7:0: Direct-Access     ATA      HGST HUS724040AL AA70 PQ: 0 ANSI: 6

root@ceph-06:/home/ceph# parted /dev/sdh
GNU Parted 2.3
Using /dev/sdh
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: ATA HGST HUS724040AL (scsi)
Disk /dev/sdh: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name          Flags
 2      1049kB  10,5GB  10,5GB               ceph journal
 1      10,5GB  4001GB  3990GB  ext4         ceph data
The reseller uses a slightly different name for the disk.
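By the way, smartctl reports both sector sizes as well (a sketch; /dev/sdg is just the device from above):
Code:
smartctl -i /dev/sdg | grep -i 'sector size'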

Udo
 
"AFAIK SSD use even bigger sector sizes internally."
I guess the internal size is a secret ;-) Nonetheless, the firmware is optimized for 512B blocks, so using any other block size will, at best, not improve anything.

"I do get the best performance with 4K sectors in my tests, though. What benchmarks do you use to show that 512b sectors are faster?"
I haven't done any benchmarks myself, since I trust the gurus when they claim that the ashift of a pool should equal the physical sector size of the disks. I guess the reason for having ashift equal to the physical sector size is some internal optimization inside ZFS.

Some in-depth knowledge here:
http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks
 
I moved one of my data centers to Ceph and GlusterFS as NFS.
Even Ceph is faster between the servers.
Kernel 3.10.0-7 still does not boot on the ASUS KGPE-D16!

[Attached screenshots: proxmove-3.4-rc1-Ceph-cluster.png, Proxmoxve-3.4-rc1-1.png, Proxmoxve-3.4-rc1-ceph-sds.png, Proxmoxve-3.4-rc1-glusterfs.png]
Thank you very much.
 
Hi all,
We just updated the pvetest repository and uploaded a lot of latest packages required to support ZFS on Linux.

Sounds very cool, but how is this going to work in the future? Will it be the next default/optional/mandatory PVE filesystem? Is it meant to replace hardware RAID for PVE nodes, or something else?
Please give me the broad picture so I can understand why it's being tested.

I'm soo n00b in zfs... sorry

Marco
 
Sounds very cool, but how is this going to work in the future? Will it be the next default/optional/mandatory PVE filesystem?

ZFS is optional (it uses many more resources than ext3, ext4 or xfs).

Is it meant to replace hardware RAID for PVE nodes, or something else?

Yes, the plan is to have an optional setup without HW RAID. If you use a fast SSD
as a cache, such setups can be really fast, reliable, and easy to maintain.
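Adding an SSD as a read cache (L2ARC) or as a separate log device to an existing pool looks roughly like this (a sketch; pool and device names are just examples):
Code:
zpool add tank cache /dev/sdd   # L2ARC read cache
zpool add tank log /dev/sde     # separate ZIL/SLOG device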
 
Great news. ZFS is a very nice addition, but I was wondering if it's recommended to put OpenVZ containers on ZFS volumes. Putting every container on its own dataset would ease management, backups, etc. a lot. Would this be a supported situation?
 
Great news. ZFS is a very nice addition, but I was wondering if it's recommended to put OpenVZ containers on ZFS volumes. Putting every container on its own dataset would ease management, backups, etc. a lot. Would this be a supported situation?

I do it manually. To do it automatically you would need to patch vzctl, because it reports an error when creating a container in an existing path on a ZFS volume.
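The manual route is basically to create one dataset per container and point vzctl at it (a rough sketch; the pool name, CTID, template name and paths are just examples):
Code:
zfs create -o mountpoint=/var/lib/vz/private/101 rpool/ct101
vzctl create 101 --ostemplate debian-7.0-standard_7.0-2_amd64 --private /var/lib/vz/private/101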
 
Doing it manually is fine. It was more of a technical question, since the recommended and supported FS is ext3/4, and while many others also use OVZ on XFS, ZoL + OVZ is very rare, as far as I can tell. Do you have positive experience with this combination? Any caveats?
 
