ZFS: Slow write performance

Nov 10, 2022
Hi everyone,

I'm new to Proxmox, but thanks to various (YouTube) tutorials I managed to install the system successfully on a NUC 12. Right afterwards I set up a VM with Windows Server 2022, which also worked without any problems. What struck me, though, are the "poor" IO values of my SSD compared to a bare-metal installation of Windows Server.

The VM sits on an NVMe SSD that can reach read and write rates of up to 7,300 MB/s - and it gets close to that without Proxmox. Inside the VM, however, CrystalDiskMark reports significantly worse numbers, especially for writes.

Bare-metal benchmark:

bare-metall.jpg

Proxmox VM benchmark:

proxmox.jpg

I'm using ZFS with all the tweaks I could find (zpool set autotrim=on, zfs set xattr=sa, zfs set atime=off, zfs set sync=disabled), VirtIO as the SCSI controller and VirtIO Block as the bus for the disk; cache is off. A second VM created for testing with SCSI as the disk bus brought no improvement.
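
For completeness, here are roughly the commands behind those tweaks, applied against the vms pool described below (sync=disabled only temporarily, for testing):

Bash:
# ZFS tweaks as applied to the VM pool ("vms", see zpool output below)
zpool set autotrim=on vms
zfs set xattr=sa vms
zfs set atime=off vms
# set only temporarily for testing
zfs set sync=disabled vms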

Is this normal and do I just have to live with it? Of course I knew beforehand that with ZFS I'd give up some performance in favor of other features, but this is a bit much for my taste. Are there any other knobs I can turn?

Here is my pool configuration. Only Proxmox is installed on rpool; the VMs run on the vms pool.

Bash:
root@proxmox:~# zpool get all
NAME   PROPERTY                       VALUE                          SOURCE
rpool  size                           888G                           -
rpool  capacity                       0%                             -
rpool  altroot                        -                              default
rpool  health                         ONLINE                         -
rpool  guid                           15498355550547173204           -
rpool  version                        -                              default
rpool  bootfs                         rpool/ROOT/pve-1               local
rpool  delegation                     on                             default
rpool  autoreplace                    off                            default
rpool  cachefile                      -                              default
rpool  failmode                       wait                           default
rpool  listsnapshots                  off                            default
rpool  autoexpand                     off                            default
rpool  dedupratio                     1.00x                          -
rpool  free                           882G                           -
rpool  allocated                      6.35G                          -
rpool  readonly                       off                            -
rpool  ashift                         12                             local
rpool  comment                        -                              default
rpool  expandsize                     -                              -
rpool  freeing                        0                              -
rpool  fragmentation                  0%                             -
rpool  leaked                         0                              -
rpool  multihost                      off                            default
rpool  checkpoint                     -                              -
rpool  load_guid                      12062635733350499484           -
rpool  autotrim                       on                             local
rpool  compatibility                  off                            default
rpool  feature@async_destroy          enabled                        local
rpool  feature@empty_bpobj            active                         local
rpool  feature@lz4_compress           active                         local
rpool  feature@multi_vdev_crash_dump  enabled                        local
rpool  feature@spacemap_histogram     active                         local
rpool  feature@enabled_txg            active                         local
rpool  feature@hole_birth             active                         local
rpool  feature@extensible_dataset     active                         local
rpool  feature@embedded_data          active                         local
rpool  feature@bookmarks              enabled                        local
rpool  feature@filesystem_limits      enabled                        local
rpool  feature@large_blocks           enabled                        local
rpool  feature@large_dnode            enabled                        local
rpool  feature@sha512                 enabled                        local
rpool  feature@skein                  enabled                        local
rpool  feature@edonr                  enabled                        local
rpool  feature@userobj_accounting     active                         local
rpool  feature@encryption             enabled                        local
rpool  feature@project_quota          active                         local
rpool  feature@device_removal         enabled                        local
rpool  feature@obsolete_counts        enabled                        local
rpool  feature@zpool_checkpoint       enabled                        local
rpool  feature@spacemap_v2            active                         local
rpool  feature@allocation_classes     enabled                        local
rpool  feature@resilver_defer         enabled                        local
rpool  feature@bookmark_v2            enabled                        local
rpool  feature@redaction_bookmarks    enabled                        local
rpool  feature@redacted_datasets      enabled                        local
rpool  feature@bookmark_written       enabled                        local
rpool  feature@log_spacemap           active                         local
rpool  feature@livelist               enabled                        local
rpool  feature@device_rebuild         enabled                        local
rpool  feature@zstd_compress          enabled                        local
rpool  feature@draid                  enabled                        local
vms    size                           928G                           -
vms    capacity                       3%                             -
vms    altroot                        -                              default
vms    health                         ONLINE                         -
vms    guid                           3312461236419923309            -
vms    version                        -                              default
vms    bootfs                         -                              default
vms    delegation                     on                             default
vms    autoreplace                    off                            default
vms    cachefile                      -                              default
vms    failmode                       wait                           default
vms    listsnapshots                  off                            default
vms    autoexpand                     off                            default
vms    dedupratio                     1.00x                          -
vms    free                           897G                           -
vms    allocated                      31.0G                          -
vms    readonly                       off                            -
vms    ashift                         12                             local
vms    comment                        -                              default
vms    expandsize                     -                              -
vms    freeing                        0                              -
vms    fragmentation                  0%                             -
vms    leaked                         0                              -
vms    multihost                      off                            default
vms    checkpoint                     -                              -
vms    load_guid                      2685839280608218036            -
vms    autotrim                       on                             local
vms    compatibility                  off                            default
vms    feature@async_destroy          enabled                        local
vms    feature@empty_bpobj            active                         local
vms    feature@lz4_compress           active                         local
vms    feature@multi_vdev_crash_dump  enabled                        local
vms    feature@spacemap_histogram     active                         local
vms    feature@enabled_txg            active                         local
vms    feature@hole_birth             active                         local
vms    feature@extensible_dataset     active                         local
vms    feature@embedded_data          active                         local
vms    feature@bookmarks              enabled                        local
vms    feature@filesystem_limits      enabled                        local
vms    feature@large_blocks           enabled                        local
vms    feature@large_dnode            enabled                        local
vms    feature@sha512                 enabled                        local
vms    feature@skein                  enabled                        local
vms    feature@edonr                  enabled                        local
vms    feature@userobj_accounting     active                         local
vms    feature@encryption             enabled                        local
vms    feature@project_quota          active                         local
vms    feature@device_removal         enabled                        local
vms    feature@obsolete_counts        enabled                        local
vms    feature@zpool_checkpoint       enabled                        local
vms    feature@spacemap_v2            active                         local
vms    feature@allocation_classes     enabled                        local
vms    feature@resilver_defer         enabled                        local
vms    feature@bookmark_v2            enabled                        local
vms    feature@redaction_bookmarks    enabled                        local
vms    feature@redacted_datasets      enabled                        local
vms    feature@bookmark_written       enabled                        local
vms    feature@log_spacemap           active                         local
vms    feature@livelist               enabled                        local
vms    feature@device_rebuild         enabled                        local
vms    feature@zstd_compress          active                         local
vms    feature@draid                  enabled                        local

Thanks in advance for any tips.
 
Which manufacturer and type is your NVMe? I doubt those values (in both screenshots) are real.

Basically CrystalDiskMark is reporting your RAM speed, not the speed of the storage device. Search for "CrystalDiskMark" here in the forum - it has been mentioned several times. Use a tool like fio instead...
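
For example, something along these lines gives a more honest picture of sync write performance (the path assumes the vms pool is mounted at its default /vms; adjust file name and sizes as needed):

Bash:
# 4K sync random writes against a test file on the pool
fio --name=sync-randwrite --filename=/vms/fio-testfile --size=4G \
    --rw=randwrite --bs=4k --ioengine=psync --sync=1 --iodepth=1 \
    --numjobs=1 --runtime=60 --time_based --group_reporting
# clean up the test file afterwards
rm /vms/fio-testfile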

"zfs set sync=disabled" - so you WANT to measure the speed of your Ram? To benchmark the storage device one usually does everything to disable[ any caches...
 
It's a Seagate FireCuda 530. I'm going to benchmark the disk with fio tomorrow, but what remains is the huge discrepancy between bare metal and VM, absolute numbers aside.

sync=disabled was only turned on temporarily and for testing purposes because I read it can have a big impact on (write) performance.
 
Seagate FireCuda 530 SSD -> Consumer SSD

FAQ from the official Proxmox ZFS Benchmark paper, page 8:
Can I use consumer or pro-sumer SSDs, as these are much cheaper than enterprise-class SSDs?
No. Never. These SSDs won't provide the required performance, reliability or endurance. See the fio results from before and/or run your own fio tests.

ZFS needs to do some sync writes, and SSDs without power-loss protection (no consumer SSD will have that) deliver crappy sync write performance. And I wouldn't set "sync=disabled". Sync writes are only used by software developers when they are really needed, because a failure would be catastrophic. By forcing all sync writes to be handled as async writes, you might lose your entire pool in case of a kernel crash, hardware failure or power outage. If you care about your data you should never disable sync writes. And if you don't care about your data, there is no point in choosing ZFS when there are other options that perform better but with worse data integrity.
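
If you already set it, reverting to the default is a one-liner (shown here against the vms pool from the post above):

Bash:
# restore the default sync behavior and verify it
zfs set sync=standard vms
zfs get sync vms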
 
