Need advice & your kind help - ZFS RAID1, 2x NVMe drives - disk performance confusion?

sandes

New Member
Nov 1, 2020
Hello friends,

Please see the two attachments. [Left PuTTY = 1st VM (ny), right PuTTY = 2nd VM (555)]

My Leased SERVER DETAILS:

Intel CPU, 6 cores / 12 threads
32 GB RAM
2x 512 GB NVMe drives
Proxmox node details:

- During setup I selected ZFS RAID1 across the two NVMe drives (for mirroring)
- ARC limited to 6 GB
- No swap created during installation
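(For reference, I assume the limit was applied the usual way, via the zfs_arc_max module option; it can be verified on the node like this, with 6 GiB = 6442450944 bytes:)

    cat /sys/module/zfs/parameters/zfs_arc_max
    # prints 6442450944 while the 6 GiB limit is in effect (0 would mean the built-in default)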

Now, the main issues and confusions I'm having:

I created 2 virtual machines.

1) VM named "ny" - 1 socket, 3 cores, 7 GB RAM. Installed CentOS 7 with default partitioning (which is LVM).
Disk on the ZFS storage - raw format - cache left at the default (no cache).

2) VM named "555" - 1 socket, 3 cores, 7 GB RAM. Installed CentOS 7 with custom partitioning (XFS, standard partitions).
Disk in qcow2 format - cache left at the default (no cache). (I created a directory storage /tanky under Datacenter and chose it instead of the ZFS storage to allocate this VM's disk; the two storage definitions are sketched below.)
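(Roughly what the two storage definitions look like in /etc/pve/storage.cfg on my node - the names local-zfs and tanky are from my setup, and the exact options may differ:)

    zfspool: local-zfs
            pool rpool/data
            content images,rootdir
            sparse 1

    dir: tanky
            path /tanky
            content images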

Confusion #1: I ran several disk benchmarks on both VMs and got odd results: in some benchmarks disk performance is better on my 2nd VM (555). Why? Is it because I chose qcow2 on the newly created directory storage instead of raw on ZFS?
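(If a like-for-like number would help, I assume an identical fio run inside each guest gives comparable results; the parameters below are just an example, not what the screenshots were produced with:)

    fio --name=randrw-test --filename=/root/fio.test --size=1G \
        --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
        --bs=4k --iodepth=16 --numjobs=1 --runtime=60 --time_based --group_reporting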

Confusion #2: I chose ZFS RAID1 for mirroring and redundancy.

But for my 2nd VM I didn't go with the default ZFS storage (raw format) and instead put the disk on the new directory storage (path = /tanky). Will VMs or data on that non-zpool directory still be mirrored, or is only data on the default ZFS storage (raw format) mirrored?
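(My assumption is that the answer depends on where /tanky actually lives. On the node I would check it roughly like this; rpool is the pool name the Proxmox installer creates by default:)

    df -T /tanky                              # filesystem type and backing device of /tanky
    zfs list -o name,mountpoint | grep tanky  # only shows something if /tanky is a ZFS dataset
    zpool status rpool                        # shows the mirror and the two NVMe devices in it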


Please help and suggest which setup is the better choice, and kindly check both attachments. [Left PuTTY = 1st VM (ny), right PuTTY = 2nd VM (555)]

My real concerns are disk performance for my Nginx and Apache (CentOS Web Panel) application, and data redundancy.


Thank you very much for your attention,

Peace
Sandes
 

Attachments

  • Zfs+default-centos-VS-directory-install+XFS-custom-partitioning-DISK-PERFORMANCE-RESULTS.PNG (170.3 KB)
  • Zfs+default-centos-VS-directory-install+XFS-custom-partitioning-DISK-PERFORMANCE-RESULTS-PART-2.PNG (89.2 KB)
BTW, the two NVMe drives are Samsung 970 Pro. The provider of the monthly dedicated server stated they are enterprise NVMe drives, but on the Samsung website I didn't find the Samsung 970 Pro in the enterprise drives list.
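(I double-checked the model from the node itself; this assumes smartmontools is installed:)

    smartctl -i /dev/nvme0n1    # prints the model number, e.g. "Samsung SSD 970 PRO 512GB"
    smartctl -i /dev/nvme1n1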

Please assist me, brother.

Also, what ashift value would be good? Currently it's ashift=12, but when I Google the Samsung 970 Pro it is listed with 512-byte sectors, and hdparm also reports 512 bytes as the physical sector size...
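(This is how I looked it up, in case I'm checking the wrong thing; assumes nvme-cli is installed:)

    zpool get ashift rpool                            # currently shows 12 here
    nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"    # lists the LBA sizes the drive supports
    cat /sys/block/nvme0n1/queue/physical_block_size  # what the kernel reports, 512 in my case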
 
