Good day all,
Hardware Specs:
Dell PowerEdge R630,
2x Intel Xeon E5-2667 v3 CPUs (8 cores each, 3.2 GHz),
256 GB RAM,
2x Intel S3610 SSDs for the Proxmox OS (RAID 1 via PERC H330 SAS RAID controller),
4x Intel P4510 1 TB U.2 NVMe SSDs (VM storage),
front 4 bays configured for PCIe NVMe U.2 SSDs,
Proxmox Setup:
The OS runs on the 2 Intel S3610 SSDs, mirrored by the PERC H330 RAID controller.
Research and Expectations:
Since we have 4x NVMe drives, we are looking into creating the fastest possible file system to run our VMs, which include databases (MongoDB, MySQL, a graph DB, etc.) as well as web servers, application servers, Redis, a queue service, etc.
We are willing to sacrifice 1 NVMe drive for parity, data redundancy, or fault tolerance. This will be used in a production environment as one of the servers serving ~3M users/month. Assume the network is not a bottleneck; it is out of scope for this thread.
Documentation Already Researched:
https://forum.proxmox.com/threads/what-is-the-best-file-system-for-proxmox.30228/
https://forum.proxmox.com/threads/which-filesystem-to-use-with-proxmox-sofraid-server.41988/
https://pve.proxmox.com/wiki/ZFS_on_Linux
https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks
and a few others.
A Few Findings:
Here is what I did to benchmark: I created 3 CentOS KVM guests with the same RAM and CPU.
A: 1 CentOS KVM on the mirrored LVM-thin pool where the Proxmox OS is installed, on the Intel S3610 SSDs,
B: 1 CentOS KVM on a ZFS RAIDZ1 pool created on 3 Intel P4510 NVMe SSDs,
created with: zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3> (see the expanded sketch after this list),
C: 1 CentOS KVM on XFS on a single Intel P4510 NVMe SSD mounted at /mnt/nvmedrive.
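For reference, here is a slightly fuller sketch of the RAIDZ1 pool creation. The pool name (nvmepool) and device paths are placeholders, and the compression/atime settings are commonly suggested ZFS tuning for VM storage, not settings we verified in this benchmark:

# RAIDZ1 across 3 NVMe devices; ashift=12 assumes 4K physical sectors
zpool create -f -o ashift=12 nvmepool raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
# Commonly suggested tuning for VM storage (assumption, not benchmarked here)
zfs set compression=lz4 nvmepool
zfs set atime=off nvmepool
# Verify the pool layout and health
zpool status nvmepool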
Benchmarking Results:
On A (KVM on the mirrored LVM-thin pool where the Proxmox OS is installed, Intel S3610 SSDs):
On B (KVM on ZFS RAIDZ1 across 3 Intel P4510 NVMe SSDs):
On C (KVM on XFS on a single Intel P4510 NVMe SSD mounted at /mnt/nvmedrive):
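For context, a representative fio run of the kind used for these comparisons might look like the following; the parameters shown (file size, block size, queue depth, job count) are illustrative assumptions, not necessarily the exact invocation used above:

fio --name=randrw --filename=/mnt/nvmedrive/testfile --size=4G \
  --rw=randrw --rwmixread=75 --bs=4k --ioengine=libaio --direct=1 \
  --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting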
Confusion:
According to the fio tests:
C is the fastest, by a big margin (a single NVMe drive with an XFS file system mounted at /mnt/nvmedrive),
B (RAIDZ1 on 3 NVMe drives) is much slower than C, also by a big margin,
A is on LVM-thin in the same location where the OS is installed, on the PERC-mirrored, lower-performance SSDs.
I would expect RAIDZ1 to perform better in this case. What do you think about our findings? Are they reasonable?