Hi All
BACKGROUND
We currently run PVE 3.4 on a Xeon E3 with 32GB of RAM and 4x 3TB WD RE4 SATA drives in MDADM RAID 10 with LVM on top.
We have a small multi-tenanted operation and we all share this server for a variety of software: a little bit of everything, from a Windows file and WSUS server, a JBoss/MySQL web application and a UniFi WiFi controller to a Pervasive-based accounting system. We have 12 VMs in total; the OSes are Windows Server 2008 R2 and 2012 R2, and the rest are Ubuntu 14.04.
WHAT IS WORKING WELL
1. Stability: it has been running nearly without interruption for 2 years and reached 250 days of uptime at one point.
2. Non-disk performance: for applications that don't need aggressive disk access, performance is excellent.
3. Backups, both Proxmox and traditional, are pushed to an NFS server on a FreeNAS box on the same LAN but in a different building (the storage definition is sketched below this list).
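For reference, the NFS backup storage on the current box looks roughly like this in /etc/pve/storage.cfg; the server address and export path below are placeholders, and the vzdump line is just an example of the kind of job we run:

    nfs: freenas-backup
            server 192.168.x.x
            export /mnt/tank/pve-backups
            path /mnt/pve/freenas-backup
            content backup

    # Example backup job: snapshot-mode dump of all VMs to that storage
    vzdump --all --storage freenas-backup --mode snapshot --compress lzo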
WHAT IS NOT WORKING WELL
1. Disk performance. It started out well and the raw disk numbers are still good, but random I/O workloads are now causing performance issues. We have had to resize virtual disks quite a bit over the last 2 years, so I suspect fragmentation is the reason. IO-wait averages around 4%.
2. Load average sits around 4.0, with a peak-time average of about 6 to 8.
3. Most of my RAM is allocated to the VMs to try to limit disk I/O; combined VM allocation is about 29GB, with roughly 25GB currently in use.
FUTURE PLANS
1. I plan to reinstall with Proxmox 4.0 as a clean install.
2. I will back up all PVE 3.4 VMs to the NFS server first.
3. Proxmox will be installed on a dedicated 64GB SSD with the default EXT4 partitioning.
4. We are adding 2 more 3TB WD RE4 drives for a total of 6, both to add storage and to provide a bit more disk performance.
5. I am considering adding 2 SSDs into the mix to act as ZFS L2ARC and SLOG (ZIL); more on this configuration below. The SSDs we plan to use are Samsung SM863 480GB models (MZ-7KM480Z). They have a write endurance of 3,080 TB with built-in ECC and power-loss protection, along with a $300 price tag in my region. Finally an SSD I can expect to last 5 years under this sort of workload.
6. I plan to limit the ZFS ARC to 4GB, as that is as much as I can spare without taking RAM away from the VMs. I know I need more for ideal performance, but the additional RAM will only be available in 12 months' time (how I plan to set the limit is sketched below this list).
7. We do plan to replace the board and processor with something that will take more RAM at the end of 2016.
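For the ARC cap, my understanding is that it is set through the zfs_arc_max module parameter (a value in bytes); a minimal sketch of what I intend to do on the fresh PVE 4.0 install, assuming the usual modprobe.d location:

    # Limit the ZFS ARC to 4 GiB (4 * 1024^3 = 4294967296 bytes)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

    # If the zfs module is loaded from the initramfs, rebuild it so the
    # limit applies at boot; it can also be changed live via
    # /sys/module/zfs/parameters/zfs_arc_max
    update-initramfs -u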
PLANNED DISK CONFIGURATION
1. 1x 64GB "consumer" SSD for Proxmox OS and ISO images only
2. 6x 3TB WD RE4 7200RPM disks in ZFS "RAID10" (three mirrored pairs, striped).
- Connected to a HighPoint PCIe 8x HBA
3. 2x Samsung SM863 480GB SSDs for L2ARC and SLOG (ZIL)
- Connected to onboard SATA via AHCI
- Each SSD partitioned into 2 partitions: 10GB and 470GB
- With a 5-second transaction flush interval and the SSD capable of a maximum of about 500MB/s of writes, the SLOG only ever needs to hold roughly 2.5GB, so a 10GB partition is about 4 times larger than needed, but I would rather play it safe.
- The 2x 10GB partitions will be mirrored and used as the SLOG
- The 2x 470GB partitions will be added as cache devices to provide 940GB of L2ARC. Mirroring the L2ARC seems unnecessary, since its contents are checksummed and can always be re-read from the pool, so I would rather have the larger L2ARC. (A sketch of the intended pool layout follows this list.)
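To make the intended layout concrete, this is roughly the pool creation I have in mind; the device names (sdb through sdg for the RE4s, sdh/sdi for the SM863s) and the pool name "tank" are placeholders, and in practice I would use the /dev/disk/by-id paths:

    # Three mirrored pairs striped together = ZFS "RAID10" across the six RE4s
    zpool create -o ashift=12 tank \
        mirror /dev/sdb /dev/sdc \
        mirror /dev/sdd /dev/sde \
        mirror /dev/sdf /dev/sdg

    # Mirrored SLOG on the two 10GB SSD partitions
    zpool add tank log mirror /dev/sdh1 /dev/sdi1

    # Both 470GB partitions as (unmirrored) L2ARC cache devices
    zpool add tank cache /dev/sdh2 /dev/sdi2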
PLAN B ON ZFS
I have a whole week to test the configuration while our offices are closed in December, so if I am not happy with the ZFS setup, I plan to use the 2 SSDs as mirrored storage for some of the VMs' OS and database virtual disks. In that case, I would probably stick with MDADM and LVM, roughly as sketched below.
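In very rough terms, Plan B would look something like this; the device names and the volume group / storage names are only placeholders, and the pvesm line is from memory, so treat it as a sketch rather than exact commands:

    # Mirror the two SM863s with mdadm
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh /dev/sdi

    # Put LVM on top and hand the volume group to Proxmox as LVM storage
    pvcreate /dev/md1
    vgcreate ssdvg /dev/md1
    pvesm add lvm ssd-fast --vgname ssdvg --content images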
QUESTIONS
1. Is there anything I need to do to the VMs before I take the PVE 3.4 server down? Update the VirtIO drivers on the Windows VMs? Is there a good guide somewhere for how to do it?
2. Is there anything I will be losing by going from LVM to ZFS, from a Proxmox point of view?
3. Does Proxmox 4 use zvols by default for VM disks on ZFS, or does it store disk image files on the filesystem?
4. Should I consider formatting ZFS block devices with EXT4 or XFS so I can use qcow2 disk images? Some data seems to indicate that the performance is still good and the flexibility of qcow2 is worth the extra complexity. (The two storage options I am weighing up are sketched below the question list.)
5. ZFS literature seems to recommend always enabling LZ4 compression by default. Is there any benefit on a system where disk space is pre-allocated?
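For questions 4 and 5, this is roughly how I picture the two storage options in /etc/pve/storage.cfg plus the compression setting. The pool and dataset names ("tank", "tank/vmdata") are placeholders and the storage.cfg syntax is from memory, so corrections are welcome:

    # Option A: native ZFS storage, where Proxmox creates a zvol per VM disk
    zfspool: tank-zfs
            pool tank/vmdata
            content images,rootdir

    # Option B: a plain dataset mounted as a directory, with qcow2 files on top
    dir: tank-qcow2
            path /tank/qcow2
            content images

and for LZ4 on the whole pool (child datasets inherit the setting):

    zfs set compression=lz4 tank
    zfs get compressratio tank   # to check whether it is actually buying us anything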
REFERENCE MATERIAL
https://blogs.oracle.com/brendan/entry/test
https://clinta.github.io/FreeNAS-Multipurpose-SSD/
https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/
SHOOT AWAY! Any input, experience, or even questions would be appreciated.