Test Results for Building a VM on Target Storage: NFS, iSCSI, SMB, Local SSD, with and without SLOG

Zbos1 · Sep 11, 2024
**Installing Ubuntu Server 22.04 VM: Performance Test Results**

---

### **Proxmox Host Specifications**

- **Proxmox Host**: Dell R220
- **Memory**: 32GB DDR3 RAM
- **Proxmox Storage**: 480GB consumer SSD (boot and LVM-thin on the same drive)
- **Network**: 10G NIC with LC fiber connected to a 10G switch

---

### **My Contribution:**

I wanted to share these results to help others, especially newbies, understand the expected outcomes when using used enterprise equipment for a similar setup.

Here’s the hardware I used:

- **Dell R220** with 32GB DDR3, running Proxmox on a 480GB consumer SSD (LVM-thin). The boot and LVM-thin were on the same drive.
- **Network**: 10G NIC with LC fiber connected to a 10G switch.
- **Storage Target**: TrueNAS Scale running bare metal on a Dell R220.
- Additionally, I tested **TrueNAS Scale** on a **Dell R230**, which allowed me to use a 16GB Optane SSD on the PCIe slot along with the 10G NIC.

This demonstrates the benefits of using a SLOG (Separate Log Device), particularly for write-heavy, synchronous operations like NFS. I recommend using an enterprise-grade NVMe or SSD for the SLOG device to avoid potential bottlenecks.
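For context, TrueNAS exposes the SLOG setup entirely through the GUI, but under the hood it is a single ZFS operation. A minimal sketch, with placeholder pool, dataset, and device names (not the ones from this setup):

```shell
# Attach a dedicated log (SLOG) vdev to an existing pool.
# "tank" and /dev/nvme0n1 are placeholders, not from this build.
zpool add tank log /dev/nvme0n1

# Verify the log vdev now appears under the pool:
zpool status tank

# NFS issues synchronous writes; the SLOG only helps while they are honored
# (the default, sync=standard):
zfs get sync tank/vms
```

With `sync=standard`, synchronous writes land on the fast Optane log device first instead of waiting on the spinning drives, which is where the NFS gains below come from.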

I installed a VM on the Proxmox host with a basic install of **Ubuntu 22.04 Server**. The installer ISO was served from the local SSD, while the VM's disk image was stored on the target storage under test.

This is aimed at newbies on a realistic budget who want to get up and running without spending too much money. I may have left out some details, but you get the idea. All TrueNAS Scale settings were left at default; nothing was altered from the stock configuration on each dataset.

**iSCSI Setup**: iSCSI was set up using LVM over iSCSI on Proxmox, allowing users to manage everything through the GUI without needing to touch the CLI.
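If you ever want to reproduce those GUI steps from the CLI, the equivalent `pvesm` commands look roughly like the sketch below. The storage IDs, portal address, IQN, and base-volume name are all placeholders, not values from this setup:

```shell
# 1. Register the TrueNAS iSCSI target as a Proxmox storage
#    (ID, portal IP, and IQN below are placeholders):
pvesm add iscsi truenas-iscsi --portal 192.168.1.50 \
    --target iqn.2005-10.org.freenas.ctl:proxmox

# 2. Layer LVM on top of the exported LUN so VM disks can be
#    carved out as logical volumes (base volume name is a placeholder):
pvesm add lvm truenas-lvm --vgname vg-truenas \
    --base truenas-iscsi:0.0.0.scsi-drive --content images
```

The GUI "Datacenter → Storage → Add" dialogs fill in the same fields, so nothing here requires touching the shell in practice.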

---

### **Test Setup on Dell R220 (Striped Array)**

- **Tested On**: Dell R220
- **Host System**: TrueNAS Scale
- **Memory**: 16GB RAM
- **Storage Devices**: 2 x 3TB Toshiba P300 7200RPM CMR Drives in RAID 0 (Stripe)
- **Network**: 10G NIC

> *I deliberately ran the Dell R220 on TrueNAS in RAID 0 (Stripe) since it only has two drives and I wanted to show decent read/write performance.*

---

### **Performance Results on Dell R220:**

| Storage Type | Time (no SLOG) | Time (with SLOG) | Improvement (%) |
|---|---|---|---|
| SMB | 2 minutes 25 seconds | n/a | n/a |
| iSCSI | 2 minutes 10 seconds | n/a | n/a |
| NFS | 4 minutes 44 seconds | n/a | n/a |
| Local SSD | 1 minute 47 seconds | n/a | n/a |

---

### **Test Setup on Dell R230 (Striped Mirrored Array)**

- **Tested On**: Dell R230
- **Host System**: TrueNAS Scale
- **Memory**: 48GB RAM
- **Storage Devices**: 4 x 3TB Toshiba P300 7200RPM CMR Drives in RAID 10 (Striped Mirror)
- **Network**: 10G NIC
- **Additional Storage**: 16GB Optane SLOG drive for NFS, iSCSI, and SMB

> *The Dell R230 had 4 drives configured in RAID 10 (Striped Mirror) on TrueNAS to show real-world expectations. This also demonstrates how much of a difference a SLOG can make in write-heavy, synchronous operations.*

---

### **Performance Results on Dell R230:**

| Storage Type | Time (no SLOG) | Time (with SLOG) | Improvement (%) |
|---|---|---|---|
| SMB | 2 minutes 10 seconds | 1 minute 55 seconds | 11.5% |
| iSCSI | 2 minutes 41 seconds | 1 minute 43 seconds | 35.7% |
| NFS | 8 minutes 58 seconds | 1 minute 55 seconds | 78.6% |
| Local SSD | 1 minute 26 seconds | n/a | n/a |
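The improvement column is just the relative drop in install time, (old − new) / old. As a quick sanity check with the NFS numbers from the table:

```shell
# Relative improvement = (old - new) / old, in tenths of a percent
# (integer math, so no external tools needed).
no_slog=$((8 * 60 + 58))    # NFS without SLOG: 8 min 58 s = 538 s
with_slog=$((1 * 60 + 55))  # NFS with SLOG:    1 min 55 s = 115 s
improvement=$(( (no_slog - with_slog) * 1000 / no_slog ))
echo "NFS improvement: $((improvement / 10)).$((improvement % 10))%"
# prints: NFS improvement: 78.6%
```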

---

### **Analysis of SLOG Impact on Performance:**

The introduction of the 16GB Optane SLOG drive made a significant difference, especially for NFS. Here’s what I observed:

1. **NFS Performance**: NFS, which heavily relies on synchronous writes, saw a massive improvement in installation time—dropping from 8 minutes 58 seconds to 1 minute 55 seconds (a 78.6% improvement).

2. **iSCSI Performance**: While iSCSI doesn’t rely as much on synchronous writes, it still benefited from the SLOG, with performance improving from 2 minutes 41 seconds to 1 minute 43 seconds (a 35.7% improvement).

3. **SMB Performance**: SMB, which benefits from client-side caching, saw the smallest improvement, dropping from 2 minutes 10 seconds to 1 minute 55 seconds (an 11.5% improvement).

---

### **Conclusion:**

Using an Optane SLOG drive significantly boosts performance, particularly in a RAID 10 setup where write-heavy workloads like NFS are common. iSCSI also saw notable improvements, while SMB saw minor gains.

Understanding your storage needs and choosing the right protocols can help optimize performance, especially in setups with used enterprise equipment.

---

**Final Notes**:
- All tests were run using default settings on TrueNAS Scale.
- No CLI was required for either Proxmox or TrueNAS to achieve this configuration—everything was handled via the GUI.

---

**Related Topics**:
- SLOG devices in ZFS
- TrueNAS Scale configurations
- Proxmox storage setup

---

Hope this helps others looking to optimize their storage setup! Let me know if you need more details!