Nexenta/Illumos-based VM slow

Admiral

Has anyone managed to get this working? With or without VirtIO disks/NICs it just takes ages to get into the web interface.
CPU: AMD FX-8320, 16 GB of RAM
Code:
pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
 
I want to build a Nexenta, or at least an Illumos-based, ZFS storage VM which I intend to pass back to Proxmox over iSCSI.
The VM is really slow, which is especially noticeable when connecting to the web interface of that VM.
It has been configured with 3 vCPUs and 6 GB of RAM for now; my ZFS array will be attached via PCI passthrough as soon as these problems are solved.
I have tried switching ACPI on and off, added/disabled the HPET switch, and installed on both IDE- and VirtIO-based setups, so I am clueless now.
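For what it's worth, the toggles I have been flipping roughly correspond to these qm commands (vmid is a placeholder, and one way to switch HPET is through the extra args line):
Code:
qm set <vmid> --acpi 0              # ACPI off (1 to turn it back on)
qm set <vmid> --ostype solaris      # Solaris OS type
qm set <vmid> --args '-no-hpet'     # pass -no-hpet to KVM to disable the HPET timer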
 
As I understand it, you want to virtualize a storage server in Proxmox and use this virtualized storage to provide storage to VMs in the same Proxmox cluster? This is bound to fail and should be avoided - it is like feeding the dog its own tail!
 
This used to work in an ESXi setup before, with quite fast performance.
But that remark doesn't solve my problem :)

Why is Solaris this slow on my setup? :) Even without the intention to make it a storage VM, the performance is really bad.
 
I can't speak for Nexenta, but the latest OmniOS runs very well here - only for testing and development though. What are the contents of your VM's config file (/etc/pve/qemu-server/vmid.conf)?
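On the Proxmox host you can dump it with either of the following (vmid is just a placeholder):
Code:
qm config <vmid>
cat /etc/pve/qemu-server/<vmid>.conf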
 
At the moment I am at work so I don't have access.

I can give you the details that I remember:

3 sockets with 1 vCPU each (could not get it to work with 1 socket and 3 vCPUs)
VirtIO raw disk, 20 GB, writeback
VirtIO NIC
ACPI off, KVM 1, Solaris OS type, 6 GB of RAM
cputype = host
scsihw = virtio-scsi-pci
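From memory, the resulting /etc/pve/qemu-server/<vmid>.conf should look roughly like this (storage name, disk name, bridge and MAC are placeholders, not the exact values):
Code:
acpi: 0
cores: 1
cpu: host
kvm: 1
memory: 6144
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
ostype: solaris
scsihw: virtio-scsi-pci
sockets: 3
virtio0: local:vm-100-disk-1,cache=writeback,size=20G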
 
If you use virtio-scsi-pci, why not use a SCSI disk instead of VirtIO? (A SCSI disk together with virtio-scsi-pci provides TRIM support.)
Why ACPI off? (You will not be able to shut down the VM from the GUI.)
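In the config that change would look something like this (storage/disk names are placeholders; discard=on is what lets TRIM reach the backing storage):
Code:
scsihw: virtio-scsi-pci
scsi0: local:vm-100-disk-1,cache=writeback,discard=on,size=20G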

What OS do you run?

Mine, as mentioned before, is OmniOS r151014:
Code:
balloon: 2048
boot: cdn
bootdisk: virtio0
cores: 1
cpu: Opteron_G5
ide2: none,media=cdrom
memory: 4096
name: omnios-151014
net0: virtio=98:DB:33:D9:EE:78,bridge=vmbr300
numa: 0
ostype: solaris
sockets: 2
tablet: 0
vga: qxl
virtio0: omnios_ib:vm-105-disk-1,size=12G
virtio1: omnios_ib_nfs:105/vm-105-disk-1.qcow2,format=qcow2,size=32G
virtio2: omnios_ib_nfs:105/vm-105-disk-4.qcow2,format=qcow2,size=32G
Running fio with the IOMeter access pattern (ZFS mirror):
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=0
fallocate=none
size=4g
ioengine=solarisaio
#ioengine=posixaio
# IOMeter defines the server loads as the following:
# iodepth=1 Linear
# iodepth=4 Very Light
# iodepth=8 Light
# iodepth=64 Moderate
# iodepth=256 Heavy
iodepth=64
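Assuming the job file above is saved as iometer.fio inside the guest, it is run simply as:
Code:
fio iometer.fio
The result: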
Code:
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=solarisaio, iodepth=64
fio-2.2.4
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [12991KB/3081KB/0KB /s] [3050/725/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=681: Mon Oct 19 01:43:57 2015
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3270.8MB, bw=9636.2KB/s, iops=1576, runt=347563msec
    slat (usec): min=4, max=667866, avg=94.73, stdev=1462.62
    clat (usec): min=4, max=1026.1K, avg=5787.98, stdev=23675.15
     lat (usec): min=29, max=1026.2K, avg=5882.71, stdev=23745.05
    clat percentiles (usec):
     |  1.00th=[    5],  5.00th=[    5], 10.00th=[    5], 20.00th=[    7],
     | 30.00th=[   33], 40.00th=[   38], 50.00th=[   49], 60.00th=[  113],
     | 70.00th=[  596], 80.00th=[ 1896], 90.00th=[ 9920], 95.00th=[30336],
     | 99.00th=[117248], 99.50th=[152576], 99.90th=[264192], 99.95th=[325632],
     | 99.99th=[692224]
    bw (KB  /s): min= 1694, max=29388, per=100.00%, avg=9673.20, stdev=3394.23
  write: io=845123KB, bw=2431.6KB/s, iops=394, runt=347563msec
    slat (usec): min=6, max=162637, avg=99.81, stdev=1141.78
    clat (usec): min=5, max=1371.6K, avg=138465.63, stdev=79850.93
     lat (usec): min=75, max=1371.7K, avg=138565.44, stdev=79828.69
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[   13], 10.00th=[   38], 20.00th=[   91],
     | 30.00th=[  106], 40.00th=[  121], 50.00th=[  133], 60.00th=[  147],
     | 70.00th=[  161], 80.00th=[  180], 90.00th=[  221], 95.00th=[  273],
     | 99.00th=[  408], 99.50th=[  482], 99.90th=[  685], 99.95th=[  816],
     | 99.99th=[  996]
    bw (KB  /s): min=  401, max= 8211, per=100.00%, avg=2439.09, stdev=858.01
    lat (usec) : 10=21.97%, 20=0.32%, 50=17.77%, 100=6.48%, 250=6.00%
    lat (usec) : 500=2.87%, 750=1.52%, 1000=1.95%
    lat (msec) : 2=5.61%, 4=4.16%, 10=4.15%, 20=3.08%, 50=3.74%
    lat (msec) : 100=4.39%, 250=14.56%, 500=1.33%, 750=0.08%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=71.35%, sys=18.70%, ctx=1958482, majf=0, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=547974/w=137168/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3270.8MB, aggrb=9636KB/s, minb=9636KB/s, maxb=9636KB/s, mint=347563msec, maxt=347563msec
  WRITE: io=845123KB, aggrb=2431KB/s, minb=2431KB/s, maxb=2431KB/s, mint=347563msec, maxt=347563msec

PS: the "could not get it to work with 1 socket and 3 vCPUs" issue is a known limitation.
 
It is NexentaStor 4.0.4, the current one.
I've turned off ACPI just for testing purposes (it was slow before anyway).
I will try your config as soon as I get home from work :)

Thanks for your time for now :)
 
I don't know much about Nexenta, but I do know that ZFS usually requires a lot of RAM to work well. I would allocate at least 8 GB to it, plus another 1 GB for each TB of storage space.
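On the Proxmox side that is a quick change, roughly (vmid is a placeholder):
Code:
# give the storage VM a fixed 8 GB and disable ballooning so ZFS keeps its cache
qm set <vmid> --memory 8192 --balloon 0
For example, a VM backing a 4 TB pool would end up with roughly 8 GB + 4 GB = 12 GB by that rule of thumb.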
 
