Proxmox VE 3.4 released!

Hello,
I have read it, but is it possible to have the following with 2 nodes:
- ZFS RAID 1 for the Debian/Proxmox OS
- ZFS RAID 1 for the VMs, with DRBD?

Thank you for this very good job :p

ZFS has no notion of nodes; it has POOLS. In a pool you can create multiple ZFS file systems, each with its own settings. Or you can create a block volume (zvol), format it with another file system, and use it however you want.
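
A minimal sketch of how such a layout can be built (the pool name "tank" and the device names are only placeholders):

Create a mirrored pool from two disks:
# zpool create tank mirror /dev/sdb /dev/sdc

Create a regular ZFS file system inside the pool, with its own settings:
# zfs create -o compression=lz4 tank/vm

Create a block volume (zvol) and format it with another file system:
# zfs create -V 100G tank/crypt
# mkfs.ext4 /dev/zvol/tank/crypt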

An example from my main ZFS setup:

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
nmz_zfs 2.96T 619G 120G /media/nmz_zfs
nmz_zfs/Backups 37.6G 619G 37.6G /media/nmz_zfs/Backups
.....
nmz_zfs/crypt 516G 1.06T 48.2G -
nmz_zfs/vm 43.1G 619G 40.0K /media/nmz_vm
nmz_zfs/vm/images 43.1G 619G 2.52M /media/nmz_vm/images
nmz_zfs/vm/images/200 26.9G 619G 26.9G /media/nmz_vm/images/200
nmz_zfs/vm/images/203 16.2G 619G 16.2G /media/nmz_vm/images/203
zfs_mirror 836G 77.5G 37K /media/zfs_mirror
....
zfs_mirror/images 77.8G 77.5G 35K /media/zfs_mirror/images
zfs_mirror/images/201 58.9G 77.5G 47.3G /media/zfs_mirror/images/201
zfs_mirror/images/300 6.56G 77.5G 6.56G /media/zfs_mirror/images/300
zfs_mirror/images/301 12.3G 77.5G 12.3G /media/zfs_mirror/images/301
zfs_mirror/private 57.9G 77.5G 65K /media/zfs_mirror/private
zfs_mirror/private/100 406M 77.5G 406M /media/zfs_mirror/private/100
zfs_mirror/private/101 5.52G 77.5G 5.52G /media/zfs_mirror/private/101
zfs_mirror/private/102 11.0G 77.5G 11.0G /media/zfs_mirror/private/102
zfs_mirror/private/103 23.4G 77.5G 23.4G /media/zfs_mirror/private/103
zfs_mirror/private/104 541M 77.5G 541M /media/zfs_mirror/private/104
zfs_mirror/private/105 465M 77.5G 465M /media/zfs_mirror/private/105
zfs_mirror/private/106 9.73G 77.5G 9.73G /media/zfs_mirror/private/106
zfs_mirror/private/107 668M 77.5G 631M /media/zfs_mirror/private/107
zfs_mirror/private/108 359M 77.5G 359M /media/zfs_mirror/private/108
zfs_mirror/private/109 562M 77.5G 562M /media/zfs_mirror/private/109
zfs_mirror/private/110 849M 77.5G 587M /media/zfs_mirror/private/110
zfs_mirror/private/111 1.59G 77.5G 1.59G /media/zfs_mirror/private/111
zfs_mirror/private/112 347M 77.5G 347M /media/zfs_mirror/private/112
zfs_mirror/private/114 602M 77.5G 602M /media/zfs_mirror/private/114
zfs_mirror/private/120 366M 77.5G 366M /media/zfs_mirror/private/120
zfs_mirror/private/121 449M 77.5G 449M /media/zfs_mirror/private/121
zfs_mirror/private/122 567M 77.5G 567M /media/zfs_mirror/private/122
zfs_mirror/private/130 637M 77.5G 637M /media/zfs_mirror/private/130
zfs_mirror/template 3.47G 77.5G 3.47G /media/zfs_mirror/template

And as for a volume (zvol), here it is used with TrueCrypt:
# truecrypt -l
1: /dev/zvol/nmz_zfs/crypt /dev/mapper/truecrypt1 /media/truecrypt1

/dev/mapper/truecrypt1 on /media/truecrypt1 type ext2 (rw,relatime,errors=continue,user_xattr,acl)
 
Hello

From http://pve.proxmox.com/wiki/ZFS

If you are experimenting with an installation inside a VM, don't use Virtio for disks, since they are not supported by ZFS. Use IDE or SCSI instead.

Can someone explain this "experimenting with an installation inside a VM" part, please?

So, for new VMs on Proxmox 3.4 with ZFS, can one select Virtio for disks or not?


On a Proxmox 3.4 test server with a 2-disk ZFS mirror, I have created 2 VMs.

One with ZFS storage (Datacenter -> Add -> ZFS -> rpool, etc.) and one with local storage.

Both with the Virtio bus and raw disks (with ZFS storage, raw is the only option).

The local-storage VM must have its cache set to Write Back in order to work.

But both seem to work with the Virtio bus selected for their disks.

So my question is: can we use the Virtio bus with ZFS?


Some more notes:
Snapshots are available only with ZFS storage and raw disks (not with local storage).
Local-storage images are stored the usual way:


root@host:/var/lib/vz/images/1602# ls -l
-rw-r--r-- 1 root root 34359738368 Feb 22 15:00 vm-1602-disk-1.raw


But ZFS images are stored this way

lrwxrwxrwx 1 root root 10 Feb 22 14:44 swap -> ../../zd16
lrwxrwxrwx 1 root root 10 Feb 22 14:44 vm-1601-disk-1 -> ../../zd32
lrwxrwxrwx 1 root root 12 Feb 22 14:44 vm-1601-disk-1-part1 -> ../../zd32p1
lrwxrwxrwx 1 root root 12 Feb 22 14:44 vm-1601-disk-1-part2 -> ../../zd32p2
lrwxrwxrwx 1 root root 9 Feb 22 14:44 vm-1601-state-s1 -> ../../zd0


So, in order to move an image from one host to another, what is the proper way with ZFS-style images?
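
One approach that should work (a sketch only; the dataset, host, and VMID names are examples, and the target host is assumed to have an rpool as well) is to replicate the zvol with zfs send/receive and then move the VM config by hand:

# zfs snapshot rpool/vm-1601-disk-1@move
# zfs send rpool/vm-1601-disk-1@move | ssh otherhost zfs receive rpool/vm-1601-disk-1
# scp /etc/pve/qemu-server/1601.conf otherhost:/etc/pve/qemu-server/

(In a cluster, /etc/pve is shared, so the config would instead be moved between the /etc/pve/nodes/<node>/qemu-server/ directories.)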
 
Well, ZFS with the option of snapshots on raw disks sounds very good,
but unfortunately it seems (to me) that there are some performance issues.

I ran some dd tests to check write speed.
I installed a host with two 2 TB disks, Proxmox 3.4, ZFS mirror,
and created ZFS storage (Datacenter -> Add -> ZFS -> rpool, etc.).

Then I created 4 VMs with 4 CPUs and 8 GB RAM each:
VM1 BUS=IDE STORAGE=Local
VM2 BUS=IDE STORAGE=ZFS
VM3 BUS=Virtio STORAGE=Local
VM4 BUS=Virtio STORAGE=ZFS

Then I ran these tests
dd if=/dev/zero of=test bs=1M count=1024 conv=fdatasync
dd if=/dev/zero of=test bs=1024 count=1024 conv=fdatasync

on the host and in the VMs.


HOST server with ZFS 2-disk mirror
root@host:~# dd if=/dev/zero of=test bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.530615 s, 2.0 GB/s
root@h16:~# dd if=/dev/zero of=test bs=1024 count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0429473 s, 24.4 MB/s



VM IDE LOCAL RAW
[root@idelocalraw ~]# dd if=/dev/zero of=test bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.18615 s, 337 MB/s
[root@idelocalraw ~]# dd if=/dev/zero of=test bs=1024 count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0491059 s, 21.4 MB/s

VM IDE ZFS RAW
[root@idezfsraw ~]# dd if=/dev/zero of=test bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.2014 s, 206 MB/s
[root@idezfsraw ~]# dd if=/dev/zero of=test bs=1024 count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0750773 s, 14.0 MB/s
[root@idezfsraw ~]#


VM VIRTIO LOCAL RAW
[root@virtiolocalraw ~]# dd if=/dev/zero of=test bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.6096 s, 411 MB/s
[root@virtiolocalraw ~]# dd if=/dev/zero of=test bs=1024 count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0608047 s, 17.2 MB/s
[root@virtiolocalraw ~]#


VM VIRTIO ZFS RAW
[root@virtiozfsraw ~]# dd if=/dev/zero of=test bs=1M count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 12.2036 s, 88.0 MB/s !!!
[root@virtiozfsraw ~]# dd if=/dev/zero of=test bs=1024 count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0715242 s, 14.7 MB/s


Maybe a ZFS guru can provide us with some fine-tuning tricks.
 
"zfs set sync=disabled pool_name" speed up a lot ;)

As for ZIL I used Samsung SSD but ZFS do not work with TRIM command and its starts to eat SSD cells.
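
If someone wants to experiment with this, it can also be set per dataset instead of on the whole pool, and checked and reverted easily (the dataset name is just an example):

# zfs get sync rpool/vm-1601-disk-1
# zfs set sync=disabled rpool/vm-1601-disk-1

and revert to the default with:
# zfs set sync=standard rpool/vm-1601-disk-1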
 
Setting sync=disabled is dangerous, since it tells ZFS to acknowledge to the VM that data has been persisted to disk as soon as the data is in memory, before it has actually been written to persistent storage. This could cause serious data loss!

Regarding TRIM: to compensate for the missing discard implementation in ZFS, you are advised to leave a certain amount of the SSD un-partitioned for the disk controller to use as spare cells.
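
A rough sketch of that over-provisioning idea (device name and sizes are made up): partition only a small part of the SSD for the log device and leave the rest unpartitioned:

# sgdisk -n 1:0:+8G /dev/sdd
# zpool add rpool log /dev/sdd1

The remaining unpartitioned space is never written by ZFS, so the controller can use it for wear levelling and spare cells.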
 
The problem is not TRIM but the ZIL behaviour of ZFS. For example, ZFS flushes data roughly every 5 seconds. If your ZIL writes to an SSD, it starts writing from the beginning, and after each flush the ZIL starts from the beginning again; it does not write continuously. For enterprise/production use you have to consider the SSD short-lived, except for something like ZeusIOPS.
 
After upgrading all my hypervisors to version 3.4 I am not able to do any migration, because the virtual server becomes unavailable and in some cases freezes completely! At first I thought it had something to do with the ARP time-out, but this is not the case. Before the upgrade, migration was working fine.

It sounds to me as if the “I'm over here” stage (a broadcast Ethernet packet announcing the new location of the NICs) in the QEMU migration process is not being triggered. Could this be related to the E1000/disconnected functionality changes in this release? Any help is appreciated.
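
As a possible workaround or diagnostic (not a fix), one could send a gratuitous ARP from inside the guest right after migration, assuming iputils arping is available (the interface and address are examples):

# arping -U -c 3 -I eth0 192.168.1.50

If the network recovers immediately after that, it would point to the announce stage of the migration rather than to the network itself.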
 
Nice to see these updates!
But I have a problem; has anyone else seen this too?
I use write-back cache (in Proxmox) and I now get 2x more IO delay. Before, the system ran with about 10% IO delay; then I installed the updates, restarted, started the VPS, and now the same applications with the same usage give 20% IO delay. :/
 