pvetest updates and RC ISO installer

Also another thing: if you use disks with different sector sizes in a pool, ashift must follow the lowest common denominator, since ashift is applied to the pool and not to the individual disks. Once the pool is created you cannot change the value of ashift, so choose it wisely.
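For reference, a minimal sketch of setting ashift explicitly at creation time and checking what the pool actually got (the pool name "tank" and the device names are just placeholders):
Code:
# force 4K-sector alignment when creating the pool
zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY
# verify the ashift the pool was created with (reads the pool config from the cache file)
zdb -C tank | grep ashift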

This method is seriously flawed, since many disks misreport their sector size for compatibility reasons. (And I also think you meant to write smallest common multiple.) See this page with the array of misreporting disks: https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c#L108
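If you want to see what a disk reports to the kernel (the device name is just an example), something like this shows it - keeping in mind that these reported values are exactly what the misreporting drives lie about, which is why that whitelist exists:
Code:
# logical and physical sector size as reported by the drive/kernel
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size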

As has already been noted, some amount of storage space is lost by always using ashift=12 (or even 13 for some SSDs nowadays), but it depends on the usage pattern, the average number and size of files, etc. There are also provisions to make this parameter tunable per vdev, not only per pool: http://open-zfs.org/wiki/Performance_tuning#ashift
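With a ZoL version that supports the per-vdev override (newer than what ships here, I think), it would look roughly like this - pool and device names are placeholders:
Code:
# give a newly added vdev its own, larger ashift (e.g. for 8K-page SSDs)
zpool add -o ashift=13 tank mirror /dev/sdE /dev/sdF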

I think it would be a good idea to make this configurable (for the entire pool) at installation time in the PVE installer.
 
Thanks. It's pretty much unusable for OVZ then. Maybe it's fine through NFS, or even locally. I might check that sometime.
 
It's not a problem to create a ZFS volume and format it as ext2. I just use the server for my own purposes, so I don't need the missing functions.
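A rough sketch of that approach, in case it helps others (the zvol name, size and mount point are just examples):
Code:
# create a 100G zvol, format it with ext2 and mount it for the containers
zfs create -V 100G rpool/vzdata
mkfs.ext2 /dev/zvol/rpool/vzdata
mount /dev/zvol/rpool/vzdata /var/lib/vz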
 
I just want to confirm what Alexandre said:

- ceph: OSD daemons are not always starting at boot (maybe related to /etc/pve and the pve-cluster service?)

I have to restart them several times, but not all OSDs.

(screenshot attached: 6d80f7d0_m.png)
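For reference, restarting single OSDs by hand on this ceph version should look roughly like this (the OSD IDs are just examples):
Code:
# start only the OSDs that did not come up at boot
service ceph start osd.0
service ceph start osd.3
# check which OSDs are up/down
ceph osd tree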
 
I need to check that tomorrow.

(Please try to respond on the pve-devel mailing list; it's difficult to follow the discussion with cross-posting between the mailing list and the forum.)
 
Also note that we have downgraded pve-qemu-kvm from 2.2 to 2.1, because live migration was unstable on some hosts. So please downgrade that package manually (using wget ... and dpkg -i ...) if you already use the 2.2 version from pvetest.

No need to do that - you can downgrade the package with:
Code:
apt-get install pve-qemu-kvm=2.1-12
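If you want to make sure apt does not pull 2.2 back in on the next upgrade, putting the package on hold should work (just a suggestion, not an official recommendation):
Code:
# keep pve-qemu-kvm at the downgraded version until the hold is released
echo "pve-qemu-kvm hold" | dpkg --set-selections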
 
I've tried RC1 and RC2, creating a VM to test the Proxmox installation there. On its single HD I tried to use ZFS as RAID0. The strange thing is that on the first try there is an error and the installation aborts. If I try again with the same disk (hitting F12 to select the CD-ROM as boot device, since the HD now has a boot sector), the installation is successful.
When I first try I get this:
(after "create partitions" status message) I get: "unable to create zfs root pool'
With Ctrl+alt+F12 I see:
[...]
CRITICAL: **: unable to create '/root/.cache/dconf'; dconf will not work properly. at /usr/lib/perl5/Glib/Object/Introspection.pm line 57.
[...]
cannot create 'rpool': no such pool or dataset
unable to create zfs root pool
umount: /rpool/ROOT/pve-1/var/lib/vz: not found
and the same for ...pve-1/tmp, proc and sys
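In case it is useful, from the installer debug shell something like this could show whether a leftover pool or label is in the way (the device name is just an example, not necessarily the cause here):
Code:
zpool import                     # list pools/labels left over from a previous attempt
zpool labelclear -f /dev/vda2    # clear a stale ZFS label, if one is found
dmesg | tail                     # look for ZFS/kernel errors around pool creation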

Best regards
 
How much RAM do you use in your VM? Increase it to 4 GB and try again.
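To change the assigned memory without going through the GUI, something like this should do it (using VMID 802 from this thread as the example):
Code:
qm set 802 -memory 4096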
 
The first time 2 GB, then 4 GB, and now I've tested again with 6 GB - same problem. Btw, it's a quick test; didn't you have time to try it, or does it work fine there?
Here is my VM config:
Code:
# cat /etc/pve/nodes/proxmox/qemu-server/802.conf
bootdisk: virtio0
cores: 2
ide2: hd2sata:iso/proxmox-ve_3.3-c4c740ea-18-rc2.iso,media=cdrom
memory: 6144
name: ZFSTEST
net0: virtio=2E:BE:82:CB:88:4A,bridge=vmbr0
ostype: l26
smbios1: uuid=e6039b99-1040-4b7d-9b81-b27a2639316c
sockets: 1
virtio0: hd2sata:802/vm-802-disk-1.qcow2,size=32G

and the host is:
Code:
# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-34-pve: 2.6.32-139
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-2
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Best regards
 
