New Kernel, stable KVM 1.1, sheepdog and ceph (pvetest)

martin
Proxmox Staff Member
We just released a bunch of new packages to our test repository (pvetest), including the latest stable OpenVZ kernel (042stab057.1), stable KVM 1.1, a new cluster package, bug fixes, and code cleanups.

Additionally, we added the first packages to support two distributed storage technologies - Ceph (client) and Sheepdog. Both technologies look great, but note that they are not yet ready for production use. Sheepdog support is not yet complete, but we are working on it.

Important note for HA setups:
the redhat-cluster-pve package provides new default configuration files, and you need to accept the new ones - answer with Y here. As soon as the installation is finished, you need to enable fencing again; see http://pve.proxmox.com/wiki/Fencing#Enable_fencing_on_all_nodes
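For reference, re-enabling fencing per the wiki page linked above looks roughly like the sketch below (check the wiki for the authoritative steps; the exact contents of /etc/default/redhat-cluster-pve on your node may differ):

```shell
# uncomment/set FENCE_JOIN in /etc/default/redhat-cluster-pve
# so the node joins the fence domain again (see the wiki page above)
sed -i 's/^# *FENCE_JOIN=.*/FENCE_JOIN="yes"/' /etc/default/redhat-cluster-pve

# then join the fence domain on the running node
fence_tool join
```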

Everybody is encouraged to test and give feedback!

After the upgrade (aptitude update && aptitude full-upgrade) and a reboot, your 'pveversion -v' should look like this:

Code:
pveversion -v

pve-manager: 2.1-11 (pve-manager/2.1/4f19868f)
running kernel: 2.6.32-13-pve
proxmox-ve-2.6.32: 2.1-71
pve-kernel-2.6.32-13-pve: 2.6.32-71
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-42
pve-firmware: 1.0-17
libpve-common-perl: 1.0-28
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-20
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-4
ksm-control-daemon: 1.1-1
 
Hi,

I ran
root@server:~# apt-get dist-upgrade
and
root@server:~# aptitude update && aptitude full-upgrade

but it's still the same... (only the pve-kernel package got installed; I see 2 versions of it)

Code:
root@server:~# pveversion -v
pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.1-68
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-12-pve: 2.6.32-68
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-16
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

Code:
root@server:~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.


The last few lines of output after # aptitude update && aptitude full-upgrade:

Code:
Preparing to replace ssh 1:5.5p1-6+squeeze1 (using .../ssh_1%3a5.5p1-6+squeeze2_all.deb) ...
Unpacking replacement ssh ...
Processing triggers for man-db ...
Processing triggers for pve-manager ...
Restarting PVE Daemon: pvedaemon.
Restarting PVE Status Daemon: pvestatd.
Restarting web server: apache2.
Setting up libc6-i386 (2.11.3-3) ...
Setting up libssl0.9.8 (0.9.8o-4squeeze13) ...
Setting up procps (1:3.2.8-9squeeze1) ...
Setting kernel variables ...done.
Setting up at (3.1.12-1+squeeze1) ...
Starting deferred execution scheduler: atd.
Setting up libxml2 (2.7.8.dfsg-2+squeeze4) ...
Setting up libisc62 (1:9.7.3.dfsg-1~squeeze5) ...
Setting up libdns69 (1:9.7.3.dfsg-1~squeeze5) ...
Setting up libisccc60 (1:9.7.3.dfsg-1~squeeze5) ...
Setting up libisccfg62 (1:9.7.3.dfsg-1~squeeze5) ...
Setting up libbind9-60 (1:9.7.3.dfsg-1~squeeze5) ...
Setting up liblwres60 (1:9.7.3.dfsg-1~squeeze5) ...
Setting up bind9-host (1:9.7.3.dfsg-1~squeeze5) ...
Setting up dnsutils (1:9.7.3.dfsg-1~squeeze5) ...
Setting up libmagic1 (5.04-5+squeeze2) ...
Setting up file (5.04-5+squeeze2) ...
Setting up libtasn1-3 (2.7-1+squeeze+1) ...
Setting up libgnutls26 (2.8.6-1+squeeze2) ...
Setting up locales (2.11.3-3) ...
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
Setting up openssh-client (1:5.5p1-6+squeeze2) ...
Setting up openssh-server (1:5.5p1-6+squeeze2) ...
Restarting OpenBSD Secure Shell server: sshd.
Setting up python-minimal (2.6.6-3+squeeze7) ...
Setting up python (2.6.6-3+squeeze7) ...
Setting up libapr1 (1.4.2-6+squeeze4) ...
Setting up apache2.2-bin (2.2.16-6+squeeze7) ...
Setting up apache2-utils (2.2.16-6+squeeze7) ...
Setting up apache2.2-common (2.2.16-6+squeeze7) ...
Installing new version of config file /etc/apache2/sites-available/default-ssl ...
Installing new version of config file /etc/apache2/sites-available/default ...
Setting up apache2-mpm-prefork (2.2.16-6+squeeze7) ...
Starting web server: apache2httpd (pid 315481) already running
.
Setting up apache2 (2.2.16-6+squeeze7) ...
Setting up libcurl3-gnutls (7.21.0-2.1+squeeze2) ...
Setting up libfreetype6 (2.4.2-2.1+squeeze4) ...
Setting up libnss3-1d (3.12.8-1+squeeze5) ...
Setting up libpng12-0 (1.2.44-1+squeeze4) ...
Setting up libxml2-utils (2.7.8.dfsg-2+squeeze4) ...
Setting up openssl (0.9.8o-4squeeze13) ...
Setting up pve-kernel-2.6.32-12-pve (2.6.32-68) ...
update-initramfs: Generating /boot/initrd.img-2.6.32-12-pve
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-2.6.32-12-pve
Found initrd image: /boot/initrd.img-2.6.32-12-pve
Found linux image: /boot/vmlinuz-2.6.32-11-pve
Found initrd image: /boot/initrd.img-2.6.32-11-pve
Found memtest86+ image: /memtest86+.bin
Found memtest86+ multiboot image: /memtest86+_multiboot.bin
done
Setting up pve-firmware (1.0-16) ...
Setting up proxmox-ve-2.6.32 (2.1-68) ...
installing proxmox release key: OK
Setting up samba-common (2:3.5.6~dfsg-3squeeze8) ...
Setting up ssh (1:5.5p1-6+squeeze2) ...

Current status: 0 updates [-46].

Any advice?
Thanks
 
Did you enable pvetest in your sources.list?
 
in /etc/apt/sources.list

Code:
deb http://ftp.at.debian.org/debian squeeze main contrib


# PVE packages provided by proxmox.com
# deb http://download.proxmox.com/debian squeeze pve


# PVE test packages provided by proxmox.com
deb http://download.proxmox.com/debian squeeze pvetest


# security updates
deb http://security.debian.org/ squeeze/updates main contrib

http://forum.proxmox.com/threads/9651-New-2-6-32-Kernel-for-Proxmox-VE-2-1-(pvetest)
 
Thanks ebiss

Upgraded :)
 
You will need to read the docs from their project pages (Sheepdog, Ceph) and our source code. Currently there is no user documentation, as this is at an early stage.
 
Hi Dietmar,
Have you played with Proxmox and Sheepdog? Can you give some feedback?

PS: I can't test it myself because I don't have any hardware to test on at the moment.
 
Can someone clarify how Sheepdog works?

I see that it allows HA without centralized storage, and I understand the desire to remove that single point of failure, but if I'm understanding correctly, it is in fact replicating the VMs across the nodes. Won't that be an exceptionally expensive way to store large amounts of data (particularly as the HA cluster grows)?

I noticed the wiki mentioned that it decides where to store things via a hash table data structure. Does this mean it is actually acting more like a striped RAID array across the hosts? If so, wouldn't taking a node down for service cause the HA cluster to start "rebuilding" the data (effectively orphaning the node in service) and/or leave the system in a vulnerable state should another node go down?

Sorry for all the questions, this sheepdog system sounds very interesting.
 
This is the wrong thread (please open a separate thread or, better, discuss on the Sheepdog user list).
 
Today I installed 2.1 and upgraded to pvetest, so I have KVM 1.1.
So far I've had these issues:
1) I was unable to add LVM storage using the web interface. I created a "sas-data" volume group, selected Storage, Add, LVM, entered "sas" as the name and "Existing LVM groups" as base storage, but the volume group drop-down list is empty and doesn't expand. I added it manually in /etc/pve/storage.cfg:
Code:
...
lvm: sas
        vgname sas-data
        content images

and I've been able to use it when I created a KVM VM. BTW, I also tried "sasdata" instead of "sas-data", but had the same problem.
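For reference, the volume group itself had been created beforehand with something like the following (the device path is just an example, not the actual disk from the report):

```shell
# initialize the disk as an LVM physical volume, then create
# the "sas-data" volume group on it (/dev/sdb is an example device)
pvcreate /dev/sdb
vgcreate sas-data /dev/sdb
```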

2) This is a long-standing issue - I don't know if you can fix it or if we just have to adapt: adding a DVD device pointing to an ISO with spaces in the name does not work (the CD-ROM is not shown in the web interface).
In particular I was adding: Windows server 2003 STD 64Bit E-Open CD1.iso
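As a stopgap until spaces are handled, renaming the ISO to remove them should work around it. A sketch (using a temp directory as a stand-in for the usual /var/lib/vz/template/iso, so it can be tried safely):

```shell
# stand-in for /var/lib/vz/template/iso
ISO_DIR=$(mktemp -d)
touch "$ISO_DIR/Windows server 2003 STD 64Bit E-Open CD1.iso"

# replace the spaces in the filename with underscores
(
  cd "$ISO_DIR"
  mv "Windows server 2003 STD 64Bit E-Open CD1.iso" \
     "Windows_server_2003_STD_64Bit_E-Open_CD1.iso"
)
ls "$ISO_DIR"
```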

I'm reading about problems with the Intel e1000 interface; I have to cross my fingers because both of my NICs are that brand/model...

Code:
root@proxmox:/var/lib/vz/template/iso# pveversion -v
pve-manager: 2.1-12 (pve-manager/2.1/be112d89)
running kernel: 2.6.32-13-pve
proxmox-ve-2.6.32: 2.1-71
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-13-pve: 2.6.32-71
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-45
pve-firmware: 1.0-17
libpve-common-perl: 1.0-28
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-23
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-6
ksm-control-daemon: 1.1-1

Thanks in advance, and I hope you can fix these issues before release.
 
