QEMU 1.4, Ceph RBD support (pvetest)

martin

Proxmox Staff Member
We have just moved another batch of packages to our pvetest repository (on the road to Proxmox VE 2.3), including the latest stable QEMU 1.4 and GUI support for storing KVM VM disks on a Ceph RADOS Block Device (RBD) storage system.

Thanks to the new backup and restore implementation, KVM live backups of running virtual machines on Ceph RBD are no longer a problem - a quite unique feature and a big step forward.
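
For reference, an RBD storage definition in /etc/pve/storage.cfg ends up looking roughly like the sketch below once added via the GUI (storage ID, monitor addresses and pool name are placeholders; the keyring is expected under /etc/pve/priv/ceph/<storage-id>.keyring):

rbd: my-ceph-rbd
        monhost 192.168.0.10:6789;192.168.0.11:6789;192.168.0.12:6789
        pool rbd
        username admin
        content images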

Other small improvements and changes

  • qcow2 as default storage format, cache=none (previously raw)
  • KVM64 as default CPU type (previously qemu64)
  • e1000 as default NIC (previously rtl8139)
  • added omping to repo (for testing multicast between nodes)
  • task history per VM
  • enable/disable the tablet device for a VM via the GUI without stopping/starting the VM (you can use vmmouse instead for lower CPU usage; it works on modern Linux and on all Windows VMs as long as you install the vmmouse drivers - see the CLI sketch after this list)
  • Node Summary: added "KSM sharing" and "CPU Socket count"
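
The tablet toggle mentioned above can also be done from the CLI; a sketch (100 is just an example VMID):

qm set 100 -tablet 0   # disable the USB tablet device, e.g. when using vmmouse
qm set 100 -tablet 1   # re-enable it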
Everybody is encouraged to test and give feedback!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
I updated the system and now have a problem starting VMs.

TASK ERROR: Undefined subroutine &PVE::Storage::volume_is_base called at /usr/share/perl5/PVE/QemuServer.pm line 4481.

Do you know anything about this?

Thanks!
 
post your 'pveversion -v' and 'qm config VMID'
 
post your 'pveversion -v' and 'qm config VMID'

I confirm:

the problem is line 4481 of QemuServer.pm:
if (PVE::Storage::volume_is_base($storecfg, $volid)){

but there is no "volume_is_base" function in Storage.pm.

Thanks, Fabrizio


root@nodo01:/usr/share/perl5/PVE# pveversion -v
pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-13
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-2
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1

root@nodo01:/usr/share/perl5/PVE# qm config 110
bootdisk: sata0
cores: 1
ide2: none,media=cdrom
memory: 1024
name: Ceph-1
net0: e1000=22:80:3E:E7:E6:70,bridge=vmbr0
onboot: 1
ostype: l26
sata0: local:110/vm-110-disk-1.qcow2
sata1: Nodo1_Ceph_1:vm-110-disk-1,cache=unsafe,size=920G
sata2: Nodo1_Ceph_2:vm-110-disk-1,cache=unsafe,size=920G
sockets: 1
unused0: CephCluster:vm-110-disk-1
 
# pveversion -v
pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-13
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-2
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1


root@cetamox-02:~# qm config 102
balloon: 512
bootdisk: ide0
cores: 4
ide0: datastore01:vm-102-disk-1,size=11G
ide2: none,media=cdrom
memory: 1024
name: proxmoxtest03
net0: rtl8139=00:50:56:00:00:62,bridge=vmbr0,tag=4
ostype: l26
sockets: 1


Thanks
 
I added this to Storage.pm:

sub volume_is_base {
    my ($cfg, $volid) = @_;

    my ($sid, $volname) = parse_volume_id($volid, 1);
    return 0 if !$sid;

    if (my $scfg = $cfg->{ids}->{$sid}) {
        # ask the storage plugin whether this volume is a base image
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
        my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
            $plugin->parse_volname($volname);
        return $isBase ? 1 : 0;
    } else {
        # stale volid with undefined storage - so we can just guess
        if ($volid =~ m/base-/) {
            return 1;
        }
    }

    return 0;
}

I don't know if it is correct, but now the VM starts.
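
If anyone wants to sanity-check a manual edit like this, something along these lines should do (a sketch, assuming the standard install paths):

perl -c /usr/share/perl5/PVE/Storage.pm    # syntax/compile check of the edited module
/etc/init.d/pvedaemon restart              # so the running daemon picks up the change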




- - - Updated - - -

Sorry... I added it exactly as it appears in this file:

https://github.com/proxmox/pve-storage/blob/master/PVE/Storage.pm
 
There is another problem.

I have upgraded two hosts of the cluster, migrating all the VMs to the last host.
Now I am trying to migrate the VMs back so I can upgrade the last host, but I get an error:

Feb 25 17:30:30 starting migration of VM 100 to node 'nodo02' (172.16.20.32)
Feb 25 17:30:30 copying disk images
Feb 25 17:30:30 starting VM 100 on remote node 'nodo02'
Feb 25 17:30:31 starting migration tunnel
Feb 25 17:30:32 starting online/live migration on port 60000
Feb 25 17:30:32 migrate_set_speed: 8589934592
Feb 25 17:30:32 migrate_set_downtime: 0.1
Feb 25 17:30:34 ERROR: online migrate failure - aborting
Feb 25 17:30:34 aborting phase 2 - cleanup resources
Feb 25 17:30:34 migrate_cancel
Feb 25 17:30:34 ERROR: migration finished with problems (duration 00:00:05)
TASK ERROR: migration problems

 
There is another problem: live migration from an upgraded server to another one that is not yet upgraded (still running the previous 2.3 test packages) fails with the same error as above.
 
I have an update:

The first error (the missing function in Storage.pm) was my mistake.
I had upgraded with "apt-get upgrade" and not with "apt-get dist-upgrade", so some libraries were not upgraded.
Now I am upgrading all the nodes and will try to resolve the live-migration issue.
Sorry....
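
For anyone else hitting this, the sequence that also pulls in new or changed dependencies is the usual one:

apt-get update
apt-get dist-upgrade   # unlike plain "apt-get upgrade", this also installs new dependencies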
 
Other small improvements and changes

  • KVM64 as default CPU type (previously qemu64)

So KVM64 is now the recommended CPU type for VMs?
Should I change the CPU type on my VMs from qemu64 to kvm64?
I think this could cause activation issues on Windows VMs...
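
If you do decide to try it, switching an existing VM should only take one setting; a sketch (102 is just an example VMID, and Windows may indeed ask for re-activation after a CPU model change):

qm set 102 -cpu kvm64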
 
I get a problem with offline KVM migration after the last update.
The problem is this added code:
...
@@ -246,11 +246,11 @@
die "can't migrate '$volid' - storagy type '$scfg->{type}' not supported\n"
if $scfg->{type} ne 'dir';

+ #if file, check if a backing file exist
+ if(($scfg->{type} eq 'dir') && (!$sharedvm)){
+ my (undef, undef, undef, $parent) = PVE::Storage::volume_size_info($self->{storecfg}, $volid, 1);
+ die "can't migrate '$volid' as it's a clone of '$parent'";
+ }
}
in file /usr/share/perl5/PVE/QemuMigrate.pm

With this added check, migration fails with the following message:
qm migrate 101 proxmox0
Feb 26 15:09:35 starting migration of VM 101 to node 'proxmox0' (192.168.0.140)
Feb 26 15:09:35 copying disk images
Use of uninitialized value $parent in concatenation (.) or string at /usr/share/perl5/PVE/QemuMigrate.pm line 252.
Feb 26 15:09:35 ERROR: Failed to sync data - can't migrate 'local:101/vm-101-disk-1.qcow2' as it's a clone of '' at /usr/share/perl5/PVE/QemuMigrate.pm line 252.
Feb 26 15:09:35 aborting phase 1 - cleanup resources
Feb 26 15:09:35 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate 'local:101/vm-101-disk-1.qcow2' as it's a clone of '' at /usr/share/perl5/PVE/QemuMigrate.pm line 252.
migration aborted
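
As far as I can tell, the die fires even when volume_size_info reports no backing file at all ($parent is undef). A possible guard, just as a sketch of what I mean and not an official patch, would be to refuse the migration only when a parent really is reported:

    # sketch only: die only if the qcow2 actually has a backing file
    if (($scfg->{type} eq 'dir') && !$sharedvm) {
        my (undef, undef, undef, $parent) =
            PVE::Storage::volume_size_info($self->{storecfg}, $volid, 1);
        die "can't migrate '$volid' as it's a clone of '$parent'\n" if defined($parent);
    }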

pveversion -v
pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-13
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-3
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1
 
What kind of Ceph RBD support is included, other than what was already in the stable release?

You can add RBD storage via the GUI, just like NFS or iSCSI. See "Datacenter/Storage/Add: RBD".

modprobe rbd

still gives a "module not found" error?

The rbd kernel module is not needed for our use case here (storing VM disks on RBD).

So it's still not possible to map an RBD device on the host.
This would be awesome to run OpenVZ on...

OpenVZ on Proxmox VE does not support block storage, so how do you plan to use an RBD device?
 
OpenVZ on Proxmox VE does not support block storage, so how do you plan to use an RBD device?

By mapping the RBD device, e.g.

rbd map rbdname

then formatting it and mounting it to a directory, which you can then add to Proxmox and run your containers on.
I am doing the same in a test right now, except that I have mapped the RBD on a proxy host and export the device via iSCSI, then add it to Proxmox and mount it there - a bit of a poor solution, but it has been working excellently in the test so far.
So adding the rbd block module would be great, as it would save a lot of steps.

See http://ceph.com/docs/master/start/quick-rbd/ for the same approach.
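
Roughly the steps I mean, as a sketch (pool, image name, size and mount point are placeholders, and it needs the rbd kernel module on the host):

rbd create containers --size 102400            # 100 GB image in the default 'rbd' pool
rbd map containers                             # shows up as /dev/rbd0 (and /dev/rbd/rbd/containers)
mkfs.ext4 /dev/rbd/rbd/containers
mkdir -p /mnt/rbd-containers
mount /dev/rbd/rbd/containers /mnt/rbd-containers
# then add /mnt/rbd-containers as a "Directory" storage in Proxmox and put the containers on it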
 
Reload the page and clear the browser cache; it should be there.
 
