Search results

  1. PVE 2.3test - Unresolved issues

    Hello there. I still have some problems with the latest pvetest repository: - I am using a locally mounted shared directory as an ISO repository: if I attach an ISO file to a VM, it doesn't start because it can't access the file. No problem if I use local storage. - I can't live migrate VMs...
  2. QEMU 1.4, Ceph RBD support (pvetest)

    I have an update: the first error (the missing function in Storage.pm) was my mistake. I had upgraded with "apt-get upgrade" instead of "apt-get dist-upgrade", so some libraries were not upgraded. Now I am upgrading all the nodes and will try to resolve the live-migration issue. Sorry....
  3. QEMU 1.4, Ceph RBD support (pvetest)

    There is another problem: live migration from an updated server to another one (not yet updated, still on the previous 2.3test) fails with: Feb 25 17:30:30 starting migration of VM 100 to node 'nodo02' (172.16.20.32) Feb 25 17:30:30 copying disk images Feb 25 17:30:30 starting VM 100 on remote...
  4. QEMU 1.4, Ceph RBD support (pvetest)

    There is another problem. I have upgraded two hosts of the cluster, migrating all the VMs to the remaining host. Now I am trying to migrate the VMs back so I can upgrade that last host, but I get an error: Feb 25 17:30:30 starting migration of VM 100 to node 'nodo02' (172.16.20.32) Feb 25 17:30:30...
  5. QEMU 1.4, Ceph RBD support (pvetest)

    I added this to Storage.pm (a rough reconstruction of the helper appears after this list): sub volume_is_base { my ($cfg, $volid) = @_; my ($sid, $volname) = parse_volume_id($volid, 1); return 0 if !$sid; if (my $scfg = $cfg->{ids}->{$sid}) { my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); my ($vtype, $name, $vmid...
  6. QEMU 1.4, Ceph RBD support (pvetest)

    I confirm: the problem is line 4481 of QemuServer.pm: if (PVE::Storage::volume_is_base($storecfg, $volid)){ but in Storage.pm there is no "volume_is_base" function. Thanks, Fabrizio root@nodo01:/usr/share/perl5/PVE# pveversion -v pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)...
  7. PVE 2.3 Ceph multiple disk images

    I think you have found the problem :) Thanks, Fabrizio
  8. PVE 2.3 Ceph multiple disk images

    I think the problem is the rbd image format (a small format-check helper is sketched after this list). It works with the virtual machine whose disk image is "format 1" (the default format); it doesn't work with the virtual machine whose disk image is "format 2" (which supports cloning). I created the first machine with PVE 2.2...
  9. PVE 2.3 Ceph multiple disk images

    [root@ceph-1 ~]# rbd ls -l
    NAME           SIZE   PARENT  FMT  PROT  LOCK
    vm-102-disk-1    120G          1
    vm-102-disk-2   1024G          1
    vm-104-disk-1  81920M          1
    vm-105-disk-1  61440M          1
    vm-110-disk-1   1024G          1
    vm-104-disk-2  32768M          2
    vm-104-disk-3  36864M ...
  10. PVE 2.3 Ceph multiple disk images

    I also noticed these differences in the disk image names:
    [root@ceph-1 ~]# rados --pool=rbd ls | grep vm-10
    vm-104-disk-1.rbd
    vm-102-disk-1.rbd
    rbd_id.vm-104-disk-2
    vm-102-disk-2.rbd
    rbd_id.vm-106-disk-1
    rbd_id.vm-104-disk-3
    rbd_id.vm-108-disk-1
    rbd_id.vm-107-disk-1
    vm-105-disk-1.rbd
  11. PVE 2.3 Ceph multiple disk images

    Hello. Thanks for your reply. This is my configuration: - 3 x Ceph nodes (as KVM virtual machines, one per host, using all local space on separate disks); this is the version: [root@ceph-1 ~]# rpm -q ceph ceph-0.56.2-0.el6.x86_64 - 3 x Proxmox hosts, with this version...
  12. PVE 2.3 Ceph multiple disk images

    Hello. I am testing the latest (yesterday's) pvetest software. Using the Ceph storage backend, everything works fine (except backup, of course); but if I add a second hard drive image on the Ceph storage, I get an error (image vm-XXX-disk-1 already exists). Regards, Fabrizio Cuseo
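
The helper referenced in results 5 and 6 is truncated in the previews above. As a rough reconstruction (an assumption, not the exact upstream code), a volume_is_base helper along the lines of the quoted fragment might look like the sketch below, following the usual PVE::Storage plugin pattern of that release:

    # Rough reconstruction of the volume_is_base helper quoted in result 5.
    # parse_volume_id, PVE::Storage::Plugin and parse_volname belong to the
    # PVE perl libraries; the exact upstream body may differ.
    sub volume_is_base {
        my ($cfg, $volid) = @_;

        my ($sid, $volname) = parse_volume_id($volid, 1);
        return 0 if !$sid;

        if (my $scfg = $cfg->{ids}->{$sid}) {
            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
            my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
                $plugin->parse_volname($volname);
            return $isBase ? 1 : 0;
        }

        return 0;
    }

With such a helper present in Storage.pm, the check quoted in result 6 (line 4481 of QemuServer.pm, if (PVE::Storage::volume_is_base($storecfg, $volid))) no longer aborts the migration with an undefined-function error.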
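
Results 8-10 point at the RBD image format: format 1 images appear in the pool as vm-XXX-disk-N.rbd header objects, while format 2 (clone-capable) images appear as rbd_id.* objects. The hypothetical helper below (not part of PVE or Ceph, just a diagnostic sketch) lists each image in a pool with its format by parsing rbd info output; it assumes the rbd CLI of that Ceph generation prints a "format: N" line.

    #!/usr/bin/perl
    # Hypothetical diagnostic: list every RBD image in a pool together with its
    # on-disk format, so format 1 and format 2 (clone-capable) images can be
    # told apart quickly. Requires the rbd CLI in PATH.
    use strict;
    use warnings;

    my $pool = shift // 'rbd';

    # 'rbd ls' prints one image name per line.
    my @images = split /\n/, qx{rbd ls --pool $pool};

    for my $img (@images) {
        # 'rbd info' output of this era contains a "format: N" line.
        my $info = qx{rbd info --pool $pool $img};
        my ($fmt) = $info =~ /format:\s*(\d+)/;
        printf "%-20s format %s\n", $img, defined $fmt ? $fmt : 'unknown';
    }

Run on one of the Ceph nodes (for example: perl rbd-formats.pl rbd, where rbd-formats.pl is only an illustrative file name); images reported as format 2 are the ones that show up as rbd_id.* objects in the rados listing of result 10, e.g. vm-104-disk-2.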