Proxmox VE 2.3 released!

Thanks for the new backup feature. One small question: do you plan to make it possible to browse backup files, so we would not need to restore the whole backup just to take certain files?

No, this is not possible (by design).
 
Am I understanding this wrong, or do you mean it is possible via the command line?

No, you cannot extract single files (there are no files in the backup).

You need to restore the full backup; then you can extract the files you need.
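
For anyone looking for a workaround, here is a rough sketch of the restore-and-extract approach. The VMID 999, the archive name, the image path and the mount point are just examples, and it assumes a qcow2 image with a mountable first partition:

# restore the backup into a spare, unused VMID (do not start it)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2013_03_01-01_00_00.vma.lzo 999

# expose the restored image as a block device
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /var/lib/vz/images/999/vm-999-disk-1.qcow2

# mount the first partition read-only and copy out the files you need
mount -o ro /dev/nbd0p1 /mnt
cp -a /mnt/etc/fstab /root/recovered-fstab

# clean up
umount /mnt
qemu-nbd --disconnect /dev/nbd0
qm destroy 999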
 
Button "Take snapshot" is not working

"snapshot feature is not available at /usr/share/perl5/PVE/QemuServer.pm line 4107."

pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-17
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1
 
Button "Take snapshot" is not working

"snapshot feature is not available at /usr/share/perl5/PVE/QemuServer.pm line 4107."


Live snapshots are only available for qcow2, RBD and Sheepdog storage (not for LVM volumes or raw files).
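
If the disk is currently a raw file on a directory or NFS storage, one way to get live snapshot support is to convert it to qcow2 and point the VM config at the new file. A minimal sketch, assuming VMID 101, storage 'local' and a virtio0 disk (paths and names are examples; shut the VM down first):

# check the current disk and its on-disk format
qm config 101 | grep virtio0
qemu-img info /var/lib/vz/images/101/vm-101-disk-1.raw

# convert raw -> qcow2 (needs enough free space for the copy)
qemu-img convert -f raw -O qcow2 /var/lib/vz/images/101/vm-101-disk-1.raw /var/lib/vz/images/101/vm-101-disk-1.qcow2

# attach the qcow2 file instead of the raw one, then boot and verify
qm set 101 -virtio0 local:101/vm-101-disk-1.qcow2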
 
Do you suggest using VirtIO or SATA when creating a new hard disk for a new virtual machine?
 
I use VirtIO for Linux and IDE for Windows.
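
If you go the VirtIO route, remember the guest needs virtio drivers (built into any recent Linux kernel; Windows needs the virtio-win drivers). A minimal sketch of adding a disk from the CLI, assuming VMID 100 and a storage called 'local' (size and names are examples):

# add a new 32 GB VirtIO disk in qcow2 format on storage 'local'
qm set 100 -virtio0 local:32,format=qcow2

# or, for a Windows guest without virtio drivers, use IDE instead
qm set 100 -ide1 local:32,format=qcow2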

 
Just did an upgrade last night. I first tested memory ballooning in Windows 2008 R2 and got it working! Thanks for this. I will be building several Win7 virtual dev workstations with 8 GB of RAM for each VM. Having ballooning will help make better use of host RAM.

I did notice a couple of small issues. (1) The release notes say qcow2 is now the default disk type, but when I go to create a new VM, it's still defaulting to raw (not a big deal). (2) I tested a live backup of a VM that has a qcow2 disk, but when I restored it, it came back as a raw disk! I was able to convert it back to qcow2, but this is clearly a bug.

All in all a great release!!! Thanks to all the developers working to make this a great platform!!! :)

-Glen
 
I did notice a couple of small issues. (1) The release notes say qcow2 is now the default disk type, but when I go to create a new VM, it's still defaulting to raw (not a big deal).

Either you run an old version or you have some cached pages in your browser. Update to the latest version, clear the browser cache and reload the page.


(2) I tested a live backup of a VM that has a qcow2 disk, but when I restored it, it came back as a raw disk! I was able to convert it back to qcow2, but this is clearly a bug.
Are you sure? Please recheck and tell us how to reproduce the issue.
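
One quick way to check the actual on-disk format, before and after a restore (the VMID and path are just examples):

# what does the VM config reference?
qm config 200 | grep virtio0

# what is the image file really?
qemu-img info /var/lib/vz/images/200/vm-200-disk-1.qcow2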
 
Tom,


As per your request I did a quick test. Here is what I found.


----------------------------------------------------------------------
Backup of a simple install of Ubuntu using a single 128G qcow2 disk.
Using the VM's backup tab I selected: Backup Now, [Local, Snapshot, LZO (fast)].
Here is the output:
----------------------------------------------------------------------
INFO: starting new backup job: vzdump 200 --remove 0 --mode snapshot --compress lzo --storage local --node pm1
INFO: Starting Backup of VM 200 (qemu)
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-200-2013_03_15-09_00_47.vma.lzo'
INFO: started backup task '10547002-2e8b-4074-a19a-0974d887e6e0'
INFO: status: 0% (475463680/137438953472), sparse 0% (149598208), duration 3, 158/108 MB/s
INFO: status: 1% (1750466560/137438953472), sparse 0% (1011449856), duration 8, 255/82 MB/s
...
INFO: status: 100% (137438953472/137438953472), sparse 97% (134683037696), duration 105, 623/0 MB/s
INFO: transferred 137438 MB in 105 seconds (1308 MB/s)
INFO: archive file size: 1.22GB
INFO: Finished Backup of VM 200 (00:01:49)
INFO: Backup job finished successfully
TASK OK
----------------------------------------------------------------------


Now for the restore...


Note: there are no options other than shutting down the VM and selecting the restore image.


Here is the output:
----------------------------------------------------------------------
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-200-2013_03_15-09_00_47.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp30265.fifo - /var/tmp/vzdumptmp30265
CFG: size: 243 name: qemu-server.conf
DEV: dev_id=1 size: 137438953472 devname: drive-virtio0
CTIME: Fri Mar 15 09:00:49 2013
Formatting '/var/lib/vz/images/200/vm-200-disk-1.raw', fmt=raw size=137438953472
new volume ID is 'local:200/vm-200-disk-1.raw'
map 'drive-virtio0' to '/var/lib/vz/images/200/vm-200-disk-1.raw' (write zeros = 0)
----------------------------------------------------------------------


As you can see, it restored the disk to raw... It did start and run fine, though.
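
For reference, converting the restored image back to qcow2 and reattaching it looks roughly like this (the paths come from the log above, the qm set syntax is my assumption, and the VM must be shut down):

cd /var/lib/vz/images/200
qemu-img convert -f raw -O qcow2 vm-200-disk-1.raw vm-200-disk-1.qcow2

# point the VM config at the qcow2 file again
qm set 200 -virtio0 local:200/vm-200-disk-1.qcow2

# boot the VM and verify it works, then remove the raw copy
rm vm-200-disk-1.raw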


I think it would be best if the restore offered an option for which disk format to restore to, maybe with the original format as the default.


As for the other issue of a new VM's disk not defaulting to qcow2: clearing the browser cache fixed it.


Thanks!


-Glen
 
As per your request I did a quick test. Here is what I found.
...
Formatting '/var/lib/vz/images/200/vm-200-disk-1.raw', fmt=raw size=137438953472
new volume ID is 'local:200/vm-200-disk-1.raw'
...
As you can see, it restored the disk to raw... It did start and run fine, though.

I cannot see this here; a qcow2 backup restores to a qcow2. Are you really sure that the original VM had a qcow2 file?
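
If it helps, here is a minimal way to compare: create a throwaway VM with a qcow2 disk, back it up, restore it to a second VMID and check both configs. The VMIDs 998/999, storage 'local' and the archive name are examples:

# throwaway VM with a small qcow2 disk on storage 'local'
qm create 999 -memory 512 -virtio0 local:4,format=qcow2

# back it up, then restore the archive to another VMID
vzdump 999 -storage local -mode stop -compress lzo
qmrestore /var/lib/vz/dump/vzdump-qemu-999-*.vma.lzo 998

# compare the disk format referenced by both configs
qm config 999 | grep virtio0
qm config 998 | grep virtio0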
 
Tom,

Please look a little closer at my message. There are two parts: the backup, which shows it backing up a qcow2 file, and then the restore, which shows it writing to a .raw file.

I would expect a restore to use the original file format, and in this case the original file was qcow2.

Please review my prior post a little more closely.

Thanks,

-Glen
 
Dietmar,

I mentioned in the message that the original was a qcow2 file, but you're correct, the backup log does not show this. Since the restore changed it to a .raw file, showing you the current config would only show it as raw. I will recreate the disk as qcow2, run the test again and include the config file showing that it is indeed a qcow2 file; please note that I'm 100% sure it was a qcow2 file.

I will get back to you when I find time a little later to recreate the test.

Regards,

-Glen
 
I have a question about upgrading and updating individual hosts in a cluster.

I currently have a cluster of 9 hosts and about 60 guests total. The hosts are currently running Proxmox VE 2.2-31. I would like to upgrade them to version 2.3.xx, but several important KVM guests need to stay running. Also, I don't have a SAN and all guests live on DAS storage, so live migration is not possible.

My question is, can I upgrade some of the hosts to v2.3 and leave some hosts at 2.2 for now? Would there be an issue having mixed versions of hosts in the same cluster?

Thanks!

-Glen
 
I haven't tried mixing 2.2 and 2.3, but I've upgraded my test cluster and the backup is indeed quite different. It's a big improvement: more efficient, it doesn't require allocating free space in the VG to avoid the snapshot running out of room before the backup finishes, and it works great over NFS. On the downside, I discovered that I had some corrupted .qcow2 images, and the backup breaks if the image is corrupt. I had always thought that qemu-img check would detect all problems in a qcow2 image - not so. These files passed qemu-img check, but qemu-img convert fails when it hits the bad blocks, and the backup fails. In a way this is good, since it gives you a daily test of your images, but if you already have some problem images you'll be sad when you upgrade. Also, there's no going back: the backups created by 2.3 won't work on a 2.2 system, so if you mix 2.2 and 2.3 you'll be restricted as to which hosts you can restore 2.3 backups on.
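
For anyone who wants to check their images before upgrading, a rough way to go beyond qemu-img check is to force a full read of the image by converting it to a scratch file (paths are examples; the scratch copy needs enough free disk space):

# metadata-level check (my corrupted files passed this)
qemu-img check /var/lib/vz/images/100/vm-100-disk-1.qcow2

# full read test: a convert will hit any unreadable blocks
qemu-img convert -O qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 /tmp/readtest.qcow2 && echo "image reads cleanly"
rm -f /tmp/readtest.qcow2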

Most people probably won't have many/any corrupted files - the ones on my test cluster must have broken while I was playing with iSCSI and had some problems. This happened so long ago that I don't have backups that predate the problem.
 
... On the downside, I discovered that I had some corrupted .qcow2 images, and the backup breaks if the image is corrupt. I had always thought that qemu-img check would detect all problems in a qcow2 image - not so. These files passed qemu-img check, but qemu-img convert fails when it hits the bad blocks, and the backup fails. ...

Hi,
perhaps you should try copying the disk file with Clonezilla?

Udo
 
I hadn't thought of Clonezilla as a recovery method. In the corrupted-qcow2-image cases I've got, the KVM guests run just fine - it looks like the damage is somewhere in the disk image beyond the actual data in most cases. I'll give this a try. In the meantime, I wanted to document that vzdump backups in 2.3 will break if the disk image is corrupt (at least with qcow2 - I don't know about raw). I wouldn't have known about the corruption otherwise, so this seems like a good thing in the long run, since you get a daily goodness test of the disk image that we didn't have before.
 
What determines which RBD images show up in storage? I want to import existing images for use within Proxmox. When I do "rbd ls" I see my original images and those created by Proxmox side by side. However, in Proxmox I only see the images created by Proxmox.

Thanks!
 
What determines which RBD images show up in storage? I want to import existing images for use within Proxmox. When I do "rbd ls" I see my original images and those created by Proxmox side by side. However, in Proxmox I only see the images created by Proxmox.

This is the wrong thread for such a question.
 
