Sheepdog 0.9

hybrid512

Hi,

I encountered recently a few problems with Sheepdog storage and snapshots.
We use snapshots extensively, and Proxmox uses them too for backup purposes, which is great. However, Sheepdog 0.8.2, the latest release provided in pve-sheepdog, is not very smart snapshot-wise: when snapshots are removed, it leaves garbage data behind, and this data is never purged, wasting storage space.

As the authors themselves recommend, it would be better to upgrade to 0.9, which is considered stable.

Is the pve-sheepdog package going to be upgraded soon? I'd prefer to stay on a minimally modified setup with PVE packages instead of installing an alternative Sheepdog repository.

Regards.
 
Hi,
I'll try to update the package soon.

(Please note that Sheepdog is not production ready. An in-place upgrade from 0.8 to 0.9 is not possible (the disk format changed), so you need to back up your VMs, upgrade Sheepdog, and restore.)
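Sketched as a shell script, that backup/upgrade/restore cycle would look roughly like this. VM id 101, the backup path, and the storage/copies settings are all hypothetical, and `DRY_RUN=1` (the default here) only prints each command instead of running it:

```shell
# Rough sketch of the 0.8 -> 0.9 migration; all names/ids are examples.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# 1. Full backup of the VM to local storage before touching Sheepdog.
run vzdump 101 --dumpdir /var/lib/vz/dump --mode stop

# 2. Upgrade the sheepdog package, then re-create the cluster:
#    0.9 uses a new on-disk format, so the old data is unusable.
run apt-get install pve-sheepdog
run dog cluster format --copies 3   # pick your own redundancy level

# 3. Restore the VM from the backup onto the new cluster
#    (the archive filename below is a placeholder).
run qmrestore /var/lib/vz/dump/vzdump-qemu-101.vma 101 --storage sheepdog
```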
 
Hi,

Sheepdog 0.9 as supplied in the pve-sheepdog package, up to the latest release (0.9.1-1), is not completely up to date.
With 0.9, Sheepdog changed its locking mechanism, which breaks the live migration feature, as explained here (http://lists.wpkg.org/pipermail/sheepdog-users/2015-March/002966.html).
A patch has been backported to the 0.9 branch that disables the lock mechanism by default, making live migration work as before.

So yes, you'll have to re-format the whole cluster for that, but at least it is possible, and it will make Sheepdog fully usable.

Can you update the pve-sheepdog package to this release please ?

Regards.
 
Hi,

The patch has just been backported to 0.9-stable. Would it be possible to update the pve-sheepdog package, please?

I'm currently in the process of building a whole new Proxmox cluster and I'd love to use Sheepdog for it.

Best regards.
 
I just uploaded a new version to the pvetest repository - please test:

# wget http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/pve-sheepdog_0.9.2-1_amd64.deb
# dpkg -i pve-sheepdog_0.9.2-1_amd64.deb

So far so good :)
Installed and fully working again: VM live migration is functional, as is live image migration between storages (from local to Sheepdog or the other way around).

For me, it is working as expected so if you want to promote it to the main repository, feel free to do so.

Thank you so much for your quick help.
 
Hmmm ... found a bug. Nothing unbearable, but annoying.
As I said, live migration of VMs/images works well. However, when moving a VM's hard drive image live from Sheepdog to another storage (I did this from Sheepdog to GlusterFS) and asking Proxmox to delete the source image, the migration goes well: the file is copied to the destination storage and the VM stays alive, working as if nothing happened, but Proxmox doesn't delete the source image.
If it were left "as is", I would say the deletion process simply isn't working, but it's a bit more complicated than that ...

Here is my image list from Sheepdog's "dog vdi list" command, taken before the migration:

Code:
dog vdi list
  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
  vm-101-disk-1     0   32 GB   15 GB  0.0 MB 2015-05-14 12:09   468e50    4:2

As you can see, this is a 32 GB image with 15 GB actually used.

Next, I ask Proxmox via the GUI to move the file from Sheepdog to GlusterFS, with automatic deletion of the source image. When the process finishes, here is what I get from the same command as above:
Code:
dog vdi list
  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
  vm-101-disk-1     0   32 GB  512 MB  0.0 MB 2015-05-14 12:09   468e50    4:2

As you can see, the image remains, but it now shows only 512 MB used, as if the sparse file had been "truncated". The new file on GlusterFS is there and working as expected.
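To make that before/after comparison scriptable, here is a small sketch that pulls the "Used" column for a given VDI out of "dog vdi list" output. The sample lines are copied from my listings above, and the awk field positions assume this exact layout (Size and Used each span two fields):

```shell
# Extract the "Used" column for one VDI from `dog vdi list` output.
used_of() {  # $1 = VDI name, stdin = `dog vdi list` output
  awk -v n="$1" '$1 == n { print $5, $6 }'
}

# Sample lines copied from the listings in this post:
before='  vm-101-disk-1     0   32 GB   15 GB  0.0 MB 2015-05-14 12:09   468e50    4:2'
after='  vm-101-disk-1     0   32 GB  512 MB  0.0 MB 2015-05-14 12:09   468e50    4:2'

echo "$before" | used_of vm-101-disk-1   # prints: 15 GB
echo "$after"  | used_of vm-101-disk-1   # prints: 512 MB
```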

Finally, I move the file back from GlusterFS to Sheepdog, with deletion of the file at the source. As the file already exists, it creates another file:

Code:
dog vdi list
  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
  vm-101-disk-1     0   32 GB  512 MB  0.0 MB 2015-05-14 12:09   468e50    4:2              
  vm-101-disk-2     0   32 GB   15 GB  0.0 MB 2015-05-15 12:52   469003    4:2

The old image doesn't appear in the VM hardware view, not even as an "unused disk", but it does appear in the Storage view and counts for 32 GB (as does the new file) instead of the 512 MB actually used. (Anyway, it might be a good idea to revise the way disk usage is presented in the Storage view, since it only displays the "logical" size of the sparse file, not the physical space really used ...)
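To illustrate the logical-vs-physical gap, here is a sketch that sums both columns of "dog vdi list" output; the sample lines are copied from the listing above, and the field positions again assume that exact layout:

```shell
# Sum the logical (Size) and physical (Used) columns of `dog vdi list`
# output, normalised to MB. Sample input copied from the listing above.
dog_output='  vm-101-disk-1     0   32 GB  512 MB  0.0 MB 2015-05-14 12:09   468e50    4:2
  vm-101-disk-2     0   32 GB   15 GB  0.0 MB 2015-05-15 12:52   469003    4:2'

summary=$(echo "$dog_output" | awk '
  function mb(v, u) { return u == "GB" ? v * 1024 : v }  # normalise to MB
  NF >= 6 { logical += mb($3, $4); physical += mb($5, $6) }
  END { printf "logical: %d MB, physical: %d MB", logical, physical }')

echo "$summary"   # prints: logical: 65536 MB, physical: 15872 MB
```

So the Storage view would report 64 GB where only about 15.5 GB is really stored.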

To me, this looks like a bug in the move process in conjunction with Sheepdog: if I delete the file from the Storage view, it is effectively deleted on Sheepdog; only when I ask Proxmox to delete it right after the disk move does it stay behind, "truncated".

There is a simple workaround: just don't ask for the file to be deleted automatically at the end of the process, and delete it manually once the move is done.
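Scripted, the workaround would look something like this. VM id 101, bus virtio0 and the storage name "gluster1" are hypothetical, and `DRY_RUN=1` (the default here) only prints the commands instead of executing them:

```shell
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# 1. Move the disk live, but do NOT tick "Delete source" in the GUI
#    (on the CLI, that means omitting the --delete flag):
run qm move_disk 101 virtio0 gluster1

# 2. Once the move has completed, remove the leftover VDI by hand:
run dog vdi delete vm-101-disk-1
```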

Regards.
 
