Here is a way to fix the problem when you encounter it.
On any node with access to Ceph, run:
rbd info vm-100-disk-1
rbd image 'vm-100-disk-1':
size 1024 TB in 268435456 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.82072ae8944a
format: 2...
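If the image really was blown up to 1024 TB by mistake, one way to shrink it back, once you know the intended size, is with rbd resize (just a sketch; the 32 GB target below is an example, not taken from the original post):

rbd resize --size 32768 --allow-shrink vm-100-disk-1
rbd info vm-100-disk-1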
Hi,
I just found a nasty bug when using ProxMox 4.2 in a clustered setup with Ceph and a KRBD-configured storage.
Using KRBD, a /dev/rbdxx entry is created on the server to gain access to the RBD image.
When migrating a VM that uses such a volume from server A to server B, the /dev/rbdxx device is...
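For context, the kernel mappings behind those /dev/rbdxx entries can be listed and cleaned up by hand with rbd (the pool name and device path below are just examples):

rbd showmapped
rbd map rbd/vm-100-disk-1
rbd unmap /dev/rbd0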
Hi,
Is there a way to redefine the default hardware settings for a freshly created VM?
I mean, right now I create dozens of VMs, but every time I create a new VM (rather than cloning an existing one) I have to redefine my hardware settings (in particular, I replace IDE with virtio-scsi, use virtio for...
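One workaround would be to script the VM creation with qm so that virtio-scsi and virtio networking are set from the start (the VMID, storage name and sizes below are made up):

qm create 200 --name test-vm --memory 2048 --cores 2 --ostype l26 --scsihw virtio-scsi-pci --scsi0 local:32 --net0 virtio,bridge=vmbr0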
Hi,
I see that the latest pve-sheepdog installs Zookeeper dependencies, but when I tried to configure Sheepdog to use Zookeeper instead of Corosync, it failed because Zookeeper support is not compiled into pve-sheepdog.
Starting Sheepdog Server: sheep -c zookeeper:node1:2181,node2:2181,node3:2181...
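As far as I can tell, the Zookeeper cluster driver has to be enabled when sheepdog is built; building from source it is roughly the usual autotools flow (a sketch, not the pve-sheepdog packaging itself):

./autogen.sh
./configure --enable-zookeeper
make && make install
sheep -c zookeeper:node1:2181,node2:2181,node3:2181 /var/lib/sheepdog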
Hummm ... found a bug, nothing unbearable but annoying.
As I said, live migration of VMs/images is working well, but when moving a VM's hard drive image live from Sheepdog to another storage (I did this from Sheepdog to GlusterFS) and asking ProxMox to delete the source image, the...
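For the record, the same move can be done from the command line with something like this (the VMID, disk and storage names are examples):

qm move_disk 100 virtio0 glusterfs-store --delete 1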
So far so good :)
Installed and fully working again: VM live migration is functional, as is live image migration between storages (from local to Sheepdog or the other way around).
For me it is working as expected, so if you want to promote it to the main repository, feel free to do so.
Thank you so...
Hi,
The patch has just been backported to 0.9-stable; would it be possible to update the pve-sheepdog package, please?
I'm currently in the process of building a whole new ProxMox cluster and I'd love to use Sheepdog for that.
Best regards.
Hi,
Sheepdog 0.9 as supplied in the pve-sheepdog package, up to and including the latest release (0.9.1-1), is not completely up to date.
With 0.9, Sheepdog changed its locking mechanism, which breaks the live migration feature, as explained here...
Hi,
I recently encountered a few problems with Sheepdog storage and snapshots.
We use snapshots extensively, and ProxMox uses them too for backup purposes, which is great, but Sheepdog 0.8.2, the latest release provided within pve-sheepdog, is not very smart snapshot-wise and it is keeping...
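In case it helps anyone hitting the same thing, the leftover snapshots can be listed and removed by hand with Sheepdog's dog CLI (the image name and snapshot tag below are examples):

dog vdi list
dog vdi tree
dog vdi delete -s snap1 vm-100-disk-1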
No it's not, but building a shared GlusterFS volume is very easy and can be done on the ProxMox nodes directly (so no need for a separate shared storage tier). Of course, this has its disadvantages and is not failsafe unless you have everything needed to handle failures properly (such as fencing...
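To give an idea, setting up such a volume across three nodes and adding it as a ProxMox storage boils down to something like this (node names, brick paths and storage IDs are placeholders):

gluster peer probe node2
gluster peer probe node3
gluster volume create pve-data replica 3 node1:/srv/gluster/brick node2:/srv/gluster/brick node3:/srv/gluster/brick
gluster volume start pve-data
pvesm add glusterfs gluster-store --server node1 --volume pve-data --content images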
Fencing just for a floating service IP ???
I mean, this is nothing critical; it is just a convenience for me so that I can use that IP to access the ProxMox web management UI instead of one of the nodes' IPs.
Hi,
I did this and it still doesn't work :(
Here is my cluster.conf:
I don't get any error and the HA tab displays my modifications without any problem, except that the service IP doesn't work (I'm unable to ping it).
I even rebooted the whole cluster but still, no service IP up and running :(...
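For comparison, a minimal rgmanager stanza for a plain floating IP would look something like this (the address is just an example):

<rm>
  <service autostart="1" name="mgmt-ip" recovery="relocate">
    <ip address="192.168.1.50" monitor_link="1"/>
  </service>
</rm>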
Hi,
I'm making some tests with ProxMox 3 clustering.
I built a cluster with 3 nodes and created a shared GlusterFS volume as shared storage, which is working quite well.
Clustering is working well and I can administer the whole cluster from any node, but I'd rather use a service IP that would...
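For reference, the cluster itself was put together with the standard pvecm commands, roughly like this (host and cluster names are placeholders):

pvecm create mycluster   # on the first node
pvecm add node1          # on each additional node, pointing at the first node
pvecm status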