So I have Proxmox 5.4, with pve-qemu-kvm/now 2.12.1-3 amd64.
Just had 2 VMs crash when trying to open the console.
From what I understand this is fixed in pve-qemu-kvm 3.0.1? And that version is only in the pvetest repo?
Has anyone tried it?
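If it helps, this is roughly how I'd test it myself (assuming a stock PVE 5.x / Debian Stretch install; I'd only enable pvetest temporarily and check the candidate version before actually upgrading anything):

echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
apt policy pve-qemu-kvm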
Well, this is not clear to me.
If I add another node to the cluster, I can then make an HA group consisting of the "real" nodes, right?
Then I go and replicate the disks from node A to node B (the real nodes).
If node A fails, the VMs will get restarted on node B?
This would "fix" just the votes problem (maybe I can run a VM with Proxmox on another server).
But what about the fact that I don't have shared storage?
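To make it concrete, this is what I'd expect the HA side to look like on the CLI (group name, node names and VM ID are just examples; pvecm status is only there to check the votes):

pvecm status
ha-manager groupadd real-nodes -nodes "nodea,nodeb"
ha-manager add vm:100 --group real-nodes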
So I have 2 servers with Proxmox 5.4 installed, with ZFS disks, configured in a cluster.
They are identical.
One very important thing is that I have no shared storage. So the VMs run locally (that is, they have local disks... so no live migration).
So I have some VMs on node1.
From what I...
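For the record, the replication I keep referring to is the built-in pvesr/ZFS storage replication; the VM ID, target node and schedule below are only examples:

pvesr create-local-job 100-0 node2 --schedule "*/15"
pvesr status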
Yeah, it kind of hit me after I posted the thread. The VM would actually need access to the same network as the Ceph cluster, and I don't think this would be safe.
Anyone with access to this VM would have access to the Ceph network.
So I have been doing a lot of tests with Proxmox and Ceph.
I'm now thinking about a certain case: is it possible to use the Ceph pools from inside a VM? Or maybe a CephFS?
How should I go about it without breaking everything? :)
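To make the question concrete, what I had in mind inside the VM was roughly this, assuming the VM can reach the Ceph public network and has its own cephx client keyring (the client name, secret file, pool, image and monitor IP are made up):

# kernel CephFS client
mount -t ceph 10.10.10.1:6789:/ /mnt/cephfs -o name=vmclient,secretfile=/etc/ceph/vmclient.secret
# or map an RBD image as a block device
rbd map mypool/vm-image --id vmclient --keyring /etc/ceph/ceph.client.vmclient.keyring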
So I have managed to get everything working again, in a sense (more about that in a bit).
My problem was that, after the restart, the mons on the 2 failed nodes (the ones I reinstalled) had issues with their service.
The mon service on nodes 2 and 3 was not starting.
But, if I ran...
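This is how I was checking the services (the mon id is the node's hostname, so substitute your own):

systemctl status ceph-mon@node2.service
journalctl -u ceph-mon@node2.service -n 50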
So I managed to get everything up from the only remaining node by using:
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#removing-monitors
and this for the OSDs:
https://ceph.com/planet/recovering-from-a-complete-node-failure/
So everything was running: the Proxmox cluster...
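The OSD side on a reinstalled node boiled down to roughly this (the OSD id and the disk are examples; check ceph osd tree before purging anything):

ceph osd tree
ceph osd purge 2 --yes-i-really-mean-it
pveceph createosd /dev/sdb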
So it seems that the mon was running.
So I ran:
systemctl status ceph-mon@hp1-s1.service
Now I made the export of the monmap...
I'll continue and get back here :)
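For reference, the export step itself looks like this (the mon has to be stopped first, and the output path is arbitrary):

systemctl stop ceph-mon@hp1-s1.service
ceph-mon -i hp1-s1 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap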
So I have managed to recover a cluster that had one node dead. I mean I reinstalled the dead node and managed to re-add it to the Proxmox cluster and the Ceph cluster, using that link.
But I cannot seem to manage it with only one node alive.
So I'm trying to export the mon map, and I get this...
So how would I do that?
So I modified /etc/ceph/ceph.conf so that only the one mon and host are defined.
I then tried "ceph status" but it does not do anything... it just hangs.
Could I maybe use rbd or rados directly?
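What I'm attempting is the "removing monitors from an unhealthy cluster" procedure from that add-or-rm-mons page, done on the surviving node: stop its mon, extract the monmap with ceph-mon --extract-monmap, then roughly this (the dead mon ids are placeholders for my other two hostnames):

monmaptool /tmp/monmap --rm hp2-s1
monmaptool /tmp/monmap --rm hp3-s1
ceph-mon -i hp1-s1 --inject-monmap /tmp/monmap
systemctl start ceph-mon@hp1-s1.service

Once the single mon forms a quorum of one, ceph status (and rbd/rados, which also need a mon quorum) should answer again.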
So I have followed the Ceph config tutorial.
I have made a cluster of 3 identical servers, with a full mesh network.
Everything is OK.
If one node/server goes down everything is still OK. I can still use the cluster, in a degraded state.
But if only one node is up I cannot access Ceph anymore...
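For context, the relevant thing here seems to be monitor quorum: with 3 mons Ceph needs a majority (2 of 3), so a single surviving node cannot serve the cluster on its own. These are the commands I use to look at quorum and pool sizes while the cluster is still reachable (with no quorum they hang just like ceph status does):

ceph quorum_status --format json-pretty
ceph osd pool get <pool> size
ceph osd pool get <pool> min_size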
So after some investigation, and with help from a really helpful person named IcePic over at the Ceph IRC channel, I think I have some answers.
So Ceph works with block devices; everything ends up on these block devices.
CephFS only exists to have the contents of these block devices "exposed" as a POSIX...
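A quick way to see that layering on the cluster itself (pool and fs names are whatever you created):

ceph df
ceph fs ls
rbd ls -p <pool>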
So I'm trying to configure a Proxmox cluster with Ceph.
From what I can see I can make a pool directly and use it (as in, add it to the cluster storage as RBD).
In order to create a CephFS storage in Proxmox I need to create 2 separate Ceph pools and then create the CephFS specifying the pool...
So yeah, I'm kind of lost.
So I deleted everything... so no pools.
I created the pools and fs manually with:
I then went to Storage in the Proxmox cluster and:
- added both clusterfs_data and clusterfs_metadata with the RBD (PVE) storage type
- added clusterfs using the CephFS storage type
RBD...
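For reference, the storage part can also be done with pvesm instead of the GUI; here I'd probably keep a separate pool for VM disks (RBD) and leave the CephFS data/metadata pools to CephFS only. The storage IDs, pool name and content types are just examples:

pvesm add rbd vm-pool --pool vm-pool --content images,rootdir
pvesm add cephfs clusterfs --content backup,iso,vztmpl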
So something does not fit.
When I get to the point of creating a CephFS I already have a pool created.
I cannot use that, from what I can see.
I can create new pools, and then create the fs with these commands:
ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata...
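For completeness, the full standard sequence would be (the pg counts are only examples for a small cluster):

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 16
ceph fs new cephfs cephfs_metadata cephfs_data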