Hello, no, not really; I still have a ticket open with Proxmox support for this.
There is a workaround to make the VMs bootable again; I wrote an internal guide for it.
But I don't really know what actually happened there.
That means splitting up the backup jobs has...
Hi @e100
We have the same issues as in your screenshots above. On SLES and on Ubuntu, the console messages are basically the same.
Did you fix that problem?
And sometimes the VMs freeze during backup; did you have this problem too?
best regards,
roman
Hi Fireon,
one question: do you have any experience with Dell UPS units? We would like to get them running with apcupsd, but unfortunately that does not work. Do you have any experience with usbhid-ups?
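For usbhid-ups, a minimal NUT ups.conf sketch is roughly what I have in mind (the section name "dellups" and the description are just placeholders):

    # /etc/nut/ups.conf -- minimal sketch, names are placeholders
    [dellups]
        driver = usbhid-ups
        port = auto
        desc = "Dell UPS"

After that, upsdrvctl start and upsc dellups should show whether the driver can talk to the UPS at all.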
Best regards,
Roman
Hello everyone!
We have a five-node cluster with Ceph, pve1 to pve5. The problem is that pve1 is sometimes not "available". The OSDs of the node are working, and pve1 is reachable via ping but not via SSH. Here is a screenshot of the problem node - does anyone know why it is sometimes or...
Hello everyone!
We are running about 20 VMs in a three-node cluster with Ceph.
Half of the VMs were migrated with Clonezilla (P2V); the other half I converted to raw from VMware.
Now the problem: a few weeks ago an Ubuntu 18.04 VM froze; this one was from VMware...
Hello!
We have a three-node cluster; the storage for the VMs is Ceph. I have migrated a lot of physical servers to PVE with Clonezilla, and I have also converted around 15 VMware VMs to PVE. In the past without issues.
Now we had the problem (the third problem/server after a while) - an Ubuntu...
I don't know what you want to say with this picture, but yes, of course, Ceph is a shared storage solution that can be used in a production environment. We have been doing that for more than five years!
Read this about Ceph and RAID:
Avoid RAID
As Ceph handles data object redundancy and multiple parallel...
What do you mean by OSD heartbeat? OSDs for Ceph should have no RAID; that does not work well. For Ceph, all disks must be connected directly via SATA ports. RAID is a bottleneck for Ceph.
With three nodes, Ceph with min_size = 1 will not work correctly, I guess. Use the defaults: size = 3 and min_size = 2. If you change the size to 2, for example, and migrate a VM, the Ceph cluster goes into read-only mode because of size 2 in a three-node cluster.
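For reference, a minimal sketch of setting this per pool on the command line (the pool name "vm_pool" is just a placeholder):

    # Sketch only -- replace vm_pool with your pool name
    ceph osd pool set vm_pool size 3
    ceph osd pool set vm_pool min_size 2
    # verify the current values
    ceph osd pool get vm_pool size
    ceph osd pool get vm_pool min_size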
You want to live-migrate the running VMs to the new, larger node?
I did an upgrade from 5.4 to 6 without any issues. I moved the disks from Ceph storage to local storage; after the upgrade to 6 and to Ceph Nautilus, I moved them back to Ceph.
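As a sketch, the disk move can also be done on the CLI (the VM ID, disk and storage names are placeholders for your own):

    # Sketch: move disk scsi0 of VM 100 to local storage, remove the source
    qm move_disk 100 scsi0 local-lvm --delete 1
    # after the upgrade, move it back to the ceph-backed storage
    qm move_disk 100 scsi0 ceph-pool --delete 1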
But I have not tested a 5.4 cluster with one node on PVE 6. It...
Hi!
What is your configuration? The information is a bit sparse.
How many nodes do you have, how many OSDs per host, and is the Ceph cluster on a physically separated network?
ceph.conf? OSD heartbeat?
And please change the topic prefix "Tutorial"; I think that's wrong.
regards,
roman
I had this "issue" a long time ago; with a comma it did not work (PVE 2.x).
OK, I thought that the cluster network - this is how it is in our configuration - is the same one the monitors use, but the monitors are not in the same cluster network.
After I installed Ceph, the Ceph cluster is the same as the...
The best thing is to use two switches, and without Open vSwitch: one switch for the node cluster and the other switch for the Ceph cluster; the two 10 Gb/s NICs must be physically separated.
Please check this guide; then you should get the full network speed.
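As an illustration, the separation then shows up in ceph.conf roughly like this (the subnets are just placeholders for your own ranges):

    # /etc/pve/ceph.conf (excerpt) -- subnets are placeholders
    [global]
        public_network  = 10.10.10.0/24   # monitors / client traffic
        cluster_network = 10.10.20.0/24   # OSD replication traffic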
best regards,
roman
It is not a requirement. In my environment all OSDs are SSDs, and every SSD has its own journal.
Yes, try Gluster and afterwards you will see what you prefer :-)
I can still only recommend Ceph :rolleyes:
best regards
Maybe this makes sense: one SSD as journal for all SAS disks per node; then you get more performance. But be careful: if the journal is broken or down, all disks of that node are "dead".
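A minimal sketch of that layout, assuming PVE 6 / Bluestore, where the DB device takes the role of the journal and the device names are placeholders:

    # Sketch only -- device names are placeholders
    # SAS OSDs with their DB (journal role) on one shared, faster SSD
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
    pveceph osd create /dev/sdc --db_dev /dev/nvme0n1

This is exactly the layout where losing /dev/nvme0n1 takes down both OSDs on that node.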