Dear oguz,
thank you for your answer.
No, 4 nodes have 7.1-6, the last node has 7.1-8.
But I had these messages even when the 4 nodes were running the same 7.1-6 version.
thank you,
Dear Members,
I have a 5-node cluster, and I see 2 strange error messages in syslog/journalctl.
=====================================================
Jan 05 13:45:17 pve1 iscsid[1652055]: conn 0 login rejected: target error (03/01)
Jan 05 13:45:17 pve1 pve-ha-lrm[1874176]: vm 132 - unable...
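For reference, the state of the iSCSI sessions can be inspected from the node with open-iscsi's own tool (nothing assumed beyond a standard open-iscsi install):
--------------------------------------------------
# Print all iSCSI sessions with connection/session state; a session that
# keeps retrying login points at the target side (03/01 is a target error)
iscsiadm -m session -P 1
--------------------------------------------------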
Dear Members,
I have a random crash of one of the nodes in the cluster. This node is pve3.
I see the message "Reached target Shutdown." in syslog.
What causes it?
I read on the forum that NFS storage under a VM can cause it, but I don't have NFS under any VM.
After the restart, the mon is down on this node.
What can be wrong...
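For reference, a minimal sketch of how the mon's state on the crashed node can be checked (the unit name assumes the mon is named after the node, pve3):
--------------------------------------------------
# Check the Ceph monitor unit on pve3 and read its log for this boot
systemctl status ceph-mon@pve3.service
journalctl -u ceph-mon@pve3.service -b
--------------------------------------------------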
Yes, sure, thank you!
- A separate 1Gbit NIC is enough for corosync, right?
- I think I would use two 1Gbit NICs in a bond for corosync, something like the sketch below. Is active-backup bonding mode recommended for corosync?
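A minimal sketch of such a bond in /etc/network/interfaces; the NIC names (eno3/eno4) and the address are placeholders, not taken from my setup:
--------------------------------------------------
auto bond0
iface bond0 inet static
    address 10.10.20.11/24        # dedicated corosync subnet (placeholder)
    bond-slaves eno3 eno4         # placeholder NIC names
    bond-mode active-backup
    bond-primary eno3             # preferred active link
    bond-miimon 100               # link monitoring interval in ms
--------------------------------------------------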
thank you,
Gabor
Dear Aaron,
thank you for your answer.
Yes, I use the 10.10.10.x network for Ceph:
Are the corosync network and the cluster network the same thing by definition?
I used the 10.10.10.x network to create the cluster.
Was the Ceph network created on the same 10.10.10.x network at cluster creation time?
[global]...
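For reference, this is how the two Ceph networks are normally declared in ceph.conf; the subnet below is only a placeholder echoing the one above:
--------------------------------------------------
[global]
    # public_network carries client/mon traffic,
    # cluster_network carries OSD replication traffic;
    # they may point to the same subnet
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.10.0/24
--------------------------------------------------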
Dear Members,
I have a Ceph cluster with the following details:
The cluster runs on a separate NIC with active-backup bonding, a separate Dell 10G switch, and a separate IP range on 10Gbit.
My problem:
On all nodes there are some KNET link down entries whenever there is heavy load on one of the nodes.
I don't...
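For reference, the live state of the KNET links can be checked on each node, which helps correlate the log entries with the load peaks:
--------------------------------------------------
# Print corosync link status for the local node; each line shows whether
# a KNET link to a given node is enabled and connected
corosync-cfgtool -s
--------------------------------------------------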
Dear Members,
there is a 3-node cluster with the following hardware configuration:
- Dell R730
- PERC H730 raid controller
- 256GB RAM
- 4 x 1.9TB 12Gb SAS SSD for OSDs
- 2 x 4TB 6Gb SAS HDD for OSDs
- 2 x 800GB 12Gb SAS SSD for DB/WAL disk.
The RAID controller works in HBA mode.
In this mode...
Dear Alwin,
thank you for your post.
As I wrote, I set the quorum to 1 node:
--------------------------------------------------
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             corexcluster
Config Version:   3
Transport:        knet
Secure auth:      on
Quorum...
Dear Members, Dear Staff,
I have to check the disaster recovery procedure on a 3-node (pve1, pve2, pve3) cluster with Ceph (RBD storage).
Everything works fine; in case of one node failure the cluster works as expected.
I would like to test starting VMs on a single node without the cluster.
This is...
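For reference, the usual way to do this on a single surviving node is to lower the expected votes so the node becomes quorate again, then start the guest by hand (the VMID is a placeholder):
--------------------------------------------------
# Make the single remaining node quorate, then start a guest manually
pvecm expected 1
qm start 100        # 100 is a placeholder VMID
--------------------------------------------------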