Search results

  1. journalctl error messages, pve-ha-lrm & iscsid

    Dear oguz, thank you for your answer. No, 4 nodes have 7.1-6 and the last node has 7.1-8, but I already had these messages when the 4 nodes ran the same 7.1-6 version. Thank you,
  2. journalctl error messages, pve-ha-lrm & iscsid

    Dear Members, I have a 5-member cluster, and I see two strange error messages in syslog/journalctl:
    =====================================================
    Jan 05 13:45:17 pve1 iscsid[1652055]: conn 0 login rejected: target error (03/01)
    Jan 05 13:45:17 pve1 pve-ha-lrm[1874176]: vm 132 - unable...
  3. Try to fence node; Reached target Shutdown

    Dear Members, I have random crashes of one node in the cluster. This node is pve3. I have a message in syslog, "Reached target Shutdown." What causes it? I read in the forum that NFS storage under a VM can cause it, but I don't have NFS under a VM. After restart the mon is down on this node. What can be wrong...
  4. Ceph RBD cluster, link: X is down

    Dear Aaron, 5 nodes will be in this cluster. Thank you for your answer, and thank you for all your hard work on this. (-:
  5. Ceph RBD cluster, link: X is down

    Yes, sure, thank you! - A separate 1Gbit NIC is enough for corosync, right? - I think I would use two 1Gbit NICs in bonding mode for corosync. Is active-backup bonding mode recommended for corosync? (A bonding sketch follows after this list.) Thank you, Gabor
  6. Ceph RBD cluster, link: X is down

    Dear Aaron, thank you for your answer. Yes, I use the 10.10.10.x network for Ceph. Do "corosync network" and "cluster network" mean the same thing? I used the 10.10.10.x network to create the cluster. Is the Ceph network created on the same 10.10.10.x network at cluster creation time? (See the network sketch after this list.) [global]...
  7. Ceph RBD cluster, link: X is down

    Dear Members, I have a Ceph cluster with the following details: the cluster works on a separate NIC, active-backup bonding, a separate DELL 10G switch, and a separate IP range on 10Gbit. My problem: on all nodes there are some KNET "link down" entries when there is heavy load on one of the nodes. I don't...
  8. DB/WAL Disk configuration

    Dear Members, there is a cluster of 3 nodes with the following hardware configuration (an OSD/DB layout sketch follows after this list):
    - Dell R730
    - PERC H730 RAID controller
    - 256GB RAM
    - 4 x 1.9TB 12Gb SAS SSD for OSDs
    - 2 x 4TB 6Gb SAS HDD for OSDs
    - 2 x 800GB 12Gb SAS SSD for DB/WAL disks.
    The RAID controller is working in HBA mode. In this mode...
  9. run vm on single node with crashed cluster on ceph storage

    Dear Alwin, thank you for your post. As I wrote, I set the quorum to 1 node:
    --------------------------------------------------
    root@pve1:~# pvecm status
    Cluster information
    -------------------
    Name:             corexcluster
    Config Version:   3
    Transport:        knet
    Secure auth:      on
    Quorum...
  10. run vm on single node with crashed cluster on ceph storage

    Dear Members, Dear Staff, I have to check the disaster recovery procedure on a 3-node (pve1, pve2, pve3) cluster with Ceph (RBD storage). Everything works fine; in case of a single node failure the cluster works as expected (see the quorum sketch after this list). I would like to test starting VMs on a single node with a crashed cluster...
  11. Start a vm without turning on all nodes

    Dear Members, Dear Staff, I have to check the disaster recovery procedure on a 3-node (pve1, pve2, pve3) cluster with Ceph (RBD storage). Everything works fine; in case of a single node failure the cluster works as expected. I would like to test starting VMs on a single node without the cluster. This is...
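
Regarding item 5, here is a minimal sketch of a dedicated active-backup bond for corosync in /etc/network/interfaces; the NIC names (eno3/eno4) and the 10.10.20.0/24 subnet are assumptions for illustration, not values from the thread:

    auto bond0
    iface bond0 inet static
            address 10.10.20.11/24
            bond-slaves eno3 eno4
            bond-miimon 100
            bond-mode active-backup
            bond-primary eno3
    # dedicated corosync link, kept off the Ceph and VM networks

Two 1 Gbit NICs in active-backup are generally plenty for corosync, which cares about latency rather than bandwidth; alternatively, corosync can use a second knet link natively (ring1) instead of a bond.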
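
Regarding items 6-7, a sketch of how the Ceph networks and the corosync network are usually kept separate; the subnets and the node entry below are illustrative assumptions, not the poster's configuration:

    # /etc/pve/ceph.conf -- Ceph public/cluster traffic on the 10 Gbit bond
    [global]
            public_network  = 10.10.10.0/24
            cluster_network = 10.10.10.0/24

    # /etc/pve/corosync.conf -- corosync on its own subnet, away from Ceph load
    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.20.11
      }
    }

Creating the PVE cluster only sets the corosync link address; the Ceph public and cluster networks are chosen separately when Ceph is initialized (pveceph init --network ... --cluster-network ...), so they need not share a subnet with corosync. Heavy Ceph traffic on a shared link is a common cause of the KNET "link down" entries mentioned in item 7.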
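
Regarding item 8, a sketch of creating OSDs with a separate DB/WAL device through pveceph; the device names and the 64 GiB DB size are assumptions for illustration:

    # HDD OSD with its RocksDB/WAL placed on one of the 800GB SSDs (size in GiB)
    pveceph osd create /dev/sdg --db_dev /dev/sde --db_size 64
    # the SAS SSD OSDs usually keep DB/WAL on the same device
    pveceph osd create /dev/sdb

Keep in mind that if one SSD carries the DB/WAL for several HDD OSDs, losing that SSD takes all of those OSDs down with it.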
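
Regarding items 9-11, a minimal sketch of the usual way to make a single surviving node quorate for a disaster-recovery test; the VM ID is a placeholder:

    # on the remaining node, temporarily lower the expected votes so it becomes quorate
    pvecm expected 1
    # /etc/pve becomes writable again and a guest can be started by hand
    qm start <vmid>

Note that this only restores PVE (corosync) quorum: with RBD storage the Ceph monitors also need a majority, so with two of three mons offline the VM disks will generally remain unavailable.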
