We have a 12-node cluster with 6x Ceph nodes and 6x compute nodes (the Ceph nodes are not running any VMs). Corosync is connected via two links: one dedicated link and one shared (mgmt) link.
Ceph is configured with a dedicated backend sync network as well.
We are changing some of the switch config and...
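For reference, a two-link corosync setup like the one described is usually expressed in /etc/pve/corosync.conf with a ring0/ring1 address per node and per-link priorities. A minimal sketch, with node names and addresses as placeholders (not taken from the post):

```
# /etc/pve/corosync.conf (excerpt) -- names and IPs are placeholders
nodelist {
  node {
    name: ceph-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # dedicated corosync link
    ring1_addr: 192.168.0.1   # shared mgmt link
  }
  # ... one entry per node ...
}

totem {
  version: 2
  link_mode: passive
  interface {
    linknumber: 0
    knet_link_priority: 10    # prefer the dedicated link
  }
  interface {
    linknumber: 1
    knet_link_priority: 5     # fall back to the shared link
  }
}
```

With `link_mode: passive`, corosync stays on the higher-priority link and only fails over when it goes down, which is what you want while reworking the switch config on one of the two paths.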
Thanks for your response.
There are no other PVE nodes connected to this Ceph cluster, only RH OpenStack compute nodes.
I connected the same type of RH Ceph storage (not the same version) to another PVE cluster before, for a different project, but that was running PVE 6.x.
Is there any way I can check...
No. The PVE cluster I'm trying to connect has 3 nodes, but when I added the RBD storage I only added it to one node (pve-01).
But this Ceph cluster is part of a Red Hat OpenStack deployment used by 6 other compute nodes from the OpenStack cluster. The pool I tried to connect to PVE is not used by...
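For context, restricting an external RBD storage to a single node is done with the `nodes` property on the storage definition. A sketch of what that looks like in /etc/pve/storage.cfg, with the storage ID, pool name, and monitor IPs as placeholders:

```
# /etc/pve/storage.cfg (excerpt) -- IDs, pool, and monhost IPs are placeholders
rbd: rh-ceph-rbd
        content images
        krbd 0
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool pve-pool
        username pve
        nodes pve-01    # storage only activated on this node
```

The client keyring for an external cluster goes in /etc/pve/priv/ceph/<storage-id>.keyring (here, rh-ceph-rbd.keyring).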
I'm using 3x PVE 7.1-10 nodes as a cluster. I have already connected a Synology box as shared storage, and all features are working properly at the moment.
I'm working on moving some workloads from a failing Red Hat OpenStack cluster with Ceph storage to the PVE cluster. As I did many times...
Recently we were trying to set up the F5 BIG-IP VE edition on our Proxmox cluster, purely for L2 optimization on some links. We created a VM with two interfaces, each attached to a Linux bridge like this:
router1 -> f5 (on Proxmox) -> router2
eno1 -> vmbr1 (router1 to Proxmox)
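A sketch of the bridge side of that layout in /etc/pve/nodes' /etc/network/interfaces. Only eno1/vmbr1 appear in the post; the second NIC and bridge names (eno2, vmbr2) are assumptions for the router2 side:

```
# /etc/network/interfaces (excerpt) -- eno2/vmbr2 are assumed names
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1     # physical link towards router1
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno2     # physical link towards router2 (assumed NIC)
        bridge-stp off
        bridge-fd 0
```

The F5 VM then gets one virtio NIC on vmbr1 and one on vmbr2, so all traffic between the routers passes through it at L2.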
Sorry for bringing this up on an old topic.
After doing this and starting the VMs on another node in the cluster, how can I safely bring the stopped/crashed server back up?
I understand that if I start the stopped server again, it will corrupt the volumes of the VMs that I already moved to another...
I'm testing a datastore backed by NFS shares from TrueNAS Core and a QNAP NAS. I already set the NFS `soft` option while mounting, and my backups are running OK. But what is the proper way to mount NFS storage for the datastore? `-o soft,local_lock=all`?
Also I'm not sure which options I need to...
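In PVE, NFS mount options for a storage are set via the `options` property of the storage definition rather than a manual mount. A sketch of a storage.cfg entry, with the storage ID, server IP, and export path as placeholders:

```
# /etc/pve/storage.cfg (excerpt) -- ID, server, and export are placeholders
nfs: truenas-backup
        export /mnt/tank/pve-backup
        path /mnt/pve/truenas-backup
        server 192.168.1.50
        content backup
        options vers=4.2,soft,timeo=150,retrans=3
```

One caveat worth hedging: `soft` avoids backup jobs hanging forever when the NAS disappears, but a soft mount can return I/O errors mid-write, so pairing it with sane `timeo`/`retrans` values (as above) is the usual compromise for a backup-only datastore.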
A few months back I had the same issue: our whole cluster rebooted while I was trying to add a new node.
We are not using Ceph, but Hitachi iSCSI storage as shared storage with multipathing. I prepared the new node with the same configuration as the other cluster nodes (all the same model HPE...
I managed to fix this by following this thread.
Since pve-ha-lrm and pve-ha-crm were already disabled on all nodes, I deleted the manager_status file from a single node. After that I was able to remove the...
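For anyone following along, the sequence described above looks roughly like this (run only with the HA services stopped; the VMID is a placeholder):

```
# on every node: make sure the HA services are stopped
systemctl stop pve-ha-lrm pve-ha-crm

# on a single node: remove the stale manager status
# (/etc/pve is the pmxcfs cluster filesystem, so this propagates)
rm /etc/pve/ha/manager_status

# remove the unwanted HA resource entries, e.g. for VM 100
ha-manager remove vm:100
```

With the LRM/CRM stopped nothing acts on the HA state, so deleting manager_status and the resource entries is safe at that point.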
I'm in the middle of creating my cluster and adding new nodes. I have kept the pve-ha-lrm and pve-ha-crm services disabled since I'm removing cables, rebooting nodes, etc. I need all nodes not to fence.
In the middle of the deployment one of my team members added VMs to the HA resources. I need to delete...
My NTP was not working, even though I had configured it properly, due to a firewall rule. I fixed that and reinstalled Ceph cleanly. Now everything is working properly: I can create ceph-mgr and ceph-mon instances without any timeout issues, and there are no more "clock skew detected" warnings either.
My PVE setup has 3x storage nodes on which I installed Ceph Nautilus 14.2.9. I went through the GUI installation and created 3x mons after the initial Ceph installation.
Right after adding the 3x monitors it warned me: "clock skew detected on mon.01 and mon.02". I configured NTP and the time is...
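A few commands that help narrow down clock-skew warnings like this, to be run on the affected nodes:

```
# Ceph's own view of monitor clock offsets
ceph time-sync-status
ceph status | grep -i skew

# verify time sync is actually working on each node
timedatectl status
chronyc tracking        # only if chrony is the NTP client in use
```

Ceph raises the warning when a mon drifts beyond `mon_clock_drift_allowed` (50 ms by default), so even a briefly unreachable NTP server, e.g. blocked by a firewall rule as in this thread, is enough to trigger it.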
I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks:
7x 6TB 7200 Enterprise SAS HDD
2x 3TB Enterprise SAS SSD
2x 400GB Enterprise SATA SSD
This setup was previously used for an old Ceph (FileStore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
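With BlueStore, the equivalent of the old FileStore journal-on-SSD layout is placing the RocksDB/WAL on a faster device when creating each HDD-backed OSD. A sketch with `pveceph` (device names and DB size are placeholders for this disk layout):

```
# create an OSD on one of the 6TB SAS HDDs, with its DB/WAL on a SATA SSD
# /dev/sdc and /dev/sdj are placeholder device names
pveceph osd create /dev/sdc --db_dev /dev/sdj --db_size 50
```

Repeated per HDD, this splits the DB partitions of several OSDs across the two 400GB SSDs, similar to how the old cluster shared them as FileStore journals.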