Thanks for the reply.
Is there any risk in doing this while the nodes are running, especially Ceph?
Does any Ceph-related communication happen over this link as well?
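In the meantime, is this the right way to check which traffic actually goes over which link (assuming the Ceph networks are defined in /etc/pve/ceph.conf)?

corosync-cfgtool -s                                            # status of both corosync links on this node
grep -E 'cluster_network|public_network' /etc/pve/ceph.conf    # which networks Ceph itself uses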
Hi all,
We have a 12-node cluster with 6x Ceph nodes and 6x compute nodes (the Ceph nodes are not running any VMs). Corosync is connected via two links: one dedicated link and one shared (mgmt) link.
Ceph is configured with a dedicated backend sync network as well.
We are changing some of the switch config and...
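For reference, the relevant part of our corosync.conf looks roughly like this (node names and IPs below are placeholders, not our real ones):

totem {
    cluster_name: pve-cluster
    config_version: 14
    interface {
        linknumber: 0    # dedicated corosync link
    }
    interface {
        linknumber: 1    # shared mgmt link
    }
    link_mode: passive
    version: 2
}

nodelist {
    node {
        name: pve-ceph-01
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1     # dedicated link
        ring1_addr: 192.168.1.1    # shared mgmt link
    }
    # ...the remaining 11 nodes follow the same pattern
}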
Thanks for your response.
There are no other PVE nodes connected to this Ceph cluster, only RH OpenStack compute nodes.
I connected the same type of RH Ceph storage (not the same version) to another PVE cluster before for a different project, but it was running PVE 6.x.
Is there any way I can check...
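For context, this is the kind of manual check I can run from one of the PVE nodes; the mon IP, pool name, user and keyring path below are just examples:

# should list the images in the pool if auth and network are fine
rbd ls volumes -m 192.168.50.11 --id pve-client --keyring /root/rh-ceph.keyring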
Hi Aaron,
No. The PVE cluster I'm trying to connect has 3 nodes, but when I added the RBD storage I only added it to one node (pve-01).
But this Ceph cluster is part of Red Hat OpenStack and is used by 6 other compute nodes from the OpenStack cluster. The pool I tried to connect to PVE is not used by...
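For reference, the entry in /etc/pve/storage.cfg looks roughly like this (pool name, username and mon IPs are placeholders):

rbd: rh-ceph
        content images
        krbd 0
        monhost 192.168.50.11 192.168.50.12 192.168.50.13
        pool volumes
        username pve-client
        nodes pve-01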
Hi All,
I'm using 3x PVE 7.1-10 nodes as a cluster. I have already connected a Synology box as shared storage, and all features are working properly at the moment.
I'm working on moving some workloads from a failing Red Hat OpenStack cluster with Ceph storage to the PVE cluster. As I did many times...
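One workflow I'm considering per disk, in case it's relevant to the question (VM ID, pool, image, storage ID and paths are placeholders):

# export the disk image from the RH Ceph cluster to a temporary raw file
rbd export volumes/volume-abc123 /tmp/vm101-disk.raw -m 192.168.50.11 --id pve-client --keyring /root/rh-ceph.keyring
# create an empty VM on the PVE side first, then import the raw file into a PVE storage
qm importdisk 101 /tmp/vm101-disk.raw synology-shared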
Hi All,
Recently we were trying to set up the F5 BIG-IP VE edition on our Proxmox cluster, purely for L2 optimization on some links. We created a VM with two interfaces, each attached to a Linux bridge, like this:
router1 -> f5 (on Proxmox) -> router2
eno1 -> vmbr1 (router1 to proxmox)
eno2 -...
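The bridge definitions in /etc/network/interfaces are the standard ones, roughly:

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0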
@fabian
Sorry for bringing up an old topic.
After doing this and starting the VMs on another node in the cluster, how can I safely bring the stopped/crashed server back up?
I understand that if I start the stopped server again it will corrupt the volumes of the VMs that I already moved to another...
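This is what I'd want to verify from a surviving node before powering the crashed one back on (the node name is an example):

# see which node currently owns each VM config; the moved VMIDs should no
# longer appear under the crashed node's directory
ls /etc/pve/nodes/*/qemu-server/
# check whether anything left on the crashed node would autostart when it boots
grep -H 'onboot' /etc/pve/nodes/pve-old-01/qemu-server/*.conf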
Hi,
I'm testing a datastore backed by NFS shares from TrueNAS Core and a QNAP NAS. I already set the NFS option soft while mounting, and my backups are running OK. But what is the proper way to mount NFS storage for a datastore? -o soft,local_lock=all?
Also I'm not sure which options I need to...
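For reference, the fstab entry I'm testing at the moment looks like this (IP and paths are examples):

# NFS mount backing the PBS datastore, with only the soft option set so far
192.168.20.5:/mnt/tank/pbs  /mnt/datastore-truenas  nfs  soft  0  0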
Hi,
A few months back I had the same issue: our whole cluster rebooted while I was trying to add a new node.
We are not using Ceph but Hitachi iSCSI storage as shared storage with multipathing. I prepared the new node with the same configuration as the other cluster nodes (all the same model HPE...
For me a similar setup is working without any issues after I enabled Use sticky connections.
On pfSense: System -> Advanced -> Miscellaneous -> Load Balancing -> Use sticky connections.
Hi all,
I managed to fix this by following this thread.
https://forum.proxmox.com/threads/cannot-delete-ha-resources-since-pve-5-to-6-update.58151/
Since pve-ha-lrm and pve-ha-crm were already disabled on all nodes, I deleted the manager_status file from a single node. After that I was able to remove the...
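Roughly the steps, from memory (the resource ID at the end is just an example):

# confirm the HA services really are stopped/disabled on every node
systemctl status pve-ha-lrm pve-ha-crm
# on one node only, remove the stale manager status file
rm /etc/pve/ha/manager_status
# after that the leftover entries can be removed normally
ha-manager remove vm:101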
Hi all,
I'm in the middle of creating my cluster and adding new nodes. I have kept the pve-ha-lrm and pve-ha-crm services disabled since I'm removing cables, rebooting nodes, etc., and I need all nodes not to fence.
In the middle of the deployment one of my team members added VMs to the HA resources. I need to delete...
Hi All,
My NTP was not working, even though it was configured properly, because of a firewall rule. I fixed that and reinstalled Ceph from scratch. Now everything is working properly: I can create ceph-mgr and ceph-mon instances without any timeout issues, and the "clock skew detected" warning is gone as well.
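For anyone hitting the same thing, a quick way to confirm time sync is back in order (nothing here is specific to my setup):

timedatectl status      # should show "System clock synchronized: yes" on every node
ceph time-sync-status   # clock skew as seen by the monitors themselves
ceph health detail      # the clock skew warning should be gone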
Hi all,
My PVE setup has 3x storage nodes on which I installed Ceph Nautilus 14.2.9. I went through the GUI installation and created 3x mons after the initial Ceph installation.
Right after adding the 3x monitors it warned me "clock skew detected on mon.01 and mon.02". I configured NTP and the time is...
Hi All,
I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks:
7x 6TB 7200 Enterprise SAS HDD
2x 3TB Enterprise SAS SSD
2x 400GB Enterprise SATA SSD
This setup was previously used for an old Ceph (FileStore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
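For the new BlueStore OSDs I'm assuming something like this per HDD, with the DB placed on one of the SSDs (device names and DB size in GiB are placeholders):

pveceph osd create /dev/sdd --db_dev /dev/sdb --db_size 100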