Hi all,
We have a 12-node cluster with 6x Ceph nodes and 6x compute nodes (the Ceph nodes are not running any VMs). Corosync is connected via two links: one dedicated link and one shared (mgmt) link.
Ceph is configured with a dedicated backend sync network as well.
We are changing some of the switch config and...
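For context, a minimal sketch of what a two-link corosync.conf can look like with corosync 3 / kronosnet; the node names, addresses and priorities below are placeholders, not our actual values:

totem {
  version: 2
  cluster_name: pve-cluster
  config_version: 10
  interface {
    linknumber: 0
    knet_link_priority: 10   # dedicated corosync link, preferred
  }
  interface {
    linknumber: 1
    knet_link_priority: 5    # shared mgmt link, fallback
  }
}
nodelist {
  node {
    name: ceph01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # dedicated link
    ring1_addr: 192.168.1.1   # shared mgmt link
  }
  # ...remaining 11 nodes follow the same pattern
}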
Hi All,
I'm using 3x PVE 7.1-10 nodes as a cluster. I have already connected a Synology box as shared storage, and all features are working properly at the moment.
I'm working on moving some workloads from a failing Red Hat OpenStack cluster with Ceph storage to the PVE cluster. As I have done many times...
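As a rough sketch of one possible migration path (the pool name, VM ID and storage name below are hypothetical, not the actual setup): export the RBD image on the OpenStack side, then import it into a new PVE VM.

# on a host that can reach the OpenStack Ceph cluster
rbd export volumes/volume-1234 /mnt/migrate/vm-disk.raw
# on a PVE node: create an empty VM and import the disk
qm create 101 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 101 /mnt/migrate/vm-disk.raw synology-shared
# the imported disk shows up as an unused disk; attach it via the GUI or qm set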
Hi All,
Recently we were trying to set up the F5 BIG-IP VE edition on our Proxmox cluster, purely for L2 optimization on some links. We created a VM with two interfaces, each attached to a Linux bridge like this:
router1 -> f5 (on proxmox) -> router2
eno1 -> vmbr1 (router1 to proxmox)
eno2 -...
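A minimal /etc/network/interfaces sketch of that layout; the second bridge name (vmbr2) and the exact stanzas are assumptions following the pattern above:

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
# router1 side: eno1 -> vmbr1 -> first F5 vNIC

auto vmbr2
iface vmbr2 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
# router2 side: eno2 -> vmbr2 -> second F5 vNIC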
Hi all,
I'm in the middle of creating my cluster and adding new nodes. I have kept the pve-ha-lrm and pve-ha-crm services disabled since I'm removing cables, rebooting nodes, etc., and I need all nodes not to fence.
In the middle of the deployment, one of my team members added VMs to HA resources. I need to delete...
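For what it's worth, HA resources can be listed and removed from the CLI on any node; a short sketch (the VM ID is just an example):

# list the HA resources that were added
ha-manager config
# remove a VM from HA management (the VM itself is not touched)
ha-manager remove vm:100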
Hi all,
My PVE setup has 3x storage nodes where I installed Ceph Nautilus 14.2.9. I went through the GUI installation and created 3x MONs after the initial Ceph installation.
Right after adding the 3x monitors it warns me "clock skew detected on mon.01 and mon.02". I configured NTP and the time is...
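A hedged sketch of checking the skew from Ceph's side and confirming every MON node uses the same time source (the NTP server below is a placeholder):

# on any node with a monitor: show the measured offset per mon
ceph time-sync-status
# check which sync service and source each node is using
timedatectl status
# e.g. with systemd-timesyncd, point all nodes at the same server
# in /etc/systemd/timesyncd.conf:
#   [Time]
#   NTP=ntp.example.com
systemctl restart systemd-timesyncd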
Hi All,
I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks:
7x 6TB 7200 Enterprise SAS HDD
2x 3TB Enterprise SAS SSD
2x 400GB Enterprise SATA SSD
This setup was previously used for an old Ceph (FileStore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
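With BlueStore, the equivalent of the old journal SSDs would be DB/WAL devices; a sketch with placeholder device names, assuming the 400GB SATA SSDs are reused for RocksDB/WAL:

# one OSD per HDD, with its DB carved out of a shared SSD (size in GiB)
pveceph osd create /dev/sdd --db_dev /dev/sdb --db_size 55
pveceph osd create /dev/sde --db_dev /dev/sdb --db_size 55
# ...repeat for the remaining HDDs, splitting them across both 400GB SSDs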
Hi all,
I'm testing a new setup where I have 3x storage boxes (with multiple drives) and 6x compute boxes (with two drives + more RAM), all part of the same cluster.
I installed Ceph on the 3x storage nodes and added all the free drives as OSDs. Ceph is up and running and I can use mounted...
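A sketch of how the pool could be exposed to the whole cluster (including the compute boxes) as RBD storage; the storage ID and pool name are placeholders:

# /etc/pve/storage.cfg entry for the hyperconverged pool
rbd: ceph-vm
    pool vm-pool
    content images,rootdir
    krbd 0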
Hi all,
I'm running a 4-node PVE cluster (6.1-3) with OVS installed. All networks are configured using OVS and all interfaces are LACP bonded.
OVS BOND (LACP) -> OVS BRIDGE -> VMs
                              -> IntPorts
Recently I noticed Windows VMs using the Intel E1000 driver disconnecting after some...
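For reference, a sketch of that OVS layout in /etc/network/interfaces, following the usual PVE OVS pattern (interface names, VLAN tag and address are placeholders):

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt0

allow-vmbr0 mgmt0
iface mgmt0 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    address 10.0.10.11/24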
Hi,
In my Proxmox network configuration, I have 4 LACP-bonded interfaces as follows:
bond0 -> for management
bond1 -> for cluster network
bond2 -> storage
bond3 -> external
All of those are connected to a Juniper switch and carry multiple VLANs. I need some of those VLANs on my PVE host as well as...
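A sketch of pulling one of those VLANs onto the host with a VLAN sub-interface, plus a VLAN-aware bridge for guests (VLAN IDs and addresses are placeholders):

# VLAN 20 of bond2 terminated on the host (e.g. for storage traffic)
auto bond2.20
iface bond2.20 inet static
    address 10.0.20.11/24

# VLAN-aware bridge on bond3 so guests can be tagged per VM
auto vmbr3
iface vmbr3 inet manual
    bridge-ports bond3
    bridge-vlan-aware yes
    bridge-vids 2-4094
    bridge-stp off
    bridge-fd 0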
Hi all,
We are using OpenStack for most of our production instances with a Ceph storage backend. Recently we added additional hardware, set up Proxmox v6, and attached it to the same Ceph storage cluster.
With the Ceph storage integration we tested a couple of instances and it works perfectly fine...
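For reference, a sketch of how an external Ceph cluster is typically attached as RBD storage on PVE; the storage ID, pool, monitor IPs and user below are placeholders:

pvesm add rbd ceph-ext --pool proxmox-pool --content images \
    --monhost "10.0.0.1;10.0.0.2;10.0.0.3" --username admin
# copy the matching keyring to /etc/pve/priv/ceph/ceph-ext.keyring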
I have two servers (px1 and px2) which connect to the same switch (Juniper QFX) with LACP. Each server has 2 physical ports, and those are bonded to bond0 using LACP. I used the default Linux bond (not OVS).
After creating the bond I created a VLAN interface on that bond, like below:
first server (px1)...
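A sketch of the /etc/network/interfaces pattern for that on px1 (NIC names, VLAN ID and address are placeholders):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

# VLAN 100 on top of the LACP bond
auto bond0.100
iface bond0.100 inet static
    address 192.168.100.11/24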