Search results

  1. Remove corosync ring 0 (link0) network without rebooting nodes

    Hi all, we have a 12-node cluster with 6x Ceph nodes and 6x compute nodes (the Ceph nodes are not running any VMs). Corosync is connected via two links: one dedicated link and one shared (mgmt) link. Ceph is configured with a dedicated backend sync network as well. We are changing some of the switch config and...
  2. [SOLVED] Failed to add unused disk from external Ceph storage

    Hi all, I'm using 3x PVE 7.1-10 nodes as a cluster. I have already connected a Synology box as shared storage, and all features are working properly at the moment. I'm working on moving some workloads from a failing Red Hat OpenStack cluster with Ceph storage to the PVE cluster. As I did many times...
  3. Deploying a layer-2 device in transparent mode

    Hi all, recently we were trying to set up an F5 BIG-IP VE on our Proxmox cluster, purely for L2 optimization on some links. We created a VM with two interfaces, each attached to a Linux bridge like this: router1 -> f5 (on Proxmox) -> router2; eno1 -> vmbr1 (router1 to Proxmox); eno2 -...
  4. [SOLVED] Remove HA entries while HA services are disabled

    Hi all, I'm in the middle of creating my cluster and adding new nodes. I have kept the pve-ha-lrm and pve-ha-crm services disabled since I'm removing cables, rebooting nodes, etc., and I need all nodes not to fence. In the middle of the deployment one of my team added VMs to HA resources. I need to delete...
  5. [SOLVED] Unable to create ceph-mgr or OSD on new setup

    Hi all, my PVE setup has 3x storage nodes where I installed Ceph Nautilus 14.2.9. I went through the GUI installation and created 3x mons after the initial Ceph installation. Right after adding the 3x monitors it warns me "clock skew detected on mon.01 and mon.02". I configured NTP and the time is...
  6. [SOLVED] Ceph BlueStore WAL and DB size – WAL on external SSD

    Hi all, I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks: 7x 6TB 7200 RPM Enterprise SAS HDD, 2x 3TB Enterprise SAS SSD, 2x 400GB Enterprise SATA SSD. This setup was previously used for an old Ceph (FileStore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
  7. [SOLVED] Dedicated Ceph storage nodes with HA stack disabled

    Hi all, I'm testing a new setup where I have 3x storage boxes (with multiple drives) and 6x compute boxes (with two drives + more RAM), all part of the same cluster. I installed Ceph on the 3x storage nodes and added all their free drives as OSDs. Ceph is up and running and I can use mounted...
  8. [SOLVED] LVM on iSCSI multipath not accessible after node reboot

    Hi all, I'm testing an iSCSI multipath setup with a 3-node Proxmox cluster. My setup is like below: 1x Debian box with tgt and 4x NICs (2x LUNs):

        <target iqn.2020-03.pvelab.srv:tar01>
            backing-store /dev/sdb
            backing-store /dev/sdc
            initiator-address 192.168.132.0/24
        </target>

    3x PVE 6.1 with...
  9. [SOLVED] VMs losing network after some time

    Hi all, I'm running a 4-node PVE cluster (6.1-3) with OVS installed. All networks are configured using OVS and all interfaces are LACP bonded: OVS bond (LACP) -> OVS bridge -> VMs / internal ports. Recently I noticed Windows VMs using the Intel E1000 driver disconnecting after some...
  10. [SOLVED] Connect LACP Linux bond with bridge and VLAN at the same time

    Hi, in my Proxmox network configuration I have 4 LACP-bonded interfaces as follows: bond0 -> management, bond1 -> cluster network, bond2 -> storage, bond3 -> external. All of those connect to a Juniper switch and carry multiple VLANs. I need some of those VLANs on my PVE host as well as...
  11. [SOLVED] Migrate instance from OpenStack to Proxmox with Ceph storage backend

    Hi all, we are using OpenStack for most of our production instances, with a Ceph storage backend. Recently we added additional hardware, set up Proxmox v6, and attached it to the same Ceph storage cluster. With the Ceph storage integration we tested a couple of instances and it works perfectly fine...
  12. [SOLVED] No ARP reply when using VLAN Linux bond with LACP

    I have two servers (px1 and px2) which connect to the same switch (Juniper QFX) with LACP. Each server has 2 physical ports, and those are bonded to bond0 using LACP. I used the default Linux bond (not OVS). After creating the bond I created a VLAN interface on top of it, like below on the first server (px1)...
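Several of the threads above (the LACP bond with bridge and VLAN, and the VLAN-on-bond ARP problem) revolve around the same Debian/Proxmox /etc/network/interfaces pattern. As a rough sketch only — the interface names (eno1/eno2), VLAN ID 20, and addresses are placeholders, not taken from the threads — an LACP bond carrying a VLAN-aware bridge plus a host VLAN IP typically looks like:

```
# /etc/network/interfaces (sketch; names, VLAN IDs, and addresses are placeholders)

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad              # LACP; must match the switch-side LAG config
    bond-xmit-hash-policy layer2+3

# VLAN-aware bridge on the bond, carrying guest traffic for many VLANs
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Host IP on one of the carried VLANs (VLAN 20 here)
auto vmbr0.20
iface vmbr0.20 inet static
    address 192.168.20.11/24
```

With a VLAN-aware bridge, the host takes its VLAN IP on vmbr0.<id> rather than on a separate bond0.<id> interface, which sidesteps attaching both a bridge and a VLAN sub-interface to the same bond at once.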
