First off, please upgrade to the latest Proxmox VE; there have been many improvements.

The MON field isn't writable if you edit an existing entry. If you create a new entry, then you need to fill in the MONs of the cluster yourself. But you should be able to select the content types in the drop-down field.
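For reference, both points have CLI equivalents; a rough sketch (the storage IDs and MON addresses below are made up, adjust them to your cluster):

apt update && apt dist-upgrade   # upgrade within the current release, assuming the PVE repositories are configured
# external Ceph cluster: the MONs have to be given explicitly
pvesm add cephfs cephfs-ext --monhost "10.0.0.1 10.0.0.2 10.0.0.3" --content backup,iso,vztmpl
# hyper-converged cluster: --monhost can be left out, the MONs are then read from /etc/pve/ceph.conf
pvesm add cephfs cephfs-hci --content backup,iso,vztmpl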
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.4.134-1-pve: 4.4.134-112
pve-kernel-4.4.35-1-pve: 4.4.35-77
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-2
pve-container: 3.0-16
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
Double click on an existing entry.

i cannot edit an existing entry? i did a cache refresh.
And please refresh/clear your browser; adding a CephFS storage looks different from what the screenshot is showing.
What does the /etc/pve/storage.cfg look like? And is it a hyper-converged cluster or external Ceph storage?

all ceph related is managed and created inside proxmox:
rbd: ceph-lxc
        content images,rootdir
        krbd 0
        pool ceph-lxc

cephfs: cephfs-data
        path /mnt/pve/cephfs-data
        content backup,vztmpl,iso
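Neither entry carries a monhost line, so the MON addresses are read from the local /etc/pve/ceph.conf, i.e. a hyper-converged setup. For comparison, an external CephFS storage entry would list them explicitly, roughly like this (values made up):

cephfs: cephfs-external
        path /mnt/pve/cephfs-external
        content backup,vztmpl,iso
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        username admin

The matching client secret would then typically go under /etc/pve/priv/ceph/.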
cat /etc/pve/ceph.conf, the MONs are read from there.

yep, 3 monitors, 3 managers, 3 of each. i just hid the IPs:
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = xxx.xxx.yyy.235/24
        fsid = 8ebca482-f985-4e74-9ff8-35e03a1af15e
        mon_allow_pool_delete = true
        mon_host = xxx.xxx.xxx.235 xxx.xxx.xxx.234 xxx.xxx.xxx.223
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = xxx.xxx.xxx.235/24

[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
        keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve-srv3]
        host = pve-srv3
        mds_standby_for_name = pve

[mds.pve-srv2]
        host = pve-srv2
        mds_standby_for_name = pve

[mds.pve-srv4]
        host = pve-srv4
        mds_standby_for_name = pve
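To cross-check the "3 of each" count on a hyper-converged node, the usual Ceph status commands can be used, for example:

ceph -s          # overall health, lists the mon/mgr/mds/osd daemons
ceph mon stat    # monitor quorum members
ceph mds stat    # active and standby MDS daemons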
the IPs are hidden

Strange. But does the cluster have quorum, pvecm status? Otherwise I am a little lost as to why it doesn't display the MONs.
Cluster information
-------------------
Name: vq-pve
Config Version: 42
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Mon Jan 27 11:50:06 2020
Quorum provider: corosync_votequorum
Nodes: 10
Node ID: 0x00000008
Ring ID: 2.10bc
Quorate: Yes
Votequorum information
----------------------
Expected votes: 19
Highest expected: 19
Total votes: 19
Quorum: 10
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000002 1 xxx.xxx.xxx.213
0x00000003 10 xxx.xxx.xxx.236
0x00000004 1 xxx.xxx.xxx.132
0x00000005 1 xxx.xxx.xxx.131
0x00000006 1 xxx.xxx.xxx.138
0x00000007 1 xxx.xxx.xxx.139
0x00000008 1 xxx.xxx.xxx.235 (local)
0x00000009 1 xxx.xxx.xxx.234
0x0000000a 1 xxx.xxx.xxx.223
0x0000000b 1 xxx.xxx.xxx.176
Hm. Does it still look the same when you edit the CephFS storage in the GUI?

i did not do anything, just tried to edit it again and it works. i can deselect the backup option for the ceph storage now.
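For the record, the same content change can also be made with pvesm; a sketch using the storage ID from the storage.cfg above:

# drop 'backup' from the CephFS storage's content types (illustrative only)
pvesm set cephfs-data --content vztmpl,iso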