[SOLVED] edit ceph-fs cluster storage settings

ilia987

I created a CephFS storage and I want to disable the VZDump (backup) content option, but I cannot disable it via the GUI, and there is no option set for the monitors.

Any solutions?
 

Attachments

  • Screenshot from 2020-01-20 11-18-57.png
First off, please upgrade to the latest Proxmox VE; there have been many improvements.

The MON field isn't writable when you edit an existing entry. Only when you create a new entry do you need to fill in the MONs of the cluster. But you should be able to select the allowed content types in the drop-down field.
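For context, a minimal sketch of what a CephFS entry with explicitly listed MONs can look like in /etc/pve/storage.cfg (usually only needed for an external Ceph cluster; the storage name, path, and addresses below are made-up placeholders):
Code:
cephfs: external-cephfs
    path /mnt/pve/external-cephfs
    content backup,iso
    monhost 192.0.2.11 192.0.2.12 192.0.2.13
    username admin
In a hyper-converged setup the MONs are instead taken from the cluster's own ceph.conf.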
 
I cannot edit an existing entry?


I am on the latest version (nightly updates):
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.4.134-1-pve: 4.4.134-112
pve-kernel-4.4.35-1-pve: 4.4.35-77
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-2
pve-container: 3.0-16
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
Double-click on an existing entry.

And please refresh/clear your browser cache; adding a CephFS storage looks different from what the screenshot is showing.
 
I did a cache refresh.
After double-clicking, I get what is shown in the picture in the first post.
 
The MON field should be filled in on an existing entry. Is that storage working?
 
Yep, Ceph is running and quite fast:
2 GB random read/write :)

I just have to fill it in with:
"mon.pve-srv2"

Attachments

  • Screenshot from 2020-01-20 12-06-01.png
What does the /etc/pve/storage.cfg look like? And is it a hyper-converged cluster or an external Ceph storage?
 
Everything Ceph-related is managed and created inside Proxmox.
The relevant items from storage.cfg:
Code:
rbd: ceph-lxc
    content images,rootdir
    krbd 0
    pool ceph-lxc
   
cephfs: cephfs-data
    path /mnt/pve/cephfs-data
    content backup,vztmpl,iso
 
Are there MONs in the ceph.conf? In any case, you can remove 'backup' from the content line and the storage will no longer be listed for backups.
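For example, either edit /etc/pve/storage.cfg directly or use pvesm; the storage name here is taken from the config posted above:
Code:
# drop 'backup' from the allowed content types, keeping templates and ISOs
pvesm set cephfs-data --content vztmpl,iso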
 
cat /etc/pve/ceph.conf; the MONs are read from there. ;)
I just hid the IPs.
Code:
[global]
     auth_client_required = cephx
     auth_cluster_required = cephx
     auth_service_required = cephx
     cluster_network = xxx.xxx.yyy.235/24
     fsid = 8ebca482-f985-4e74-9ff8-35e03a1af15e
     mon_allow_pool_delete = true
     mon_host = xxx.xxx.xxx.235 xxx.xxx.xxx.234 xxx.xxx.xxx.223
     osd_pool_default_min_size = 2
     osd_pool_default_size = 3
     public_network = xxx.xxx.xxx.235/24

[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
     keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve-srv3]
     host = pve-srv3
     mds_standby_for_name = pve

[mds.pve-srv2]
     host = pve-srv2
     mds_standby_for_name = pve

[mds.pve-srv4]
     host = pve-srv4
     mds_standby_for_name = pve
 
Strange. But does the cluster have quorum (pvecm status)? Otherwise I am a little lost as to why it doesn't display the MONs.
 
IPs are hidden.
Code:
Cluster information
-------------------
Name:             vq-pve
Config Version:   42
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Jan 27 11:50:06 2020
Quorum provider:  corosync_votequorum
Nodes:            10
Node ID:          0x00000008
Ring ID:          2.10bc
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   19
Highest expected: 19
Total votes:      19
Quorum:           10 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 xxx.xxx.xxx.213
0x00000003         10 xxx.xxx.xxx.236
0x00000004          1 xxx.xxx.xxx.132
0x00000005          1 xxx.xxx.xxx.131
0x00000006          1 xxx.xxx.xxx.138
0x00000007          1 xxx.xxx.xxx.139
0x00000008          1 xxx.xxx.xxx.235 (local)
0x00000009          1 xxx.xxx.xxx.234
0x0000000a          1 xxx.xxx.xxx.223
0x0000000b          1 xxx.xxx.xxx.176
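(For reference: votequorum needs floor(expected_votes / 2) + 1 votes for quorum, i.e. floor(19 / 2) + 1 = 10 here, which matches the Quorum: 10 line; with all 19 votes present, the cluster is quorate.)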
 
Hm. Does it still look the same when you edit the CephFS storage in the GUI?
 
I did not do anything; I just tried to edit it again and now it works. I can disable Ceph from the backup options.
Thanks, but I still don't know what really fixed it.
 
