I am planning to add a few more servers to our cluster, most of them with the intention of extending our Ceph cluster.
The chassis I have in mind is https://www.supermicro.com/en/products/system/2U/6028/SSG-6028R-E1CR24L.cfm:
this motherboard comes with a Broadcom 3008 SAS3 controller in IT mode.
Is it...
I have some Proxmox servers installed,
but on some of them swap is enabled and always full (I think it might reduce overall performance).
There is plenty of RAM available.
Is it safe to disable swap on the Proxmox host? (It has around 20 containers and 4 VMs running.)
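Before running `swapoff -a`, it is worth checking that the available RAM actually covers what is currently swapped out, since `swapoff` forces those pages back into memory. A minimal sketch (assuming the standard Linux /proc/meminfo format; the 1.2 safety margin is an arbitrary choice):

```python
# Sketch: decide whether it looks safe to run `swapoff -a` on the host.
# Assumption: standard Linux /proc/meminfo layout with KiB values.

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of KiB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])  # value in KiB
    return info

def swapoff_looks_safe(info, headroom=1.2):
    """True if MemAvailable covers the swapped-out pages with some margin."""
    swap_used = info["SwapTotal"] - info["SwapFree"]
    return info["MemAvailable"] > swap_used * headroom

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    print("swap used (KiB):", info["SwapTotal"] - info["SwapFree"])
    print("safe to swapoff:", swapoff_looks_safe(info))
```

A gentler alternative to disabling swap outright is lowering `vm.swappiness` (e.g. `sysctl vm.swappiness=10`) so the kernel only swaps under real memory pressure.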
I think I found the issue that caused it (I disabled the NFS share by mistake).
Now I have re-enabled it, but the container still fails to start.
I have access to the container's raw file. Can I restore it?
I noticed one of my LXC containers was down,
and it failed to start with the following error:
/usr/bin/lxc-start -F -n 143
lxc-start: 143: conf.c: run_buffer: 352 Script exited with status 13
lxc-start: 143: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "143"
lxc-start...
When running a Python script inside the LXC, all the Python functions I found return the host's original core count, while they should return the number of cores allocated to the LXC.
Anyone got an idea?
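For what it's worth, on Linux `os.cpu_count()` reports the CPUs the kernel exposes (the whole host, inside an LXC container), while `os.sched_getaffinity(0)` reports the cpuset the current process is actually allowed to run on, which is what a cpuset-limited container sees as its allocation. A quick sketch to compare the two from inside the container:

```python
import os

# os.cpu_count() returns the number of CPUs the system exposes, which in an
# LXC container is typically the host's full core count.
host_cpus = os.cpu_count()

# os.sched_getaffinity(0) returns the set of CPUs this process may run on,
# which respects the container's cpuset limit.
allowed_cpus = len(os.sched_getaffinity(0))

print("host reports:", host_cpus)
print("allocated to this process:", allowed_cpus)
```

So for sizing worker pools inside the container, `len(os.sched_getaffinity(0))` is the value to feed to e.g. `multiprocessing.Pool`, not `cpu_count()`.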
Just looked again: I have 5 clients (that mounted the CephFS) and they are outside of Proxmox.
They are:
Ubuntu 18.04 LTS (GNU/Linux 4.15.0-88-generic x86_64)
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
so they are on Luminous.
As far as I know this is the setup we...
I got an error:
root@pve-srv3:~# ceph balancer mode upmap
Error EPERM: min_compat_client "jewel" < "luminous", which is required for pg-upmap. Try "ceph osd set-require-min-compat-client luminous" before enabling this mode
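The error message itself points at the fix: every connected client must be at least Luminous before pg-upmap can be enabled. Since the external CephFS clients listed above are already on 12.2.12 (Luminous), raising the requirement should be safe. A sketch of the sequence (run against the cluster, so verify the output of each step first):

```shell
# Check the "client" section of the output for any pre-luminous
# feature sets before proceeding
ceph features

# Raise the minimum client compat level, then enable the upmap balancer
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
```

If `ceph features` still shows jewel-era clients connected, track them down and upgrade or unmount them first; setting the flag does not evict them, but upmap needs all clients to understand the new mappings.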
We have a working production pool,
and we are running out of space until we get more SSDs (due to COVID everything is slow).
What is the best approach to do it?
I know, we are in the process of ordering more.
I am still looking for the best performance/value for our company.
Currently there are no deals on fast SAS3 drives.