I think I found what caused it (I disabled the NFS share by mistake),
but now that I've re-enabled it the container still fails to start.
I have access to the container's raw disk file. Can I restore it from that?
I noticed one of my LXC containers was down,
and it failed to start with the following error:
/usr/bin/lxc-start -F -n 143
lxc-start: 143: conf.c: run_buffer: 352 Script exited with status 13
lxc-start: 143: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "143"
lxc-start...
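For reference, this is roughly how I'm trying to narrow it down (just a sketch; the log path is a placeholder I picked):

# check whether the storage backing the container (the NFS share) is active again
pvesm status

# confirm which storage the container's rootfs actually points at
pct config 143

# start in the foreground with verbose LXC logging to see what the pre-start hook complains about
lxc-start -n 143 -F -l DEBUG -o /tmp/lxc-143.log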
When running a Python script inside the LXC, every Python function I found returns the host's full core count, but it should return the number of cores allocated to the LXC.
Anyone got an idea?
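For context, this is roughly what I'm testing inside the container (a minimal sketch; I'm assuming the container's core limit shows up as a CPU affinity mask / cpuset, so sched_getaffinity might be the call that reflects it):

# run inside the container with Python 3
import os, multiprocessing

print(os.cpu_count())                # reports the host's full core count
print(multiprocessing.cpu_count())   # same, host core count
print(len(os.sched_getaffinity(0)))  # may reflect only the CPUs actually assigned to the container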
Just looked again: I have 5 clients (that mounted the CephFS) which are outside of Proxmox.
They are:
Ubuntu 18.04 LTS (GNU/Linux 4.15.0-88-generic x86_64)
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
But it is luminous.
As far as I know, this is the setup we...
I got an error:
root@pve-srv3:~# ceph balancer mode upmap
Error EPERM: min_compat_client "jewel" < "luminous", which is required for pg-upmap. Try "ceph osd set-require-min-compat-client luminous" before enabling this mode
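The error itself points at the fix; this is the sequence I think I need (a sketch, please correct me if I got the order wrong):

# check what the connected clients actually report before raising the requirement
ceph features

# raise the minimum required client release, then retry the balancer mode
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on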
We have a working production pool.
We are running out of space, and it will be a while until we get more SSDs (due to covid everything is slow).
What is the best approach to rebalance in the meantime?
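For reference, this is roughly how I'm checking the current usage (just a sketch):

# overall cluster and pool usage
ceph df

# per-OSD fill levels, to see how uneven the distribution is
ceph osd df tree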
I know, we are in the process of ordering more.
I am still looking for the best performance/value for our company;
currently there are no good deals on fast SAS3 drives.
I see, thanks, now I know what to look for.
Thanks.
BTW, I have another question (https://forum.proxmox.com/threads/ceph-rebalance-osd.68168/), hopefully you can take a look.