Well, I have not upgraded Ceph to Nautilus yet; as far as I understand it, that would require manually changing the package repositories for Ceph.
I didn't want to do that yet until absolutely necessary.
The /mnt/pve/cephfs directory exists, and new subdirs of /mnt/pve/ are created if I try to create new cephfs filesystems in the gui.
The journal shows these errors:
mount error 22 = Invalid argument
kernel: libceph: bad option at 'conf=/etc/pve/ceph.conf'
I noticed that two versions of libceph...
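For what it's worth, one workaround sometimes suggested for the kernel client rejecting the `conf=` option is to force the FUSE client for the CephFS storage, so the kernel mount path is never used. A sketch of the relevant entry in /etc/pve/storage.cfg, assuming the storage is named `cephfs` (the name and content list are examples, not taken from this cluster):

```ini
# /etc/pve/storage.cfg (excerpt, hypothetical)
cephfs: cephfs
        path /mnt/pve/cephfs
        content backup,iso,vztmpl
        fuse 1
```

With `fuse 1` the mount goes through ceph-fuse instead of the kernel's libceph, which avoids the "bad option" parsing error at the cost of some performance.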
Well, it was a bad combination of circumstances. As the router is a virtual appliance that was dependent on a functioning cephfs (I have since removed this dependency), the only access I had was through a KVM device provided by the data centre. Their KVM device uses a Java applet, and that appears to...
Hi there,
After the last package update, on a cluster with version 6.1-7, VMs failed to come back up due to missing cephfs dependencies. Ceph reports no problems and the health status is green.
However I get:
mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for...
I have a config issue with my Ceph cluster and can only access it via remote KVM. It's impossible to log in because the console output from Ceph
"libceph: bad option at conf=/etc/pve/ceph.conf"
kicks in before the password can be typed, and somehow messes up the terminal input.
Is there any...
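One generic way to keep kernel messages like the libceph warnings from overwriting a console login prompt (a sketch, not specific to this setup) is to lower the console log level via the kernel command line:

```ini
# /etc/default/grub  (assumption: a GRUB-booted Proxmox host)
# loglevel=3 hides kernel messages below "error" severity on the console,
# so repeated libceph warnings no longer clobber the login prompt.
GRUB_CMDLINE_LINUX_DEFAULT="quiet loglevel=3"
# apply with: update-grub, then reboot
# or temporarily at runtime (no reboot): dmesg -n 3
```

This only silences the console; the messages still land in the journal for debugging.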
I'd like to add a shutdown hook, so to speak, in systemd to automate the migration of a guest upon intentional reboot of a host. The guest is a router and firewall, so it must have low downtime, and I like to keep my security patches up to date. My goal is simply saving time, instead of manually...
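A rough sketch of how such a hook could look as a systemd unit, assuming VM ID 100 and a target node named pve2 (both hypothetical). Because systemd stops units in reverse start order, ordering this unit After=pve-guests.service means its ExecStop runs before Proxmox begins shutting down guests:

```ini
# /etc/systemd/system/migrate-router.service (hypothetical name)
[Unit]
Description=Live-migrate router VM away before host shutdown
After=pve-guests.service
Requires=pve-guests.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# Runs on intentional shutdown/reboot; VM ID and node are assumptions.
ExecStop=/usr/sbin/qm migrate 100 pve2 --online

[Install]
WantedBy=multi-user.target
```

This is untested as a full solution; in particular you'd want a timeout and a fallback if the target node is unreachable, or the shutdown will stall.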
Thanks for the response. I have 2x 1GbE ports per server node currently, and an IPMI port as well.
I would be intending to have the cluster on the private network, and expose it only via pfSense. I have no experience with Cisco rack switches, so I don't know what would be involved to set that up for...
I'm hoping this will help some others out there, but also would like to assess the viability of this solution.
I will get a /29 IP address block with 5 usable IP addresses from the colocation data centre where it will be hosted.
I should be able to pre-configure off site the pfsense...
Yes, in the end it seems that with this card you have to use hardware RAID, which then exposes the arrays as virtual drives. It's not the ideal controller for Ceph, which performs better (faster recovery from failures than RAID) when managing the disks directly.