Well, I have not upgraded Ceph to Nautilus yet; as far as I understand it, that would require manually changing the Ceph package repositories.
I didn't want to do that until absolutely necessary.
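For context, the repository change I mean would be something along these lines, assuming PVE 6 on Buster (treat this as a sketch and check the official upgrade guide before relying on it):

    # /etc/apt/sources.list.d/ceph.list
    deb http://download.proxmox.com/debian/ceph-nautilus buster main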
The /mnt/pve/cephfs directory exists, and new subdirectories of /mnt/pve/ are created if I try to create new CephFS filesystems in the GUI.
The journal shows these errors:
mount error 22 = Invalid argument
kernel: libceph: bad option at 'conf=/etc/pve/ceph.conf'
I noticed that two versions of libceph...
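For anyone hitting the same thing, this is roughly how I went about narrowing it down; the monitor address and secret file path are placeholders, not anything PVE mandates:

    systemctl status mnt-pve-cephfs.mount    # shows the exact mount command PVE generated
    journalctl -u mnt-pve-cephfs.mount -b    # full error context for this boot

    # the kernel ceph client does not understand the ceph-fuse style 'conf=' option,
    # so a manual kernel mount without it is a useful cross-check:
    ceph auth get-key client.admin > /root/admin.secret
    mount -t ceph 10.0.0.1:6789:/ /mnt/pve/cephfs -o name=admin,secretfile=/root/admin.secret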
Well it was a bad combination of circumstances. As the router is a virtual appliance, which was dependent on functioning cephfs, (I have removed this dependency) the only access I had was through a KVM device provided by the data centre. Their KVM device uses a Java applet, and that appears to...
Hi there,
After the last package update on a cluster running version 6.1-7, VMs failed to come back up due to missing CephFS dependencies. Ceph reports no problems and the health status is green.
However I get:
mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for...
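A rough sketch of the checks I used to see which Ceph client bits were actually installed after the update (package names are the Debian/PVE ones, adjust as needed):

    dpkg -l | grep -Ei 'ceph|rados'          # installed Ceph-related packages and their versions
    apt-cache policy ceph-common ceph-fuse   # installed vs candidate versions from the repos
    modinfo ceph | grep -i vermagic          # which kernel the cephfs module was built against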
I have a config issue with my Ceph cluster and can only access it via remote KVM. It's impossible to log in because the console output from Ceph,
"libceph: bad option at conf=/etc/pve/ceph.conf"
kicks in before the password can be typed and somehow messes up the terminal input.
Is there any...
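In case it helps anyone searching later, the workaround I would try is plain kernel log-level tuning, nothing Ceph-specific: edit the boot entry from the KVM (press 'e' in GRUB) and append loglevel=3 so kernel messages stop flooding the console long enough to log in, then:

    dmesg -n 3    # only errors and worse reach the console for this session
    echo 'kernel.printk = 3 4 1 3' > /etc/sysctl.d/20-quiet-console.conf
    sysctl -p /etc/sysctl.d/20-quiet-console.conf    # make it persistent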
I'd like to add a shutdown hook, so to speak, in systemd to automate migration of a guest when a host is intentionally rebooted. The guest is a router and firewall, so it must have low downtime, and I like to keep my security patches up to date. My goal is simply to save time, instead of manually...
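Something along these lines is what I have in mind; a minimal sketch assuming a hypothetical VMID 100 and a target node called node2, and the ordering against the PVE services would still need checking so the migration runs while the cluster stack is up:

    # /etc/systemd/system/migrate-router.service
    [Unit]
    Description=Migrate the router guest away before this host shuts down
    Requires=pve-cluster.service
    After=pve-cluster.service pve-guests.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    # ExecStop runs during shutdown, before the units listed in After= are stopped
    ExecStop=/usr/sbin/qm migrate 100 node2 --online

    [Install]
    WantedBy=multi-user.target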
Thanks for the response. I currently have 2x 1GbE ports per server node, plus an IPMI port.
I intend to have the cluster on the private network and expose it only via pfSense. I have no experience with Cisco rack switches, so I don't know what would be involved in setting that up for...
I'm hoping this will help some others out there, but I would also like to assess the viability of this solution.
I will get a /29 IP address block with 5 usable IP addresses from the colocation data centre where it will be hosted.
I should be able to pre-configure the pfSense off-site...
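To make the intended layout concrete, this is roughly the interface split I'm picturing on each node (addresses and NIC names are made up for illustration): vmbr0 carries the public /29 for the pfSense WAN, vmbr1 is the private cluster/LAN side.

    # /etc/network/interfaces (excerpt)
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # pfSense WAN attaches here and takes one of the 5 usable /29 addresses

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.2/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        # private cluster / management network, reachable only through pfSense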
Yes, in the end it seems that with this card you have to use hardware RAID, and it will then expose the arrays as virtual drives. It's not the ideal controller for Ceph, which performs better (faster recovery from failures than RAID) when managing the disks directly.
Thank you for the advice. Yes, PVE as IaaS is the structure I'm going for, for VM-level isolation of clusters and high availability of Kubernetes nodes.
I was seeing ZFS as just a reliable base layer for local storage on the hardware nodes that would not be part of the distributed storage. I was...
Thank you for the reply. I guess that means setting the card BIOS to present the disks as RAID rather than AHCI mode. I have read, though, that Ceph and ZFS both prefer raw device access.
I'm trying to get Proxmox installed on a Dell C6100 with SAS and SSD drives. The drives are all detected in the BIOS, but it seems there is no driver support for this particular card, so no drives show up in the installer.
The Debian installer (Stretch) also does not detect them, even when...
As I need ZFS boot capability from the outset, I went ahead and managed to PXE boot it by following the Debian PXE boot guide and using the pve-iso-2-pxe script on GitHub (sorry, no link, new user) to generate the kernel and initrd. It worked with 5.3.2.
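For reference, the pxelinux entry ended up looking roughly like this; the file names under pve/ are simply where I put the kernel and initrd the script produced, so treat this as a sketch rather than gospel:

    DEFAULT proxmox
    LABEL proxmox
        KERNEL pve/linux26
        APPEND initrd=pve/initrd ramdisk_size=16777216 rw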
Hi all,
I've been playing with Proxmox installs via PXE booting and setting up new hardware: a Dell C6100 with each node having a single SSD plus four SAS spinning-rust drives.
It's a budget setup with standard dual-port 1Gb Ethernet cards. I'm curious as to the best setup here, given that I can't...