I run a Proxmox cluster. Both my data storage and 'machine' storage fit on Ceph. To avoid issues with HA and bind mounts I use ceph-fuse inside the LXC containers to access data.
With the LXC/fuse combination, as I understand it, there is no option other than stopping the container in order to do the backup?
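For what it's worth, a stop-mode backup of such a container can be run with vzdump; the container ID and storage name below are hypothetical, and this is just a sketch of the command, not a recommendation for any particular setup:

```shell
# Stop-mode backup of CT 101 (hypothetical ID): snapshot/suspend modes
# do not capture the ceph-fuse mount inside the container, so the CT
# is cleanly stopped, backed up, and restarted.
# 'backup-store' is an assumed backup storage name.
vzdump 101 --mode stop --storage backup-store --compress zstd
```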
Before I...
Hi,
I have just upgraded my first node from 16.2.13 to Quincy. After the update/reboot the node has returned but the Ceph services have not come back, i.e. I can see the ceph mounts but the server and OSDs are showing as out. When you go to the server it looks like the first time you go to install...
Over the past few days (feels like years) I have been trying to get my containers working correctly with CephFS. Google really has not been my friend during this process because there is a lot of old stuff out there and it doesn't seem to be a popular area (?)
For the past year or so I have...
Hi.
Something weird .. had a stuck backup of this CT, not sure if it's related or not. Can't start this container. Starting would kick in HA. Have now removed it from HA resources but still no joy. A bunch of other threads mention binutils but I have checked and that is all installed...
Long story short, I need to reduce the size of a couple of containers. I was planning on using resize2fs and lvreduce to accomplish this. However, my containers are sitting on Ceph, therefore I don't have a 'path' to use with those utilities?
My only thought was to move it to local storage -...
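Since containers on a Ceph pool are stored as RBD images rather than LVM volumes, one possible alternative to moving them is mapping the image on a node to get a block-device path and shrinking it there. The pool, image, and size below are hypothetical, and this is only a sketch under the assumption of an ext4 rootfs, with the container stopped:

```shell
# Hypothetical names: pool 'ceph-lxc', image 'vm-101-disk-0', target 20G.
rbd map ceph-lxc/vm-101-disk-0                         # exposes /dev/rbd0
e2fsck -f /dev/rbd0                                    # check fs before shrinking
resize2fs /dev/rbd0 20G                                # shrink filesystem first
rbd resize --allow-shrink --size 20G ceph-lxc/vm-101-disk-0
rbd unmap /dev/rbd0
# Then lower the size in the container config to match, e.g.
# /etc/pve/lxc/101.conf: rootfs: ceph-lxc:vm-101-disk-0,size=20G
```

The ordering matters: the filesystem must be shrunk before the RBD image, or data at the end of the device is lost.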
I have a simple homelab four node cluster, running Ceph for VM and LXC storage. I have two separate pools at the moment as I am moving stuff around. One pool is HDD based and the other SSD.
On the same node, moving from the HDD to the SSD pool takes massively different times. A VM's 100 GB drive shifted over in...
I have a four node Proxmox cluster for my homelab, tied up with a 10 Gb network! Essentially 2 nodes are compute and 2 are storage focused. All four nodes have Docker and VMs but the load is focused on the 2 compute nodes. Using EXOS HDs for storage, SSDs for VMs/LXCs, and separate boot SSDs...
I have a 5 node cluster which had been working fine but seems to have gone crazy a day or two after I did an update (pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.35-1-pve)).
After a while, three of the hosts (the same three each time) go grey but are still up. I run the following commands and it comes...