I run a Proxmox cluster. Both my data storage and 'machine' storage fit on Ceph. To avoid issues with HA and bind mounts I use ceph-fuse in the LXC to access data.
As I understand it, with the LXC/fuse combination there is no option other than stopping the container in order to do the backup?
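If stop mode does turn out to be the only safe option, it can at least be scripted. A minimal sketch (the CT ID and storage name are placeholders, adjust to your setup):

```shell
# Stop-mode backup of a CT whose data is fuse-mounted.
# 101 and backup-store are examples, not real IDs from this cluster.
vzdump 101 --mode stop --storage backup-store --compress zstd
```

The container is shut down for the duration of the backup and restarted afterwards, which avoids the fuse-mount consistency problem at the cost of downtime.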
Before I...
Yes, noticed that as I posted .. going 'snow blind'.
Updated, but still the same 'install prompt' for Ceph.
root@pve03:~# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages were...
This is on another node on which I have tried to update Ceph ..
root@pve01:~# apt update
Hit:1 http://ftp.uk.debian.org/debian bullseye InRelease
Get:2 http://ftp.uk.debian.org/debian bullseye-updates InRelease [44.1 kB]
Hit:3 http://security.debian.org bullseye-security InRelease...
Hi,
I have just upgraded my first node from 16.2.13 to Quincy. After the update/reboot the node has returned, but the Ceph services have not come back, i.e. I can see the Ceph mounts but the server and its OSDs are showing as out. When you go to the server it looks like the first time you go to install...
Over the past few days .. feels like years .. I have been trying to get my containers working correctly with CephFS. Google really has not been my friend during this process, because there is a lot of old stuff out there and it doesn't seem to be a popular area (?)
For the past year or so I have...
VLANs are, as the name suggests, virtual networks; you use them to segregate traffic on the LAN, either for performance or for security. You can create as many as you need on a NIC; it's not a 1-to-1 relationship.
If you have three adapters available you could bond them together and then...
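As a rough illustration (interface names, addresses and VLAN IDs are made up), bonding three adapters and running two VLANs over the bond in Debian/Proxmox `/etc/network/interfaces` style might look like:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3
    bond-miimon 100
    bond-mode 802.3ad

# Two VLANs on the same bond -- not a 1-to-1 mapping to NICs
auto bond0.10
iface bond0.10 inet static
    address 192.168.10.5/24

auto bond0.20
iface bond0.20 inet static
    address 192.168.20.5/24
```

Both VLANs share the bonded links; the switch ports need to be configured for LACP and to trunk the matching VLAN tags.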
Hi.
Something weird .. had a stuck backup of this CT, not sure if it's related or not. Can't start this container. Starting would kick in HA. Have now removed it from HA resources but still no joy. A bunch of other threads mention binutils, but I have checked and that is all installed...
Long story short, I need to reduce the size of a couple of containers. I was planning on using resize2fs and lvreduce to accomplish this. However, my containers are sitting on Ceph, so I don't have a 'path' to use with those utilities?
My only thought was to move it to local storage -...
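One approach I've seen is to map the RBD image on a node and shrink the filesystem there. A sketch, assuming an ext4 rootfs and placeholder pool/image names; double-check the sizes before shrinking, as getting them wrong destroys data:

```shell
# CT ID, pool and image names below are hypothetical examples.
pct stop 101                                   # container must be stopped
rbd map ceph-pool/vm-101-disk-0                # exposes the image, e.g. as /dev/rbd0
e2fsck -f /dev/rbd0                            # fsck is required before shrinking
resize2fs /dev/rbd0 20G                        # shrink the filesystem first
rbd resize --size 20G --allow-shrink ceph-pool/vm-101-disk-0
rbd unmap /dev/rbd0
# then edit the size= entry in /etc/pve/lxc/101.conf to match
```

The ordering matters: filesystem first, then the RBD image, so the image never becomes smaller than the filesystem it holds.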
I have a simple homelab four node cluster, running Ceph for VM and LXC storage. I have two separate pools at the moment as I am moving stuff around. One pool is HDD-based and the other SSD.
Moving on the same node from the HDD pool to the SSD pool takes massively different times. A VM's 100 GB drive shifted over in...
I have a four node Proxmox cluster for my homelab, tied up with a 10 Gb network! Essentially 2 nodes are compute-focused and 2 are storage-focused. All four nodes have Docker and VMs, but the load is focused on the 2 compute nodes. Using EXOS HDDs for storage, SSDs for VMs/LXCs, and separate boot SSDs...
Well that makes zip sense to me .. server1 has gone green! All OSDs are up and in!
Nothing! I have Docker on there and a couple of desktop VMs. server1 was the only one with the service enabled!
Think the network issue is sorted on server1! I don't think it was the correct way to get rid of the additional network stuff, but disabling the DHCP service (systemctl disable dhcpcd.service) stops it picking up an address, and it can now ping on both the public and Ceph networks.
after a...
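A note for later readers: `systemctl disable` on its own only affects the next boot. To stop the running client as well, something like the following (assuming the service really is `dhcpcd`) would be needed:

```shell
systemctl disable --now dhcpcd.service   # disable at boot and stop it immediately
systemctl mask dhcpcd.service            # optional: prevent anything restarting it
```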
More head scratching ;) .. so have got the Ceph network back on server1 and can now ping all the other 10.107 hosts.
However, some weird stuff. The physical onboard Ethernet, which isn't used, had the 192.168.107.55 address that was mentioned in the heartbeat failure.
server1 and server 2...
Hi,
I haven't enabled the Proxmox f/w on any of the hosts. Interestingly, all the hosts can ping the public address (which is the backup network for Ceph), but one node can't be reached on the primary Ceph link (10.107.x.x).
[global]
auth_client_required = cephx...
I have a 5 node cluster which has been working fine, but it seems to have gone crazy a day or two after I did an update (pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.35-1-pve)).
After a while three of the hosts (the same three) go grey but are still up. I run the following commands and it comes...