Search results

  1. [SOLVED] Ceph, systemd and Killmode=none

    Just checking what the recommendation is in Jan '24 .. do nothing, or update ceph-volume@.service? If it's update, I am assuming it's KillMode=mixed.
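
    If it's the update route, a minimal sketch of a drop-in override (path and file name follow the usual systemd conventions; verify what the unit currently ships first with systemctl cat ceph-volume@.service):

      mkdir -p /etc/systemd/system/ceph-volume@.service.d
      printf '[Service]\nKillMode=mixed\n' > /etc/systemd/system/ceph-volume@.service.d/override.conf
      systemctl daemon-reload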
  2. Cannot execute a LXC backup with stop mode on a HA managed and enabled Service

    I run a Proxmox cluster. Both my data storage and 'machine' storage fit on Ceph. To avoid issues with HA and bind mounts I use ceph-fuse in LXC to access data. Since I use the LXC/fuse combination, as I understand it there is no option other than to stop the container in order to do the backup? Before I...
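
    For reference, a stop-mode backup from the CLI looks roughly like this (the CT ID and storage name are examples, not taken from the thread):

      vzdump 101 --mode stop --storage backup-store --compress zstd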
  3. Ceph Upgrade To Quincy .. "Ceph is not installed on this node"

    I went forward .. all nodes updated, all looks good. Thanks for your support.
  4. Ceph Upgrade To Quincy .. "Ceph is not installed on this node"

    Are you saying I have to go back to Pacific before trying to move to Quincy again?
  5. Ceph Upgrade To Quincy .. "Ceph is not installed on this node"

    Yes, noticed that as I posted .. going 'snow blind'. Updated, but still the same 'install prompt' for Ceph.

      root@pve03:~# apt upgrade
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      Calculating upgrade... Done
      The following packages were...
  6. Ceph Upgrade To Quincy .. "Ceph is not installed on this node"

    This is on another node on which I have tried to update Ceph ..

      root@pve01:~# apt update
      Hit:1 http://ftp.uk.debian.org/debian bullseye InRelease
      Get:2 http://ftp.uk.debian.org/debian bullseye-updates InRelease [44.1 kB]
      Hit:3 http://security.debian.org bullseye-security InRelease...
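
    A hedged first check in this situation: the "install Ceph" prompt in the GUI usually means the node is still pulling Ceph packages from the old repository. The repo line below is an assumption based on the PVE 7 (bullseye) upgrade guides; verify against the official wiki:

      cat /etc/apt/sources.list.d/ceph.list
      # expected: deb http://download.proxmox.com/debian/ceph-quincy bullseye main
      apt update
      apt full-upgrade    # the upgrade guides use full-upgrade, not plain 'apt upgrade'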
  7. Ceph Upgrade To Quincy .. "Ceph is not installed on this node"

    proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
    pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e)
    pve-kernel-5.15: 7.4-4
    pve-kernel-5.13: 7.1-9
    pve-kernel-5.15.108-1-pve: 5.15.108-1
    pve-kernel-5.15.107-2-pve: 5.15.107-2
    pve-kernel-5.13.19-6-pve: 5.13.19-15
    pve-kernel-5.13.19-2-pve...
  8. Ceph Upgrade To Quincy .. "Ceph is not installed on this node"

    Hi, I have just upgraded my first node from 16.2.13 to Quincy. After the update/reboot the node has returned, but the Ceph services have not come back, i.e. I can see the Ceph mounts, but the server and OSDs are showing as out. When you go to the server it looks like the first time you go to install...
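
    Some post-upgrade sanity checks, assuming the standard Ceph systemd units (a sketch of where to look, not a fix):

      ceph -s                                 # overall cluster health
      ceph versions                           # every daemon should report 17.2.x (Quincy)
      systemctl status ceph-mon@$(hostname)   # did the monitor come back?
      systemctl list-units 'ceph-osd@*'       # look for failed OSD units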
  9. cephfs and LXC .. a voyage of discovery

    Over the past few days .. feels like years .. I have been trying to get my containers working correctly with CephFS. Google really has not been my friend during this process, because there is a lot of old stuff out there and it doesn't seem to be a popular area (?). For the past year or so I have...
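
    For anyone landing here, the approach that usually comes up is mounting CephFS on the host and bind-mounting it into the container; a minimal sketch (the CT ID and paths are hypothetical):

      # CephFS already mounted on the host at /mnt/pve/cephfs
      pct set 101 -mp0 /mnt/pve/cephfs/data,mp=/data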
  10. Running pfSense as VM - Changes to Network Interfaces in Proxmox to Support vLAN Setup

    VLANs are, as the name suggests, virtual networks; you use them to segregate traffic on the LAN, either for performance or for security. You create as many as you need on a NIC; it's not a 1-to-1 relationship. If you have three adapters available you could bond them together and then...
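
    As a sketch of what that can look like in /etc/network/interfaces (NIC names, bond mode and addresses are examples, not a recommendation):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2 eno3
          bond-mode 802.3ad
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094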
  11. Can't Start Container - run_buffer: 321 Script exited with status 32

    Hi. Something weird .. had a stuck backup of this CT, not sure if it's related or not. Can't start this container. Starting would kick in HA. Have now removed it from HA resources, but still no joy. A bunch of other threads mention binutils, but I have checked and that is all installed...
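
    A hedged way to get more detail than "status 32" (which is typically a mount failure) is a foreground start with debug logging; the CT ID is an example:

      pct start 108 --debug
      # or, lower level:
      lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log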
  12. Path For LXC On Ceph

    Long story short, I need to reduce the size of a couple of containers. Was planning on using resize2fs and lvreduce to accomplish this. However, my containers are sitting on Ceph, therefore I don't have a 'path' to use for those utilities? My only thought was to move it to local storage -...
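
    The block device does exist, it is just an RBD image rather than an LV; a very rough sketch of the shrink, with hypothetical pool/image names and sizes (back everything up first, and shrink the filesystem before the image, or you lose data):

      # with the CT stopped
      rbd map ceph-pool/vm-101-disk-0       # exposes e.g. /dev/rbd0
      e2fsck -f /dev/rbd0
      resize2fs /dev/rbd0 8G
      rbd resize --size 8G --allow-shrink ceph-pool/vm-101-disk-0
      rbd unmap /dev/rbd0
      # then update the 'size=' in the CT config to match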
  13. Moving Performance VM vs LXC

    I have a simple homelab four-node cluster, running Ceph for VM and LXC storage. I have two separate pools at the moment as I am moving stuff around: one pool is HDD based and the other SSD. On the same node, moving from the HDD pool to the SSD pool takes massively different times. A VM 100 GB drive shifted over in...
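
    For reference, on reasonably current PVE the two moves roughly correspond to these commands (IDs, disk names and pool names are examples):

      qm move-disk 100 scsi0 ssd-pool        # VM disk move
      pct move-volume 101 rootfs ssd-pool    # CT volume move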
  14. Homelab Real World Ceph Advice For VM/LXC usage

    I have a four-node Proxmox cluster for my homelab, tied up with a 10 Gb network! Essentially 2 nodes are compute and 2 are storage focused. All four nodes have Docker and VMs, but the load is focused on the 2 compute nodes. Using EXOS HDDs for storage, SSDs for VMs/LXCs, and separate boot SSDs...
  15. Unknown status hosts .. unstable ceph .. help!

    Been running fine for over 24 hrs, so this feels resolved to me! Thanks @aaron for holding my hand :)
  16. Unknown status hosts .. unstable ceph .. help!

    Well, that makes zip sense to me .. server1 has gone green! All OSDs are up and in! Nothing! I have Docker on there and a couple of desktop VMs. server1 was the only one with the service enabled!
  17. Unknown status hosts .. unstable ceph .. help!

    Think the network issue is sorted on server1! I don't think it was the correct way to get rid of the additional network stuff, but disabling the DHCP service (systemctl disable dhcpcd.service) stops it picking up an address, and now it can ping on both the public and Ceph networks. After a...
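
    A possibly cleaner alternative to disabling dhcpcd would be pinning the interfaces in /etc/network/interfaces so nothing auto-configures them; the addresses below reuse the thread's 10.107 range as an example:

      # unused NIC: stop anything from configuring it
      iface eno1 inet manual

      # ceph NIC: static address instead of DHCP
      auto eno2
      iface eno2 inet static
          address 10.107.0.55/24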
  18. Unknown status hosts .. unstable ceph .. help!

    More head scratching ;) .. so I have got the Ceph network back on server1 and can now ping all the other 10.107 hosts. However, some weird stuff: the physical onboard ethernet, which isn't used, had the 192.168.107.55 address that was mentioned in the heartbeat failure. server1 and server 2...
  19. Unknown status hosts .. unstable ceph .. help!

    Hi, I haven't enabled the Proxmox firewall on any of the hosts. Interestingly, all the hosts can ping the public address (which is the backup network for Ceph), but one node can't be reached on the primary Ceph link (10.107.x.x).

      [global]
      auth_client_required = cephx...
  20. Unknown status hosts .. unstable ceph .. help!

    I have a 5-node cluster which has been working fine, but it seems to have gone crazy a day or two after I did an update (pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.35-1-pve)). After a while, three (the same three) of the hosts go grey but are still up. Run the following commands and it comes...
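
    One hedged first check for green-to-grey nodes that are otherwise up: the GUI status comes from pvestatd, which can hang after updates, so restarting it is a common first step:

      systemctl status pvestatd
      systemctl restart pvestatd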
