Search results

  1. Host reboot (stopall) causes some VMs on other hosts to reboot

    Hello, I've been having a weird issue. My configuration is a 7-node cluster. Yesterday I was doing a scheduled update on the hosts, and live migrating VMs to other servers. When I rebooted a node, the "stopall" command caused some VMs that I had moved to other hosts to reboot. Is this a known...
  2. Minimum VMID increase to 1000?

    Hello, I would like to increase the minimum VMID to 1000, while still allowing me to manually use lower IDs. Is this possible? Thanks, John
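
    A sketch of the usual answer, assuming a PVE version that has the `next-id` datacenter option (added around PVE 7.2); it only changes the auto-suggested ID, so manually entering a lower VMID still works:

    ```
    # /etc/pve/datacenter.cfg (assumption: next-id option available in your PVE version)
    next-id: lower=1000
    ```

    The same setting is exposed in the GUI under Datacenter → Options on versions that support it.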
  3. Deactivate cephx auth

    Hello, As the title suggests, I would like to deactivate cephx authentication, as I want to remove its overhead. My Ceph networks are completely isolated, so there are no security implications. I've already done it on my dev cluster and it works. My question is: How should I go about disabling it on the...
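
    For reference, the settings involved are the standard Ceph auth options; a config fragment, assuming it is applied identically on every node before daemons are restarted one at a time:

    ```
    # ceph.conf on all nodes (restart mons first, then OSDs, one daemon at a time)
    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none
    ```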
  4. Ceph: 2 pools with different SSD sets?

    Hello, I'm thinking of making a cluster with 2 different SSD sets: one read-heavy pool on high-capacity TLC SSDs and one mixed-workload pool on more expensive MLC SSDs. Is this possible with Ceph?
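
    This is typically done with CRUSH device classes (Luminous and later). A sketch, assuming custom classes "tlc" and "mlc" and example pool/OSD names:

    ```shell
    # Tag each OSD with a custom device class (clear the auto-assigned class first):
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class tlc osd.0

    # One replicated CRUSH rule per class, failure domain host:
    ceph osd crush rule create-replicated rule-tlc default host tlc
    ceph osd crush rule create-replicated rule-mlc default host mlc

    # Create/point pools at the matching rule (pool names and PG counts are examples):
    ceph osd pool create pool-read 128 128 replicated rule-tlc
    ceph osd pool create pool-mixed 128 128 replicated rule-mlc
    ```
    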
  5. Ceph member without OSDs?

    Hello, Would it be possible to add a node to my cluster and let it access Ceph without having any OSDs itself? I'm thinking of adding disks later. Thanks!
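
    Generally yes: a node can act as a Ceph client (and optionally a monitor) without hosting any OSDs. A sketch, assuming the node has already joined the Proxmox cluster and noting that `pveceph` subcommand names vary between PVE versions:

    ```shell
    # Install the Ceph packages so the node can reach the cluster as a client:
    pveceph install
    # Optionally run a monitor on it; simply never create OSDs on this node.
    # (On newer PVE this is 'pveceph mon create'; older releases used 'pveceph createmon'.)
    pveceph mon create
    ```
    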
  6. Ceph and trim/discard

    Hello there, I have found that many VMs (mostly older ones) on my cluster (PVE 5.4, Ceph Luminous) will not free up space on RBD even after trimming. For example, there is a VM with 400GB of allocated space where rbd du shows 399GB but df -h inside the VM shows 200GB used. trim runs successfully, I've...
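
    The usual cause is that the disk was attached without discard support. A sketch, assuming VMID 100, a disk on scsi0, and an RBD storage named "rbd" (all placeholders; restate your actual volume spec):

    ```shell
    # Use the virtio-scsi controller and re-declare the disk with discard enabled:
    qm set 100 --scsihw virtio-scsi-pci
    qm set 100 --scsi0 rbd:vm-100-disk-0,discard=on
    # Power-cycle the VM so the new disk options take effect, then inside the guest:
    fstrim -av
    ```
    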
  7. Ceph: Increasing pg/pgp_num from 1024->2048

    Hello, I recently passed 41 OSDs on our cluster, and the PG calculator suggests we should now be at 2048 PGs. I'm also going to be adding 5 more OSDs soon. I've read some people saying it's best to increase pg_num in increments of 256. Has anyone got experience increasing from 1024 to 2048...
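
    The stepwise approach people describe can be sketched as follows, assuming a pool named "mypool" (a placeholder) and a pre-Nautilus cluster with no PG autoscaler; pg_num is raised before pgp_num at each step, and the loop waits for recovery to settle in between:

    ```shell
    for pg in 1280 1536 1792 2048; do
        ceph osd pool set mypool pg_num "$pg"
        ceph osd pool set mypool pgp_num "$pg"
        # Wait until backfill/recovery from this step has finished:
        while ceph health | grep -q -e backfill -e recover; do sleep 60; done
    done
    ```
    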
  8. Expanding a Proxmox/Ceph cluster. More nodes, more networking.

    Hello everyone, I've recently come to manage a Proxmox/Ceph cluster built a year ago and I've been assigned the task of expanding it. Currently, the cluster is made up of 3 nodes with 2/1 replication, each running 4x 480GB SSD OSDs (12 in total), all Bluestore. They have 10+1G networking...