Recent content by John.N

  1.

    Host reboot (stopall) causes some VMs on other hosts to reboot

    Hi,
    * I did not call the stopall; the server's normal restart procedure did. Task log: "Stop all VMs and Containers"
    * Yes, I am using HA features.
  2.

    Host reboot (stopall) causes some VMs on other hosts to reboot

    Hello, I've been having a weird issue. My configuration is a 7-node cluster. Yesterday I was doing a scheduled update on the hosts, and live migrating VMs to other servers. When I rebooted a node, the "stopall" command caused some VMs that I had moved to other hosts to reboot. Is this a known...
  3.

    Strange Ceph behavior

    Hello Gudkoff, Just answering based on weird things I've noticed myself. Is there any chance you're running backup tasks or the like at that time, and what does your network topology look like? Electing/probing states sometimes have to do with the actual network. Furthermore, have you seen...
  4.

    Minimum VMID increase to 1000?

    I just found this in the forum, which I guess answers my question: https://forum.proxmox.com/threads/changing-automatic-vmid-assignment-range.12161/ It would be a very nice feature to include this in the admin area, especially for people who run many clusters, so we can set 100, 1000, 2000 etc...
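    For reference, newer Proxmox VE releases appear to expose exactly this as a next-id range in datacenter.cfg (a sketch; the option and its availability in your version are assumptions worth verifying against the release notes):

    ```
    # /etc/pve/datacenter.cfg -- assumed option; verify your PVE version supports it
    # New guests get the first free VMID >= lower (manual lower IDs still allowed)
    next-id: lower=1000,upper=999999999
    ```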
  5.

    Minimum VMID increase to 1000?

    Hello, I would like to increase the minimum VMID to 1000, while still allowing me to manually use lower IDs. Is this possible? Thanks, John
  6.

    Deactivate cephx auth

    To extend, my thought is:
    # ceph osd set noout
    # ceph osd set norecover
    # ceph osd set norebalance
    # ceph osd set nobackfill
    # ceph osd set nodown
    # ceph osd set pause
    Then disable cephx by:
    [global]
    auth client required = none
    auth cluster required = none
    auth...
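    Spelled out, the plan above quiesces the cluster, switches the standard cephx settings, restarts the daemons, and then unsets the flags. A sketch (the full option list and the restart order are assumptions based on the usual cephx settings, not confirmed in this thread):

    ```
    # 1. Quiesce: stop I/O and prevent recovery/rebalance during the switch
    ceph osd set noout; ceph osd set norecover; ceph osd set norebalance
    ceph osd set nobackfill; ceph osd set nodown; ceph osd set pause

    # 2. In ceph.conf [global] on every node, then restart mons, mgrs, OSDs:
    #    auth cluster required = none
    #    auth service required = none
    #    auth client required = none

    # 3. Resume normal operation by clearing the flags
    ceph osd unset pause; ceph osd unset nodown; ceph osd unset nobackfill
    ceph osd unset norebalance; ceph osd unset norecover; ceph osd unset noout
    ```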
  7.

    Deactivate cephx auth

    Hello, As the title suggests, I would like to deactivate cephx authentication, as I want to remove the overhead. My Ceph networks are completely isolated, so there are no security implications. I've already done it on my dev cluster and it works. My question is: How should I go about disabling it on the...
  8.

    Ceph: 2 pools with different SSD sets?

    Hello, I'm thinking of making a cluster with 2 different SSD sets: one pool for read-heavy workloads using high-capacity TLC SSDs, and one for mixed workloads using more expensive MLC SSDs. Is this possible with Ceph?
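    One common way to do this is with CRUSH device classes (Luminous or later): tag each set of OSDs with its own class, create one CRUSH rule per class, and create a pool on each rule. A sketch (the class names, OSD IDs, pool names, and PG counts below are made up for illustration):

    ```
    # Tag OSDs with custom device classes (clear the auto-detected class first)
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class tlc osd.0
    ceph osd crush rm-device-class osd.4
    ceph osd crush set-device-class mlc osd.4

    # One replicated CRUSH rule per device class
    ceph osd crush rule create-replicated rule-tlc default host tlc
    ceph osd crush rule create-replicated rule-mlc default host mlc

    # One pool on each rule
    ceph osd pool create pool-read 128 128 replicated rule-tlc
    ceph osd pool create pool-mixed 128 128 replicated rule-mlc
    ```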
  9.

    Ceph member without OSDs?

    In that case, the storage will be picked up automatically when I add the node to the cluster, since storage.cfg is on pvefs.
  10.

    Ceph member without OSDs?

    So just run pveceph init and I'm done, correct?
  11.

    Ceph member without OSDs?

    Hello, Would it be possible to add a node to my cluster and let it access Ceph without itself having any OSDs? I'm thinking of adding disks later. Thanks!
  12.

    Ceph and trim/discard

    Debian 9.13 Kernel: 4.9.0-13-amd64 #1 SMP Debian 4.9.228-1 (2020-07-05)
  13.

    Ceph and trim/discard

    It affects all VMs as time passes. Newer VMs have rbd du nearly identical to df -h. Older VMs are reaching their maximum disk size in rbd du. fstrim -va shows that it trims successfully.
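    One thing worth checking (an assumption, not confirmed in the thread): for a guest's fstrim to actually shrink the RBD image, the virtual disk needs discard enabled on a bus that passes it through, e.g. VirtIO-SCSI. For a hypothetical VM 100 on a hypothetical RBD storage:

    ```
    # Hypothetical VMID and volume name -- adjust to your setup.
    # Re-attaches the existing disk with discard pass-through enabled;
    # the guest then needs a reboot (or disk re-plug) to pick it up.
    qm set 100 --scsi0 rbd-pool:vm-100-disk-0,discard=on
    ```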