Search results

  1.

    Host reboot (stopall) causes some VMs on other hosts to reboot

    Hi,
    * I did not call stopall; the server's normal restart procedure did. Task log: "Stop all VMs and Containers"
    * Yes, I am using HA features.
  2.

    Host reboot (stopall) causes some VMs on other hosts to reboot

    Hello, I've been having a weird issue. My configuration is a 7-node cluster. Yesterday I was doing a scheduled update on the hosts, and live migrating VMs to other servers. When I rebooted a node, the "stopall" command caused some VMs that I had moved to other hosts to reboot. Is this a known...
  3.

    Strange Ceph behavior

    Hello Gudkoff, Just answering based on weird stuff I've noticed myself. Is there any chance you're running backup tasks or the like at that time, and what does your network topology look like? Electing/probing states sometimes have to do with the actual network. Furthermore, have you seen...
  4.

    Minimum VMID increase to 1000?

    I just found this in the forum which answers my question I guess: https://forum.proxmox.com/threads/changing-automatic-vmid-assignment-range.12161/ It would be a very nice feature to include this in the admin area, especially for people that run many clusters, so we can set 100,1000,2000 etc...
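    A sketch of how this might be set cluster-wide on newer Proxmox VE releases; the `next-id` datacenter option and its exact syntax are assumptions based on recent versions, so check `man datacenter.cfg` on your install first:

    ```shell
    # Hypothetical sketch: restrict auto-assigned VMIDs to 1000 and above.
    # Manually entering lower IDs in the GUI/CLI remains possible.
    pvesh set /cluster/options --next-id lower=1000,upper=999999999

    # equivalently, as a line in /etc/pve/datacenter.cfg:
    # next-id: lower=1000,upper=999999999
    ```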
  5.

    Minimum VMID increase to 1000?

    Hello, I would like to increase the minimum VMID to 1000, while still allowing me to manually use lower IDs. Is this possible? Thanks, John
  6.

    Deactivate cephx auth

    To extend, my thought is:
    # ceph osd set noout
    # ceph osd set norecover
    # ceph osd set norebalance
    # ceph osd set nobackfill
    # ceph osd set nodown
    # ceph osd set pause
    Then disable cephx by:
    [global]
    auth client required = none
    auth cluster required = none
    auth...
  7.

    Deactivate cephx auth

    Hello, As the title suggests, I would like to deactivate cephx authentication, as I want to remove the overhead. My Ceph networks are completely isolated, so there are no security implications. I've already done it on my dev cluster and it works. My question is: How should I go about disabling it on the...
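    For reference, a sketch of the config change on a PVE-managed Ceph cluster; it naively appends to the end of the file, which assumes [global] is the last section of your ceph.conf, and the service restart order is the usual mon → mgr → osd convention:

    ```shell
    # Hypothetical sketch: disable cephx cluster-wide (isolated networks only).
    # Underscore and space-separated key forms are both accepted by Ceph.
    cat >> /etc/pve/ceph.conf <<'EOF'
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none
    EOF
    # restart daemons one node at a time, waiting for HEALTH_OK in between:
    systemctl restart ceph-mon.target   # then ceph-mgr.target, ceph-osd.target
    ```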
  8.

    Ceph: 2 pools with different SSD sets?

    Hello, I'm thinking of making a cluster with 2 different SSD sets: one pool for read-heavy workloads on high-capacity TLC SSDs, and one for mixed workloads on more expensive MLC SSDs. Is this possible with Ceph?
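    This is typically done with CRUSH device classes (Luminous and later). A sketch, where the custom class names, OSD IDs, and pool names are placeholders:

    ```shell
    # Hypothetical sketch: split two SSD sets into separate pools via
    # custom CRUSH device classes.
    # Tag each OSD with the class of its disk (clear the auto-set class first):
    ceph osd crush rm-device-class osd.0 osd.1
    ceph osd crush set-device-class tlc osd.0
    ceph osd crush set-device-class mlc osd.1
    # One replicated CRUSH rule per class:
    ceph osd crush rule create-replicated tlc-rule default host tlc
    ceph osd crush rule create-replicated mlc-rule default host mlc
    # Point each pool at its rule:
    ceph osd pool set read-pool  crush_rule tlc-rule
    ceph osd pool set mixed-pool crush_rule mlc-rule
    ```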
  9.

    Ceph member without OSDs?

    In that case, it will be automatically added to the node when I add it to the cluster, since storage.cfg is on pvefs.
  10.

    Ceph member without OSDs?

    So just run pveceph init and I'm done, correct?
  11.

    Ceph member without OSDs?

    Hello, Would it be possible to add a node to my cluster and let it access Ceph without itself having any OSDs? I'm thinking of adding disks later. Thanks!
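    A sketch of what this looks like on the new node, under the assumption that the cluster's Ceph config and storage definitions are already shared via /etc/pve:

    ```shell
    # Hypothetical sketch: join a PVE node to an existing Ceph cluster
    # as a client only, with no local OSDs.
    pveceph install            # install the Ceph packages on the new node
    # Simply skip 'pveceph osd create' -- the RBD storage defined in the
    # cluster-wide /etc/pve/storage.cfg is usable from this node as-is.
    # OSDs can be added later, e.g.: pveceph osd create /dev/sdX
    ```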
  12.

    Ceph and trim/discard

    Debian 9.13 Kernel: 4.9.0-13-amd64 #1 SMP Debian 4.9.228-1 (2020-07-05)
  13.

    Ceph and trim/discard

    It affects all VMs as time passes. Newer VMs have nearly identical rbd du and df -h output. Older VMs are reaching their max disk size in rbd du. fstrim -va shows that it trims successfully.
  14.

    Ceph and trim/discard

    SSD+Discard option set in Proxmox GUI and virtio-scsi disks are used as recommended. No snapshots. That's why it's driving me crazy!
  15.

    Ceph and trim/discard

    Hello there, I have found many VMs (mostly older) on my cluster (PVE 5.4, Ceph Luminous) that will not free space up on RBD even after trimming. For example, there is a VM with 400GB allocated space and rbd du shows 399GB, whereas df -h inside the VM shows 200GB used. trim runs successfully, I've...
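    For comparison, a minimal sketch of the usual checklist for discard on RBD-backed VMs; the VMID, storage name, and volume name below are placeholders:

    ```shell
    # Hypothetical sketch: enable discard pass-through on a virtio-scsi disk.
    # The guest must see the change (power-cycle, not just reboot-in-place).
    qm set 100 --scsi0 rbd-pool:vm-100-disk-0,discard=on,ssd=1

    # Inside the guest, trim all mounted filesystems verbosely:
    fstrim -av

    # Back on the host, compare allocated vs. provisioned space:
    rbd du rbd-pool/vm-100-disk-0
    ```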
  16.

    Ceph: Increasing pg/pgp_num from 1024->2048

    Just an update, went from 1024->2048 in 128 increments. Everything went smoothly, thank you @RokaKen ! :-)
  17.

    Ceph: Increasing pg/pgp_num from 1024->2048

    Thank you for your link @RokaKen . I'm running with 2x10G LAG and SSD disks (not NVMe). Do you think that going up in 128 increments with backfill set to '1' would be OK for my users? I want minimal impact on performance. Of course it will run at the least busy time of day.
  18.

    Ceph: Increasing pg/pgp_num from 1024->2048

    Hello @Alwin , Luminous has no autoscaler AFAIK. I have osd_max_backfills set to '1', so I'm thinking of just increasing pg_num , let it run slowly (for hours probably) and then pgp_num. What do you think?
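    The stepped approach discussed in this thread can be sketched as a loop; the pool name is a placeholder, and the health check is a rough heuristic rather than an exact readiness test:

    ```shell
    # Hypothetical sketch: step pg_num/pgp_num from 1024 to 2048 in
    # increments of 128, letting the cluster settle between steps
    # (assumes osd_max_backfills is already set to 1).
    pool=mypool
    for pg in $(seq 1152 128 2048); do
        ceph osd pool set "$pool" pg_num  "$pg"
        ceph osd pool set "$pool" pgp_num "$pg"
        # crude wait: loop until no PGs are peering or backfilling
        while ceph health detail | grep -q -e peering -e backfill; do
            sleep 60
        done
    done
    ```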