Search results

  1. B

    Ceph PG #

    Thank you for your help and the warning, I will keep an eye on it.
  2. B

    Ceph PG #

    Thank you, so 1024 PGs would be preferred, being the value from the Ceph PG calculator? I am warming up to the autoscaler; I have it running on a smaller cluster and it just works, I guess. I am just not sure how it makes the adjustments and how they affect the...
  3. B

    Ceph PG #

    Proxmox 7.4.16. I am getting confused by all the numbers. I have 24 OSDs, SSD 1.46TB each, across 4 nodes, 3 replicas, total pool size 12TB, and it is going to be 80-85% full. I did the calculation with the Ceph calculator and it gets me 800, rounded to 1024 PGs, which is also the number that Ceph...
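    The numbers in the post line up with the classic rule of thumb. A minimal sketch of that arithmetic, assuming the simple formula (OSDs × 100 / replicas, rounded up to the next power of two) rather than the full per-pool weighting the Ceph PG calculator applies:

    ```shell
    # Rule-of-thumb PG count for the pool described in the post.
    osds=24
    replicas=3
    raw=$(( osds * 100 / replicas ))                  # 800
    pgs=1
    while [ "$pgs" -lt "$raw" ]; do pgs=$(( pgs * 2 )); done
    echo "raw=$raw pg_num=$pgs"                       # raw=800 pg_num=1024
    ```

    This matches the 800-rounded-to-1024 figure in the post; the autoscaler converges on a similar target on its own.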
  4. B

    PBS restore command syntax

    I am playing with the restore command for PBS and I cannot seem to get it right. I have local LVM storage called "local1-SSD" where I am trying to restore to. proxmox-backup-client restore --repository username@pbs@1.1.1.1:Datastore vm/1000/2023-08-13T23:00:39Z drive-virtio0.img.fidx...
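    The restore subcommand takes a target as its third positional argument (after snapshot and archive name), which the command in the post appears to be missing. A sketch using the values from the post; the /tmp target path is an assumption, and the raw image can then be imported into the "local1-SSD" storage:

    ```shell
    # Restore the disk archive to a local raw image file first.
    proxmox-backup-client restore \
        "vm/1000/2023-08-13T23:00:39Z" drive-virtio0.img.fidx \
        /tmp/drive-virtio0.raw \
        --repository 'username@pbs@1.1.1.1:Datastore'

    # Then attach it to the VM from local LVM storage (VMID from the post).
    qm importdisk 1000 /tmp/drive-virtio0.raw local1-SSD
    ```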
  5. B

    Numa not enabled ?

    Ok, thank you, will install.
  6. B

    Numa not enabled ?

    I have a test node on the community repo, which was an upgrade from 6.4 to 7, with just one socket, and NUMA commands work on it but NOT on any of my 6 nodes with two-socket CPUs.
  7. B

    Numa not enabled ?

    Is there any reason why NUMA would not be enabled on Proxmox? I did a clean install of PVE 7.3, enterprise repo. When I try to check numastat I get "command not found"; any NUMA command gives me that response. proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve) pve-manager: 7.3-6 (running...
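    The "command not found" response suggests the tooling, not NUMA itself, is missing: on Debian-based PVE the commands come from the numactl package (an assumption consistent with the "will install" follow-up above):

    ```shell
    # Install the NUMA userland tools (provides numactl, numastat, ...).
    apt install numactl

    numactl --hardware   # list available NUMA nodes and their memory
    numastat             # per-node allocation statistics
    ```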
  8. B

    Maintenance on a cluster

    Ok, thank you. I need to jump on it now, so I will do expected 1 and then remove after hours to bring the total number of nodes to 2 in the remaining 6.4 cluster. Thx
  9. B

    Maintenance on a cluster

    In a production environment, can this be done at any time, or should I wait until after hours when the load is low? I am on version 6.4. Thank you
  10. B

    Maintenance on a cluster

    I am reinstalling from 6.4 to 7.3 in production, so I can only do 2 nodes at a time. I am at the 4 remaining nodes in a 5-node cluster (removed 1 node already). Tomorrow I am planning to remove 2 of the 4 working nodes, leaving just two nodes, which will give me no quorum. What is the...
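    The quorum concern during a planned shrink is usually handled with pvecm. A sketch, assuming nodes are removed deliberately one at a time ("oldnode" is a placeholder name); lowering expected votes disables the normal split-brain protection, so it should only be temporary:

    ```shell
    pvecm status              # check current quorum and expected votes
    pvecm expected 1          # temporarily accept a single-node quorum
    pvecm delnode oldnode     # remove the departed node from the cluster
    ```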
  11. B

    Noticed high swap usage related to VMs running for a long time

    I am referring to swap usage on the host. Thx
  12. B

    Noticed high swap usage related to VMs running for a long time

    As stated, the longer the VMs run, the higher the swap usage on the host. I have swappiness set to 10, but with VMs running for 300-400 days the swap is getting full. I can see that when rebooting 5-10 of them the swap goes down 10-20%. Is there a particular setting that manages the use of swap...
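    A sketch of the usual knobs, assuming the goal is to curb host swapping; the reboot observation above is consistent with pages staying in swap until they are touched again or swap is drained:

    ```shell
    sysctl vm.swappiness                              # show the current value
    sysctl -w vm.swappiness=10                        # apply until next reboot
    echo 'vm.swappiness = 10' >> /etc/sysctl.conf     # persist across reboots

    # Optional: drain existing swap (needs enough free RAM to hold it all).
    # swapoff -a && swapon -a
    ```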
  13. B

    Failed deactivating swap /dev/pve/swap

    Similar message here: 5-node cluster with Dell servers on Intel CPUs, fully updated version 6 with community subscription. I had to reboot two servers for maintenance in the last two months and each had this message: Failed deactivating swap /dev/pve/swap A Stop job is running for /dev/dm-8 (8...
  14. B

    HA setup and reboot due watchdog

    To answer your other questions, at least as far as 6.4-1 is concerned: I think that HA is a cluster setting, not a node setting. You set it up under Datacenter. Then you tell the datacenter which VMs are participating and which are not. Perhaps you set the state of the VM to something other than...
  15. B

    HA setup and reboot due watchdog

    That has not been my experience, but I am on version 6.4-1. I have a mix of VMs on the cluster: the majority participate in HA but a small number of them do not. At least on 6.4-1 it is not a problem, and you can create and run VMs not participating in HA on that node. Thx
  16. B

    SSD Drives

    Are you planning to just run Proxmox itself on SSDs, which is not that big of a deal? If you are planning to run your VMs from SSDs, there are a few things to consider: what type of VMs, what the storage type is, etc. As a general rule you can run Proxmox on regular SSDs; running VMs on SSDs in...
  17. B

    CPU type host vs. kvm64

    Thank you for the link. I wonder when this changed; the default on Proxmox is still kvm64 on 7.3-4, perhaps because it is the most compatible one. I started to use host as the CPUs in our cluster are identical, and I haven't had any issues. I am curious what the best compromise is between the...
  18. B

    CPU type host vs. kvm64

    I just live migrated one of the systems back and forth across 4 nodes several times. No issues with the same CPU model. The system is stable and operational after 6 live migrations with the host CPU type configured.
  19. B

    CPU type host vs. kvm64

    I will test that today. LnxBil, I see that in the link VictorSTS provided, and I was confused by this as well. The Proxmox documentation states: "In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don’t care about live migration or have a...
  20. B

    CPU type host vs. kvm64

    I have all nodes with exactly the same CPU model, core count, etc. In general, is there a significant increase in CPU performance with the host type vs. the default kvm64? I have all VMs set to kvm64, but I was reading some Proxmox documentation and it says: "If you don’t care about live migration...
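    For testing the two settings, the CPU type can be flipped per VM with qm (VMID 100 is a placeholder). "host" passes the physical CPU flags through, which mainly helps flag-sensitive workloads but requires identical CPUs across nodes for safe live migration:

    ```shell
    qm config 100 | grep '^cpu'   # show the VM's current CPU type
    qm set 100 --cpu host         # expose the host CPU model to the guest
    qm set 100 --cpu kvm64        # revert to the compatible default
    ```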