Search results

  1. PBS restore command syntax

    I am playing with the restore command for PBS and I cannot seem to get it right. I have local LVM storage called "local1-SSD" where I am trying to restore. proxmox-backup-client restore --repository username@pbs@1.1.1.1:Datastore vm/1000/2023-08-13T23:00:39Z drive-virtio0.img.fidx...
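
    A minimal sketch of what the full invocation could look like, assuming the snapshot and repository from the excerpt, that the archive name can be given without the .fidx index suffix, and that the image is restored to a local raw file first and then imported into "local1-SSD" with qm importdisk (the target file name and VMID are made up):

        # restore the disk archive from PBS into a plain raw file
        proxmox-backup-client restore \
            "vm/1000/2023-08-13T23:00:39Z" drive-virtio0.img restored-virtio0.raw \
            --repository 'username@pbs@1.1.1.1:Datastore'
        # import the raw file as a new disk on the LVM storage
        qm importdisk 1000 restored-virtio0.raw local1-SSD
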
  2. Numa not enabled ?

    ok, thank you, will install.
  3. Numa not enabled ?

    I have a test node on the community repo which was an upgrade from 6.4 to 7 with just one socket, and the numa commands work on it but NOT on any of my 6 nodes with two-socket CPUs on them.
  4. Numa not enabled ?

    Is there any reason why NUMA would not be enabled on Proxmox? I did a clean install of PVE 7.3, enterprise repo. When I try to check numastat I get "command not found"; any numa command gives me that response. proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve) pve-manager: 7.3-6 (running...
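
    numastat is not part of a default PVE install; on Debian it ships in the numactl package, so a quick check could look like this:

        apt update
        apt install numactl      # provides numastat and numactl
        numastat                 # per-NUMA-node allocation counters
        numactl --hardware       # NUMA topology as seen by the kernel
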
  5. Maintenance on a cluster

    ok, thank you. I need to jump on it now, so I will set expected to 1 and then do the removal after hours to bring the total number of nodes down to 2 in the remaining 6.4 cluster. Thx
  6. Maintenance on a cluster

    In a production environment, can this be done at any time, or should I wait until after hours when the load is low? I am on version 6.4. Thank you
  7. Maintenance on a cluster

    I am reinstalling from 6.4 to 7.3 in production, so I have to do a maximum of 2 nodes at a time. I am at the 4 remaining nodes in a 5-node cluster (removed 1 node already). Tomorrow I am planning to remove 2 nodes out of the 4 working nodes, leaving just two nodes, which will give me no quorum. What is the...
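
    A rough sketch of the usual sequence for this, assuming the nodes are removed with pvecm after being shut down (the node name is a placeholder; the expected-votes value is the one from the earlier excerpt):

        pvecm status             # check current votes and quorum first
        pvecm delnode nodename   # remove a node that is permanently gone
        pvecm expected 1         # temporarily lower expected votes so the remaining nodes stay quorate
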
  8. Noticed high swap usage related to VMs running for a long time

    I am referring to swap usage on the host. Thx
  9. Noticed high swap usage related to VMs running for a long time

    As stated, the longer the VMs are running, the higher the swap usage on the host. I have swappiness set to 10, but with VMs running for 300-400 days the swap is getting full. I can see that when rebooting 5-10 of them the swap goes down 10-20%. Is there a particular setting that manages the use of swap...
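
    A short sketch of the host-side knobs involved, assuming the goal is to check the current value, lower it, and push swapped pages back into RAM when enough memory is free (the sysctl.d file name is just an example):

        cat /proc/sys/vm/swappiness              # current value
        sysctl vm.swappiness=10                  # change at runtime
        echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf    # persist across reboots
        swapoff -a && swapon -a                  # drain swap back to RAM; needs enough free memory
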
  10. Failed deactivating swap /dev/pve/swap

    Similar message here: 5-node cluster with Dell servers on Intel CPUs, fully updated version 6 with community subscription. I had to reboot two servers for maintenance in the last two months and each had this message: Failed deactivating swap /dev/pve/swap A Stop job is running for /dev/dm-8 (8...
  11. HA setup and reboot due watchdog

    To answer your other questions, at least as far as 6.4-1 is concerned, I think that HA is a cluster and not a node setting. You set it up under Datacenter. Then you tell the datacenter which VMs are participating and which are not. Perhaps you set the state of the VM to something other than...
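
    Roughly the same thing on the CLI, with a hypothetical VMID, is the ha-manager resource list: VMs added there participate in HA, VMs left out do not:

        ha-manager status                       # cluster-wide HA state
        ha-manager add vm:100 --state started   # enroll a VM as an HA resource
        ha-manager remove vm:100                # a VM not listed is simply not managed by HA
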
  12. HA setup and reboot due watchdog

    That has not been my experience, but I am on version 6.4-1. I have a mix of VMs on a cluster: the majority of VMs participate in HA, but a small number of them do not. At least on 6.4-1 it is not a problem, and you can create and run VMs not participating in HA on that node. Thx
  13. SSD Drives

    Are you planning to just run Proxmox on SSDs, which is not that big of a deal? If you are planning to run your VMs from SSDs, there are a few things to consider. What type of VMs, what is the storage type, etc. As a general rule you can run Proxmox on regular SSDs; running VMs on SSDs in...
  14. CPU type host vs. kvm64

    Thank you for the link. I wonder when this changed; the default on Proxmox is still kvm64 on 7.3-4, perhaps because it is the most compatible one. I started to use host as our CPUs in the cluster are identical, and I haven't had any issues. I am curious what's the best compromise between the...
  15. CPU type host vs. kvm64

    I just live migrated one of the systems back and forth across 4 nodes several times. No issues with the same CPU model. The system is stable and operational after 6 live migrations with the host type CPU configured.
  16. CPU type host vs. kvm64

    I will test that today. LnxBil, I see that in the link VictorSTS provided, and I was confused by this as well. Proxmox documentation states: "In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don’t care about live migration or have a...
  17. CPU type host vs. kvm64

    I have all nodes with exactly the same CPU model, core count, etc. In general, is there a significant increase in CPU performance with the host type vs. the default kvm64? I have all VMs set to kvm64, but I was reading some Proxmox documentation and it says: "If you don’t care about live migration...
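
    For reference, switching between the two is a single VM setting; a sketch with a hypothetical VMID:

        qm set 100 --cpu host      # expose the host CPU flags (identical CPUs across nodes recommended)
        qm set 100 --cpu kvm64     # back to the compatible default
        qm config 100 | grep cpu   # show the cpu line if one is set
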
  18. Getting to BIOS

    that's it, thank you
  19. Getting to BIOS

    I am trying to boot to the BIOS on the console to boot from another source like an ISO, and I need to do it that way for various reasons. I cannot send the break signal "esc" as the console is delayed and boots the system. Is this still the best way of doing it: qm sendkey vmid esc perhaps...
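
    Two possible workarounds, both sketched with a hypothetical VMID and ISO path: send the key repeatedly right after starting so the console delay does not matter, or skip the firmware menu and point the boot order at the ISO directly:

        qm start 100 && for i in $(seq 1 20); do qm sendkey 100 esc; sleep 0.2; done

        qm set 100 --ide2 local:iso/rescue.iso,media=cdrom
        qm set 100 --boot order=ide2
        qm start 100
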
  20. Proxmox 6.1 ISO

    I need it for testing, specifically to check what the Ceph client version is and whether it is updated during upgrades from 6.1.x to 6.4.x, which is what my current cluster is on. It is regarding another post...
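
    For the version check itself, the standard commands on any node are enough; whether the client gets updated across the 6.1.x to 6.4.x upgrade would then show up by running them before and after:

        ceph --version                              # Ceph client version on the node
        dpkg -l | grep -E 'ceph-common|librbd1'     # Debian package versions of the client libraries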