Search results

  1. VNC argument/port question

    I have VNC working with args: -vnc added to the config file. What happens if the VM is migrated to another node where another VM is using the same port? Is there a wider range of ports that can be used instead of 59xx, or can that range be extended so I can add it to more VMs from...
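The config line discussed above might look like the following sketch (the VM ID and display number are placeholder assumptions, not from the post); with `-vnc`, display N listens on TCP port 5900+N, which is where the 59xx range comes from:

```
# /etc/pve/qemu-server/<vmid>.conf -- display 77 would listen on TCP 5977
args: -vnc 0.0.0.0:77
```

Because the port is hard-coded per VM rather than allocated per node, two VMs with the same display number would indeed collide if migrated onto one node.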
  2. Strange network behavior on one port in a bond

    Is there anything else that can be checked? I cannot run iperf for now. I was able to apply a change to an interface description, which restarted the network on node1. Now outbound traffic is using enp4s0f0 and enp5s0f0 is not passing any traffic out (outbound); both interfaces...
  3. Strange network behavior on one port in a bond

    Please note, the traffic TO node1 from the switch (incoming) is fine and working across both interfaces enp4s0f0 and enp5s0f0. The problem is with sending from node1 (outgoing per nload): only enp5s0f0 is sending unless it is unplugged, at which point enp4s0f0 takes over; otherwise enp4s0f0 is...
  4. Strange network behavior on one port in a bond

    Cannot test at the moment as the node is in production in a 4-node cluster with about 200 VMs on Ceph storage. I don't really have a maintenance window scheduled for now. It is working on the second slave interface. I looked at /proc/net/bonding/bond1 across the 4 nodes and they look the...
  5. Strange network behavior on one port in a bond

    Hi, I am testing by uploading VMs to the storage and generating traffic that way, which is when I took the reading from nload. I can clearly see the presence of outgoing traffic on all other ports and/or servers, and the absence of that traffic on that particular interface in outgoing...
  6. Strange network behavior on one port in a bond

    proxmox-ve: 7.4-1 ceph: 17.2.6-pve1 I have a 4-node Ceph cluster using Proxmox as the host. Public and private networks are separated using 2 x 10Gbps ports each (2 cards per node, 4 ports total). All nodes are set up in exactly the same way. Here is an example of the Ceph private config: auto...
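The bond state referenced in this thread can be inspected per node; a sketch (bond1 and the interface names are taken from the posts above, and the exact output depends on the bonding mode and transmit hash policy in use):

```
# show bonding mode, transmit hash policy, and per-slave link state
cat /proc/net/bonding/bond1

# per-interface RX/TX byte counters, as a rough alternative to nload
ip -s link show enp4s0f0
ip -s link show enp5s0f0
```

With an LACP/802.3ad bond, a single traffic flow hashes onto one slave, so one busy interface and one idle one can be normal; comparing the hash policy across the 4 nodes would be a first check.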
  7. Ceph PG #

    Thank you for your help and warning, I will keep an eye on it.
  8. Ceph PG #

    Thank you, so 1024 PGs would be preferred, being the value calculated with the Ceph PG calculator? I am warming up to the autoscaler; I have it running on a smaller cluster and it is just working, I guess. I am just not sure how it makes the adjustments and how they affect the...
  9. Ceph PG #

    Proxmox 7.4.16. I am getting confused by all the numbers. I have 24 OSDs (SSD, 1.46TB each) across 4 nodes, 3 replicas, total pool size 12TB, and it is going to be 80-85% full. I did the calculation with the Ceph calc and it gives me 800, rounded to 1024 PGs, which is also the number that Ceph...
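The arithmetic behind those figures can be sketched as follows, using the classic rule of thumb (total PGs ≈ OSDs × 100 / replicas, rounded up to the next power of two); treat this as an illustration of where 800 and 1024 come from, not as official sizing advice:

```shell
osds=24
replicas=3

# rule of thumb: 24 * 100 / 3 = 800
target=$(( osds * 100 / replicas ))
echo "$target"    # prints 800

# round up to the next power of two
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"        # prints 1024
```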
  10. PBS restore command syntax

    I am playing with the restore command for PBS and I cannot seem to get it right. I have local LVM storage called "local1-SSD" where I am trying to restore. proxmox-backup-client restore --repository username@pbs@ vm/1000/2023-08-13T23:00:39Z drive-virtio0.img.fidx...
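The general shape of the command, as I understand the proxmox-backup-client syntax, is snapshot, then archive name, then target path, with the repository given as user@realm@host:datastore. The repository string in the post is truncated, so the host, datastore, and target path below are purely illustrative placeholders:

```
proxmox-backup-client restore \
  --repository 'username@pbs@pbs-host:datastore' \
  vm/1000/2023-08-13T23:00:39Z \
  drive-virtio0.img.fidx \
  /path/to/target/drive-virtio0.img
```

Note this writes a raw image file to a path; restoring directly into an LVM-backed storage like "local1-SSD" would be a separate step (e.g. importing the image into a VM disk).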
  11. Numa not enabled ?

    OK, thank you, will install.
  12. Numa not enabled ?

    I have a test node on the community repo, which was an upgrade from 6.4 to 7, with just one socket, and numa commands work on it but NOT on any of my 6 nodes with two-socket CPUs.
  13. Numa not enabled ?

    Is there any reason why NUMA would not be enabled on Proxmox? I did a clean install of PVE 7.3, enterprise repo. When I try to run numastat I get "command not found"; any numa command gives me that response. proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve) pve-manager: 7.3-6 (running...
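Judging by the "will install" follow-up, the issue is not NUMA being disabled but the tooling being absent: numastat and friends ship in the numactl package, which is not part of a default PVE install. A sketch (needs root):

```
apt install numactl
numastat    # should now report per-NUMA-node allocation statistics
```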
  14. Maintenance on a cluster

    ok, thank you. I need to jump on it now, so I will do "expected 1" and then remove the nodes after hours to bring the total number of nodes in the remaining 6.4 cluster down to 2. Thx
  15. Maintenance on a cluster

    In a production environment, can this be done at any time, or should I wait till after hours when the load is low? I am on version 6.4. Thank you
  16. Maintenance on a cluster

    I am reinstalling from 6.4 to 7.3 in production, so I can do at most 2 nodes at a time. I am at the 4 remaining nodes of a 5-node cluster (1 node already removed). Tomorrow I am planning to remove 2 of the 4 working nodes, leaving just two nodes, which will give me no quorum. What is the...
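The usual way to keep a shrinking cluster quorate, referenced as "expected 1" in a later reply in this thread, is to lower corosync's expected vote count with pvecm. A sketch, to be run on a surviving node:

```
pvecm expected 1    # tell corosync a single vote suffices for quorum
pvecm status        # confirm the cluster is quorate again
```

This is a temporary override for maintenance; with quorum forced down this far, care is needed to avoid split-brain if removed nodes ever rejoin.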
  17. Noticed high swap usage related to VMs running for a long time

    I am referring to swap usage on the host. Thx
  18. Noticed high swap usage related to VMs running for a long time

    As stated, the longer the VMs run, the higher the swap usage on the host. I have swappiness set to 10, but with VMs running for 300-400 days the swap is getting full. I can see that when rebooting 5-10 of them, swap usage goes down 10-20%. Is there a particular setting that manages the use of swap...
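Two knobs commonly discussed for this situation, as a sketch (values are illustrative; draining swap forces those pages back into RAM, so check free memory on the host first):

```
sysctl vm.swappiness=10    # runtime change; persist it in /etc/sysctl.conf
swapoff -a && swapon -a    # drain swap back into RAM, then re-enable it
```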
  19. Failed deactivating swap /dev/pve/swap

    Similar message here: 5-node cluster with Dell servers on Intel CPUs, fully updated version 6 with community subscription. I had to reboot two servers for maintenance in the last two months, and each had this message: Failed deactivating swap /dev/pve/swap. A stop job is running for /dev/dm-8 (8...
  20. HA setup and reboot due watchdog

    To answer your other questions, at least as far as 6.4-1 is concerned: I think HA is a cluster setting, not a node setting. You set it up under Datacenter. Then you tell the datacenter which VMs are participating and which are not. Perhaps you set the state of the VM to something other than...
