While expanding monitoring of my PVE systems I stumbled over a (for me) problematic detail: the reported speed of a virtual NIC is always "10", meaning 10 Mbit/s. Specifically, the host tells me this for any VM:
root@pm1:~# cat /sys/class/net/tap110i0/speed
I appeal to your knowledge in search of the best ideas to answer the following two questions:
As a provider of VPS servers (servers to which I do not have access), what would be the best method to limit the number of emails each customer can send? (KVM virtualization via Proxmox, public IP...
Is there a way to limit the speed at which PBS backs up our servers? I'm asking because it takes all our NIC bandwidth, which makes our VMs pretty much unreachable during the backup schedule.
I've tried editing /etc/vzdump.conf, but it seems to be ignored when using PBS.
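For reference, the entry I have in mind is the global bwlimit setting in /etc/vzdump.conf, which takes a value in KiB/s; a sketch, assuming a cap of roughly 50 MB/s (the number is just an example):

```
# /etc/vzdump.conf -- global backup settings
# bwlimit is given in KiB/s; 51200 KiB/s is roughly 50 MB/s
bwlimit: 51200
```

The same limit can be passed per run with `vzdump <vmid> --bwlimit 51200`. If PBS jobs still ignore it, a per-storage `bwlimit` option in /etc/pve/storage.cfg may be worth checking on newer releases.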
As you are well aware, these days all major email providers such as Gmail, Outlook.com and many more defer delivery if too many messages per hour/minute are attempted to their accounts. In fact, I am seeing a lot of "451 too many messages, slow down" warnings in the deferred queue list.
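Since there's no access inside the guests, one hedged option is to throttle SMTP connection attempts on the Proxmox host itself with iptables' hashlimit match; a minimal sketch, where the 10/min threshold is purely illustrative:

```shell
# Cap NEW outbound SMTP connections per source IP crossing the host.
# Threshold is an example; tune it to your sending policy.
iptables -A FORWARD -p tcp --dport 25 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-name smtp-cap \
  --hashlimit-mode srcip --hashlimit-above 10/min -j REJECT
```

Note this only limits connection attempts, not messages per connection; forcing customers through a relay/smarthost with per-account quotas is the more robust approach.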
Please help: are there any default bandwidth limits for the Ethernet network card in PVE 5.2? In older versions of PVE there was a bwlimit option in /etc/vzdump.conf. Was it replaced by datacenter.cfg? https://pve.proxmox.com/pve-docs/datacenter.cfg.5.html
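As far as I can tell (please double-check against the docs for your exact version), the datacenter.cfg bwlimit entry covers clone/migration/restore traffic, while the backup limit stays in /etc/vzdump.conf; a sketch with example values, all in KiB/s:

```
# /etc/pve/datacenter.cfg -- example values, KiB/s
bwlimit: default=51200,migration=102400,restore=102400
```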
The default CT creation dialog lets us set the number of CPU cores, but not a CPU limit. However, that causes the CT to be locked to specific cores, preventing it from using other cores under load, which might result in load spikes when two or several CTs get loaded on the same core. It somewhat tries to...
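If I'm reading the pct man page right, the limit can still be set from the CLI even though the dialog omits it; a sketch, where VMID 110 and the values are examples:

```shell
# Give the CT 4 schedulable cores but cap total CPU time
# at the equivalent of 2 cores.
pct set 110 --cores 4 --cpulimit 2
```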
Can I modify the LXC behavior when the container reaches the RAM limit?
Now on some containers I see that mysql is often killed:
May 8 08:07:30 nodo3 kernel: [1504328.952937] Task in /lxc/343/ns killed as a result...
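That log line suggests the kill comes from the container's own memory cgroup, not the host. The blunt fix is to raise the CT's memory or swap allowance; a sketch, with CT 343 and the sizes (in MB) as example values:

```shell
# Check recent OOM kills attributed to this container's cgroup,
# then raise its memory/swap limits.
dmesg | grep -i "lxc/343"
pct set 343 --memory 4096 --swap 2048
```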
I'm periodically having issues with LXC containers crashing the host node.
The errors on the node are the classic nmi_watchdog "stuck" messages, and I believe that so far I have been treating the symptom instead of the cause.
Today, I had a very interesting "customer". His container was using 100% of his CPU (1...
I am in the process of migrating one Proxmox node to a new box with better specs. Sadly, after a short while I found a problem with the bandwidth: suddenly it limits incoming and outgoing bandwidth to 4-15 MB/s.
I know you may ask about background processes etc., but this is with NOTHING else running...
I would like to know whether it is possible to restrict the number of CPU cores assigned to an LXC container in such a way that the user also only sees exactly those (e.g. via htop). Currently I always see 8/8 CPU cores in every LXC container, i.e. exactly as many...
It seems like the bandwidth limit is not working for outgoing traffic using the latest Proxmox 4 from the repository and the latest kernel.
Relevant VM config:
The bridge is of the openvswitch type.
iperf3 traffic to the limited VM is capped:
iperf3 traffic from the limited VM is NOT capped:
tc rules:
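For anyone else debugging this: traffic *from* the VM is typically policed on the tap device's ingress side, while traffic *to* the VM is shaped on its egress, so both sides need inspecting separately. A diagnostic sketch, where tap110i0 is an example interface name:

```shell
# Shaping of traffic *to* the VM (tap egress):
tc qdisc show dev tap110i0
tc class show dev tap110i0
# Policing of traffic *from* the VM (tap ingress):
tc qdisc show dev tap110i0 ingress
tc filter show dev tap110i0 parent ffff:
```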
My servers have an internal rack speed of 1 Gbps but an external speed of 250 Mbps. Each time the backups start, it saturates the network card and most incoming connections to the server drop (websites unavailable, etc).
Is there a possibility, other than setting up a separate backup...