Here's what I got back from cPanel when I asked them about it:
I didn't bother pushing more past that point. If you feel like contacting them and pushing, maybe open a feature request and link it here? I'm sure others will come...
Yes, I already know that. It doesn't change a thing. You can easily test by restarting a VM, running 1-5 Mbit/s of traffic for an hour, then comparing the average/maximum speeds listed in the VM summary to the netin/netout values. They don't match, no matter how you do the math or compare plotting...
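If anyone wants to reproduce the comparison without screenshots, the raw series behind both views can be pulled from the API. A minimal sketch, with node "pve1" and VMID 100 as placeholders:

# series the Summary "Network traffic" graph is drawn from
pvesh get /nodes/pve1/qemu/100/rrddata --timeframe hour --cf AVERAGE
pvesh get /nodes/pve1/qemu/100/rrddata --timeframe hour --cf MAX
# cumulative netin/netout counters behind the Search-view columns
pvesh get /cluster/resources --type vm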
The easiest way to see the difference is to go to the Datacenter >> Node: Search view and enable the netin/out columns. Those values are correct. However, when I go to Datacenter >> Node >> Container >> Summary, or Datacenter >> Node >> Summary, the "Network Traffic" section doesn't show the...
@Alwin I would also like to see the same option. Proxmox 6 also seems to have messed up the bandwidth graphs, and they're no longer accurate. This wasn't the case on 5.x before I migrated to 6 - pve-manager/6.0-11/2140ef37 (running kernel: 5.0.21-4-pve). What's accurate is the node search list...
cPanel confirmed that they're using sysinfo(2) via Perl to get the load values, so LXC would need the kernel patch discussed here: https://github.com/lxc/lxcfs/issues/202
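For anyone wanting to verify that locally: lxcfs only virtualizes files under /proc, so the sysinfo(2) syscall still returns host-wide numbers inside a container. A minimal sketch (the Python-via-ctypes demo and the struct padding are my own, not cPanel's code):

# inside the container:
cat /proc/loadavg          # lxcfs-virtualized -> container load
python3 - <<'EOF'
import ctypes

class Sysinfo(ctypes.Structure):
    # only the leading fields we need; generous padding covers the rest
    _fields_ = [("uptime", ctypes.c_long),
                ("loads", ctypes.c_ulong * 3),
                ("_rest", ctypes.c_byte * 256)]

si = Sysinfo()
ctypes.CDLL("libc.so.6").sysinfo(ctypes.byref(si))
# loads[] are fixed-point, scaled by 2**16 (SI_LOAD_SHIFT)
print([round(x / 65536, 2) for x in si.loads])   # host load, not the container's
EOF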
Confirmed still an issue on cPanel 72 through 84. While 'uptime', 'service httpd fullstatus', and /proc/loadavg all report the LXC load, the "load average" in the WHM top-right header still reflects the VPS host node value. I opened a ticket with them to see if they can change the method in...
Is this still an issue on Proxmox 6, with the cPanel "Process Manager" showing the host load while 'uptime' and top/htop show the LXC container load? I'm in the process of onboarding a new client and would rather put them on a cPanel LXC container vs. KVM. Thanks
Was it nvme_core.default_ps_max_latency_us=1500 or nvme_core.default_ps_max_latency_us=5500? I'm using 2x 1.2TB NVMe SSDs (INTEL SSDPE2MX012T7), but the details I found on disabling the unsupported lowest power-saving state refer to using 5500 vs. 1500 for the value.
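For reference, either value is applied the same way; a sketch of the usual Debian/Proxmox steps (pick whichever latency figure turns out to be right for these drives):

# /etc/default/grub -- append the parameter to the existing line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=5500"
update-grub && reboot
# confirm the running value afterwards:
cat /sys/module/nvme_core/parameters/default_ps_max_latency_us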
Hello,
I recently upgraded from Proxmox 5 to 6, and noticed that the "Network traffic" section is not logging as accurately as before. I monitored the usage for an hour using vnstat, then took screenshots of the average and maximum graphs for that hour from the GUI. As you can see, the stats...
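For anyone repeating the test, the vnstat side of the comparison is straightforward; a sketch, assuming eth0 as the interface:

vnstat -l -i eth0     # live rx/tx rates while the test traffic runs
vnstat -h -i eth0     # hourly totals to hold against the GUI graph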
Did something change in Proxmox 5? I can't get it to mount in an LXC container using the steps from above. I keep seeing:
[194697.116353] audit: type=1400 audit(1544903181.369:142): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-105_</var/lib/lxc>"...
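In case it helps: that denial is AppArmor blocking the mount syscall inside the container. Assuming it's an NFS or CIFS share being mounted (the excerpt doesn't say), newer PVE releases can whitelist those per container instead of loosening the whole profile; a sketch, using CT 105 from the log above:

pct set 105 -features "mount=nfs;cifs"
# equivalent line in /etc/pve/lxc/105.conf:
#   features: mount=nfs;cifs
# restart the container for the new AppArmor profile to apply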
Thanks for the update! Just wanted to report no issues here:
Base Board Information
Manufacturer: Intel Corporation
Product Name: S1200SP
Version: H57532-210
[root@spice ~]# uname -a
Linux [redacted] 4.4.98-3-pve #1 SMP PVE 4.4.98-102 (Sun, 7 Jan 2018 13:15:19 +0100) x86_64...
Hi Wolfgang,
Thanks for the insight! Would other options such as VNC or start/stop/stats still be available without issues in the web UI? Am I safe to assume the end result in /etc/pve/qemu-server/<kvmid>.conf would be the following, considering the scsi ID isn't used and that's the correct...
Hello,
I'm looking to set up a new Proxmox 4.4 node using 2x 480GB SSD + 2x 2TB SAS, with each pair as a hardware RAID1 array (LSI MegaRAID). The host system and some small LXC containers will reside on the SSD array; however, I would like to give a KVM "direct" access to the 2TB RAID1 (KVM...
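One common way to hand a VM a whole block device (a sketch; VMID 100 and the by-id name are placeholders, not from the thread):

# find a stable path for the MegaRAID logical volume
ls -l /dev/disk/by-id/
# attach it raw to the VM
qm set 100 -scsi1 /dev/disk/by-id/<raid1-volume>
# which lands in /etc/pve/qemu-server/100.conf roughly as:
#   scsi1: /dev/disk/by-id/<raid1-volume>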
I'm a Systems Architect, and just so we're clear, the chances of losing data by disabling write barriers are close to none. You can forcefully terminate a KVM/LXC/what-have-you umpteen hundred times before you'll see a barrier-related data loss. Personally, I haven't come across one that wasn't...
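For completeness, here's what toggling barriers actually looks like on ext4 (a sketch; device and mount point are placeholders, and the trade-off above still applies):

# /etc/fstab -- ext4 with write barriers disabled
/dev/pve/data  /var/lib/vz  ext4  defaults,barrier=0  0  2
# or flip it on a live mount:
mount -o remount,barrier=0 /var/lib/vz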