Weird Traffic Counting

sahostking

Renowned Member
Have a few VPS servers where I see something strange occurring.

Traffic is counting like this:

Untitled.png

Although it's not really using that traffic if we check the switch.

Also, if I check inside the container, it looks like it may be some sort of "internal" traffic being counted as outbound traffic, as per this:

Code:
  862 root      20   0   58716    812    368 S  99.0  0.1   1920:43 sleep 1
    1 root      20   0   15496   1648   1552 S   0.0  0.2   0:10.44 init [2]
  692 postfix   20   0   38240   3632   3096 S   0.0  0.3   0:00.00 pickup -l -t unix -u -c
  754 root      20   0   37084   2416   2204 S   0.0  0.2   0:00.15 /sbin/rpcbind -w
  864 root      20   0   80032   5652   4808 S   0.0  0.5   0:00.00 sshd: root [priv]
  865 sshd      20   0   55184   3096   2416 S   0.0  0.3   0:00.00 sshd: root [net]
  871 root      20   0    1468   1036    288 S   0.0  0.1   0:00.00 ifconfig eth0
  874 root      20   0    1468   1040    288 S   0.0  0.1   0:00.00 uptime
  877 root      20   0    1468   1036    288 S   0.0  0.1   0:00.00 cat resolv.conf
  882 root      20   0    1468   1044    288 S   0.0  0.1   0:00.00 sh
  883 root      20   0    1468   1040    288 S   0.0  0.1   0:00.00 bash
  886 root      20   0   20288   3248   2736 S   0.0  0.3   0:00.00 /bin/bash
  893 root      20   0    1468   1040    288 S   0.0  0.1   0:00.00 grep "A"
  896 root      20   0  184944   1856   1588 S   0.0  0.2   0:00.37 /usr/sbin/rsyslogd
  897 root      20   0    1468   1040    288 S   0.0  0.1   0:00.00 gnome-terminal
  900 root      20   0    1468   1036    288 S   0.0  0.1   0:00.00 echo "find"
  904 root      20   0    1468   1040    288 S   0.0  0.1   0:00.00 bash
  905 root      20   0    1468   1044    288 S   0.0  0.1   0:00.00 sh
  906 root      20   0   21952   2376   2000 R   0.0  0.2   0:00.00 top -c
  958 root      20   0  208088   4096   1652 S   0.0  0.4   0:11.31 /etc/3proxy/3proxy /etc/3proxy/3proxy.cfg
  959 root      20   0    4244   1388   1292 S   0.0  0.1   0:00.00 /lib/startpar/startpar -f -- 3proxyinit
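
(For reference, the container's own view of its interface counters can be read directly; a minimal sketch, assuming the container's interface is called eth0:)

Code:
# inside the container: cumulative bytes since the interface came up
ip -s link show eth0
cat /sys/class/net/eth0/statistics/rx_bytes
cat /sys/class/net/eth0/statistics/tx_bytes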


Now this is what the process is doing when checked with lsof:

Code:
COMMAND   PID USER   FD   TYPE    DEVICE SIZE/OFF      NODE NAME
iotgqonca 862 root  cwd    DIR    253,17     4096         2 /
iotgqonca 862 root  rtd    DIR    253,17     4096         2 /
iotgqonca 862 root  txt    REG    253,17   625878    524595 /usr/bin/iotgqoncac
iotgqonca 862 root    0u   CHR       1,3      0t0         6 /dev/null
iotgqonca 862 root    1u   CHR       1,3      0t0         6 /dev/null
iotgqonca 862 root    2u   CHR       1,3      0t0         6 /dev/null
iotgqonca 862 root    3u  IPv4 218819195      0t0       TCP fmmdff.hkdns.co.za:52048->1.1.1.1:ftp (ESTABLISHED)
iotgqonca 862 root    4u   raw                0t0 227336172 00000000:00FF->00000000:0000 st=07
iotgqonca 862 root    5u   raw                0t0 227336173 00000000:00FF->00000000:0000 st=07
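
(As a side check, the TCP sockets held by that PID can be inspected with ss, which reports per-socket byte counters; this is only a sketch and it does not cover the two raw sockets listed above:)

Code:
ss -tnpi | grep -A1 'pid=862'
# the indented info line shows bytes_acked / bytes_received per TCP socket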


and the traffic of the node itself:

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 0c:c4:7b:4f:67:84 txqueuelen 1000 (Ethernet)
RX packets 392974720 bytes 85299071547 (79.4 GiB)
RX errors 0 dropped 54039 overruns 0 frame 0
TX packets 2296366372 bytes 1611520478665 (1.4 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7200000-c727ffff


Any ideas why traffic on LXC containers is so strange? This is the second client this has occurred with now.
 
That is the total traffic that goes into the container, not the bandwidth, so if the container is long-running, that number is not that unrealistic.
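
To illustrate the difference: the graph plots a cumulative byte counter, while the rate limit caps throughput. A minimal sketch of checking both on the host (the interface name is only an example; use the container's host-side veth):

Code:
IF=veth126i0                                    # example name for the container's host-side veth
cat /sys/class/net/$IF/statistics/rx_bytes      # cumulative bytes since the interface came up
A=$(cat /sys/class/net/$IF/statistics/rx_bytes); sleep 10
B=$(cat /sys/class/net/$IF/statistics/rx_bytes)
echo $(( (B - A) / 10 )) bytes/s                # approximate current rate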
 
Well, the container has been on for 1 day and did that amount of traffic as per the Proxmox console, even though the node itself is not counting that and our data center switch is not seeing that traffic either. You state it's not bandwidth but traffic. Define traffic, then. I thought traffic was the amount of data going in and out, and bandwidth was the limit set.

But somehow within the container it is seeing that traffic.

For example, I see the container running something like cat /etc/resolv.conf using a full core, so in top it shows 100% usage. It looks very suspicious, but that's not our problem.

Our problem is why it is counting that as traffic IN. The container cannot receive 37.34TB of traffic, because it is limited on the network port to 1.25MB/s. If I work it out on an online calculator, it would take 324 days to do that much traffic on a port that is limited to 1.25MB/s in or out.
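
(A rough back-of-the-envelope check of that figure, assuming 1 TB = 10^12 bytes and a fully saturated 1.25 MB/s link:)

Code:
# seconds needed to move 37.34 TB at 1.25 MB/s, converted to days
echo "37.34 * 10^12 / (1.25 * 10^6) / 86400" | bc -l
# on the order of 340-350 days, i.e. roughly a year of continuous transfer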
 
Can you post the output of
Code:
cat /proc/net/dev
?
 
Code:
root@vz-cpt-3-ssd:~# cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
tap171i0: 230251242 1833051    0    0    0     0          0         0 3775122083 6499801    0    0    0     0       0          0
veth164i0: 4681844   32993    0    0    0     0          0         0 87885174 1087488    0    0    0     0       0          0
veth170i0: 193925454944 152794616    0  820    0     0          0         0 11031346267 167823268    0    3    0     0       0          0
    lo: 667925469 1433375    0    0    0     0          0         0 667925469 1433375    0    0    0     0       0          0
 vmbr1: 424703884459 268319942    0    0    0     0          0         0 2969350626364 118586766    0    0    0     0       0          0
  eno2: 445684569401 529279141    0    0   85     0          0   2993916 3098199585160 2070843714    0    0    0     0       0          0
veth122i0: 36260596661958 39179107977    0    0    0     0          0         0 92635034 1107168    0    0    0     0       0          0
veth169i0: 48709317  318335    0    0    0     0          0         0 117774009 1361175    0    0    0     0       0          0
tap148i0: 66414308761 63119532    0    0    0     0          0         0 3998838550 23523881    0  991    0     0       0          0
veth109i0: 37421165058 22845904    0    0    0     0          0         0 5714357077 26538168    0  148    0     0       0          0
tap168i0: 5861318036 7422479    0    0    0     0          0         0 4470586924 20410130    0    0    0     0       0          0
veth167i0: 30901773  259118    0    0    0     0          0         0 110555713 1312827    0    0    0     0       0          0
 vmbr0: 3523880171 14661693    0    0    0     0          0         0 2515064815 4880986    0    0    0     0       0          0
  eno1: 86513215203 399569472    0 54443    0     0          0   3238408 1667282551048 2356623293    0    0    0     0       0          0
veth173i0: 7372287747 6986759    0    0    0     0          0         0 11343273388 13803709    0    0    0     0       0          0
veth134i0: 437307878 1396490    0    0    0     0          0         0 887098293 9812309    0    0    0     0       0          0
tap111i0: 962916552 3551053    0    0    0     0          0         0 1561129144 14326982    0    0    0     0       0          0
veth126i0: 44626215491458 48218014926    0    0    0     0          0         0 90873356 1103361    0    0    0     0       0          0
veth142i0: 7656793800 4733345    0    0    0     0          0         0 1175043866 7159273    0    0    0     0       0          0
veth159i0: 35241449715447 38078182697    0 13610    0     0          0         0 149322228 1433601    0    0    0     0       0          0
tap112i0: 6430489821 8947845    0    0    0     0          0         0 3786882221 18277707    0    0    0     0       0          0
veth172i0: 29085072  254174    0    0    0     0          0         0 412148206 3477280    0    0    0     0       0          0
tap1000i0: 1355679651 13354790    0    0    0     0          0         0 2241797203 24862187    0    0    0     0       0          0
tap161i0: 16764618666 14914309    0    0    0     0          0         0 10230460219 32689960    0    0    0     0       0          0
veth163i0:    5502     131    0    0    0     0          0         0 21158296  263531    0    0    0     0       0          0
tap114i0: 1374113127 4421727    0    0    0     0          0         0 2227280947 15353300    0    0    0     0       0          0
 
veth126i0: 44626215491458 48218014926 0 0 0 0 0 0 90873356 1103361 0 0 0 0 0 0
This line says that, according to the kernel, the host received ~40TB of traffic from that interface, which belongs to vmid 126.
I cannot say why this is so high, but we only have the information the kernel gives us.

Are you sure the container does not send that much data? Even to the host or other VMs?
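
One way to cross-check would be to compare the host-side veth counter with what the container itself reports (a sketch using vmid 126 and the interface names from this thread; note that RX on the host's veth corresponds to TX inside the container):

Code:
# on the host: bytes the host received from the container over its veth
cat /sys/class/net/veth126i0/statistics/rx_bytes

# from the host, inside the container: bytes the container thinks it has sent
pct exec 126 -- cat /sys/class/net/eth0/statistics/tx_bytes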
 
Yip

Very sure, as I can see what the node has here:

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 0c:c4:7a:4f:06:84 txqueuelen 1000 (Ethernet)
RX packets 403291348 bytes 86965611033 (80.9 GiB)
RX errors 0 dropped 54691 overruns 0 frame 0
TX packets 2392999399 bytes 1700780371956 (1.5 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7200000-c727ffff

As you can see, the TX is 1.5TB and the RX is 80.9GB.

However, on the switch side itself it shows this for the VM. I highlighted half of the IP that is causing it, as can be seen in the image below, along with the traffic it has done.

Untitled2.png

Now if, for example, it really did do that traffic, then that would mean the limiting does not work in the network resource section, as per the image below:

Untitled.png

So in essence something is wrong somewhere, because doing that much traffic would take almost 1 full year when limited to 1.25MB/s.
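
(To verify whether the limit is actually in place, one could look at the container config and at the traffic shaping rules on the host-side veth; a sketch only: vmid 126 and the interface name are taken from earlier in this thread, and whether this PVE version applies the rate= setting via tc is an assumption worth confirming:)

Code:
pct config 126 | grep ^net            # shows the net0 line including any rate= setting
tc -s qdisc show dev veth126i0        # shaping rules and byte/drop counters on the host side
tc -s filter show dev veth126i0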
 
Again, this also includes traffic to the bridge and to other containers.

Also, can you post your pveversion -v?
 
root@vz-cpt-3-ssd:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-3
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9
 
