rrddata unit of measurement

dmilbert

Hi,

I am currently working on a script to track the traffic usage of the VMs in Proxmox; however, I have come to a bit of a halt due to a lack of information.

I am using the following command to get the netin and netout values for the past hour in minute intervals:

Bash:
root@prx001:/var/log/pn# pvesh get /nodes/prx010/qemu/2235/rrddata --timeframe=hour --output-format=json

which then gives me the following output:
JSON:
  {
    "cpu": 0.321932621746476,
    "disk": 0,
    "diskread": 4174774.61333333,
    "diskwrite": 343985.493333333,
    "maxcpu": 8,
    "maxdisk": 805306368000,
    "maxmem": 15032385536,
    "mem": 8468508117.03333,
    "netin": 66113.6333333333,
    "netout": 391885.466666667,
    "time": 1654613640
  },
  {
    "cpu": 0.241451666020525,
    "disk": 0,
    "diskread": 2348299.94666667,
    "diskwrite": 296543.573333333,
    "maxcpu": 8,
    "maxdisk": 805306368000,
    "maxmem": 15032385536,
    "mem": 8475974062.56667,
    "netin": 71023.9733333333,
    "netout": 130698.82,
    "time": 1654613700
  },
  {
    "cpu": 0.229832867969625,
    "disk": 0,
    "diskread": 2593256.10666667,
    "diskwrite": 206160.213333333,
    "maxcpu": 8,
    "maxdisk": 805306368000,
    "maxmem": 15032385536,
    "mem": 8612847308.9,
    "netin": 96769.06,
    "netout": 392454.4,
    "time": 1654613760
  },

My question is: what unit of measurement (i.e. byte, KB, MB, etc.) is used for the netin and netout metrics?
Sadly, I have not found this info in the API documentation or in other forum posts.
 
Hi,
this took some time to figure out :).

First, the pvestatd daemon needs to update the RRD database. This all starts here:

https://git.proxmox.com/?p=pve-mana...c2b20fa695c0434d21f851030c017ba7;hb=HEAD#l208

This calls sub vmstatus:
https://git.proxmox.com/?p=qemu-ser...2e2b2a84f04b7a04800deb80624c39f;hb=HEAD#l2830

which, for netin/netout, calls sub read_proc_net_dev, which reads /proc/net/dev:
https://git.proxmox.com/?p=pve-comm...28df1cabde93685ed4a9c9dab11fe582;hb=HEAD#l331

This is written to an RRD file at a path looking like /var/lib/rrdcached/db/pve2-vm/<VMID>.

You can inspect one of those files with rrdtool info /var/lib/rrdcached/db/pve2-vm/104 to see its data sources and round-robin archives.


The /proc/net/dev file looks like this; the code extracts the byte values.
Code:
cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo: 66382002  121163    0    0    0     0          0         0 66382002  121163    0    0    0     0       0          0
  eno1: 265237781  315963    0  457    0     0          0     11628 24841376  190911    0    0    0     0       0          0
 vmbr0: 257668199  232134    0    0    0     0          0     11199 22979973  124985    0    0    0     0       0          0
 vmbr1:       0       0    0    0    0     0          0         0        0       0    0    0    0     0       0          0
....


Therefore, this value should be in bytes :)
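
The extraction is simple: skip the two header lines, then take the first Receive column and the first Transmit column for each interface. The Proxmox code does this in Perl; purely as an illustration (not the actual implementation), the same idea in PHP could look like this:

PHP:
<?php
// Illustrative sketch: read the cumulative rx/tx byte counters per
// interface from /proc/net/dev, the data read_proc_net_dev extracts.
$lines = file('/proc/net/dev', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$stats = [];
foreach (array_slice($lines, 2) as $line) {      // skip the two header lines
    [$iface, $rest] = explode(':', $line, 2);
    $cols = preg_split('/\s+/', trim($rest));
    $stats[trim($iface)] = [
        'rx_bytes' => (int)$cols[0],             // first Receive column
        'tx_bytes' => (int)$cols[8],             // first Transmit column
    ];
}
print_r($stats);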
 
Hi,

Thanks for the answer, I took a look at the info you posted.
To my knowledge, the byte values in /proc/net/dev are cumulative, meaning they are ever-increasing counters.

How does Proxmox/rrddata go about calculating the byte value that is reported by the API?
Does it save the previous transmit values and then just subtract the two to calculate the data transmitted since last check?

Another note: the /proc/net/dev values get reset on system reboot. How does Proxmox deal with this when the VM is rebooted from within the machine, given that the VM process on the hypervisor is not affected/ended when a reboot is performed from within the VM?
 
Update: the values reported by the API are incorrect when it comes to netin and netout.

So I conducted my own tests, based on the info provided above that rrddata/the API gets its values from /proc/net/dev.

On one of the VMs I executed the command cat /proc/net/dev && sleep 60 && cat /proc/net/dev exactly at "2022/06/10 10:01:00".
OUTPUT:
Code:
[root@dmilbert ~]# cat /proc/net/dev && sleep 60 && cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
 as0t2:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t7:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t4:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t9:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t1:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t6:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
  eth0: 50153927  833449    0   35    0     0          0         0   412618    2635    0    0    0     0       0          0
    lo:  271466    1356    0    0    0     0          0         0   271466    1356    0    0    0     0       0          0
 as0t3:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
as0t10:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t8:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t0:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t5:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
as0t11:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
docker0:       0       0    0    0    0     0          0         0        0       0    0    0    0     0       0          0

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
 as0t2:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t7:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t4:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t9:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t1:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t6:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
  eth0: 53189052  883986    0   37    0     0          0         0   423712    2724    0    0    0     0       0          0
    lo:  273396    1367    0    0    0     0          0         0   273396    1367    0    0    0     0       0          0
 as0t3:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
as0t10:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t8:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t0:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
 as0t5:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
as0t11:       0       0    0    0    0     0          0         0      144       3    0    0    0     0       0          0
docker0:       0       0    0    0    0     0          0         0        0       0    0    0    0     0       0          0

Just working with the Receive/netin values as a base: the bytes received on eth0 in 60 s were 53189052 - 50153927 = 3,035,125 (about 3 MB). According to the Proxmox API, however:
Code:
root@prx001:~# pvesh get /nodes/prx001/qemu/343/rrddata --human-readable --timeframe=hour --noborder | sort -k 11 | tail -3
0.0219202660353848  0    254107.306666667 18817.7066666667 6      338228674560 4294967296 2639760208       50724.7683333333 1031.52833333333 1654848060
0.00986061864213649 0    1370301.44       21274.4533333333 6      338228674560 4294967296 2639909266.8     50288.9133333333 193.253333333333 1654848120
cpu                 disk diskread         diskwrite        maxcpu maxdisk      maxmem     mem              netin            netout           time

(To convert the time column more easily, use date -d @<timestamp> +"%Y/%m/%d %T".)

The netin is 50288.9133333333 bytes (about 0.05 MB), which is obviously incorrect.

Is this a potential bug or am I missing something in the way Proxmox handles netin and netout?
 
Hi,
just the quick answer first :). Proxmox doesn't look into the VM's /proc/net/dev. It looks at /proc/net/dev on the host and checks the interfaces belonging to a particular VM.

And now to answer the unit question.
The other thing the RRD setup does is average over time. I tried this out with an otherwise idle VM and started iperf with a bandwidth limit of 10 MBit/s. I got this:

Code:
root@ella:/home/shrdlicka# pvesh get /nodes/11-cl1/qemu/103/rrddata --human-readable --timeframe=hour --noborder | sort -k 11 | tail -7
0.010672042006015   0    52428.8          26214.4          2      42949672960 4294967296 1565507515.73333 1849.715         91886.13         1654851720
0.0121819606988207  0    52428.8          20718.9333333333 2      42949672960 4294967296 1566202675.2     2943.34166666667 1320621.86       1654851780
0.0120710982008477  0    52428.8          13824            2      42949672960 4294967296 1568626892.8     2770.255         1319321.73666667 1654851840
0.012058835138514   0    52428.8          19872.4266666667 2      42949672960 4294967296 1566780142.93333 2687.77833333333 1319151.31333333 1654851900
0.0123190359951583  0    52428.8          23517.8666666667 2      42949672960 4294967296 1567755059.2     2696.74          1299520.08833333 1654851960
0.010048014320578   0    52428.8          21988.6933333333 2      42949672960 4294967296 1568249309.86667 1602.955         430446.656666667 1654852020

For the samples where it was running for the full minute, netout gets a value of 1,319,151.3 bytes/s, i.e. 1,319,151.3 * 8 / (1024 * 1024) ≈ 10.06 MBit/s. So the unit of these fields is bytes/s.

In your case, the netin value of 50,288.9 needs to be multiplied by 60, which gives about 3,017,334 bytes, matching your measured 3,035,125.
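
For a tracking script this conversion generalizes: each sample is a bytes-per-second average over a 60 s step (for the hour timeframe), so the total traffic is the sum of sample * step. A minimal PHP sketch, assuming $json already holds the JSON output of the rrddata call from the first post (the variable names are mine):

PHP:
<?php
// Minimal sketch: total up traffic from rrddata samples.
// Assumes $json holds the output of something like:
//   pvesh get /nodes/<node>/qemu/<vmid>/rrddata --timeframe=hour --output-format=json
$samples = json_decode($json, true);

$step = 60;                              // hour timeframe: 60 s averages
$totalIn = $totalOut = 0.0;
foreach ($samples as $s) {
    if (!isset($s['netin'], $s['netout'])) {
        continue;                        // samples at the window edges can be empty
    }
    $totalIn  += $s['netin']  * $step;   // bytes/s * seconds = bytes
    $totalOut += $s['netout'] * $step;
}
printf("netin: %.0f bytes, netout: %.0f bytes\n", $totalIn, $totalOut);

Keep in mind that other timeframes (day, week, ...) aggregate over larger steps, so the factor of 60 only holds for the hour view.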
 
Hi,

Thank you for the answer; it has cleared up a lot of confusion.

There is, however, one question remaining: if Proxmox/rrddata looks at the /proc/net/dev file on the host to calculate a VM's traffic, which interfaces are used for the calculation, all of them or just one?

As an example, sticking with the same VM: on the host there are four interfaces listed for this VM in /proc/net/dev (whereas inside the VM there are only two):
Code:
root@prx001:~# cat /proc/net/dev | grep 343
tap343i0: 36908187  269140    0    0    0     0          0         0 9060102378 148110853    0 19010    0     0       0          0
fwbr343i0: 6823565787 148564339    0    5    0     0          0    737174        0       0    0    0    0     0       0          0
fwpr343p0: 38478124  275302    0 6289    0     0          0         0 9101279756 148893101    0    0    0     0       0          0
fwln343i0: 9101279756 148893101    0 6289    0     0          0         0 38478124  275302    0    0    0     0       0          0

How is the average bytes/s calculated, and where is the time window (i.e. the 60 s average) configured?
 
The magic happens here (https://git.proxmox.com/?p=qemu-ser...2e2b2a84f04b7a04800deb80624c39f;hb=HEAD#l2900 ):

Code:
...
next if $dev !~ m/^tap([1-9]\d*)i/;
...

It only selects the tap interfaces for counting :)
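
That filter also explains the four interfaces you listed: the fwbr/fwpr/fwln firewall devices are skipped, and only tap343i0 is counted. As a small PHP illustration of the same match (the interface list is taken from your output):

PHP:
<?php
// Illustrative PHP version of the Perl filter above: keep only
// tap<VMID>i<N> interfaces and recover the VMID from the name.
foreach (['tap343i0', 'fwbr343i0', 'fwpr343p0', 'fwln343i0'] as $iface) {
    if (preg_match('/^tap([1-9]\d*)i/', $iface, $m)) {
        echo "$iface counts towards VM {$m[1]}\n";   // only tap343i0 matches
    }
}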

For the 60-second interval, I'm guessing this is the line in the code that creates the database with a 60-second step:
https://git.proxmox.com/?p=pve-clus...0d39ea02118191c9cffbbc0b0b83a18;hb=HEAD#l1117

More info about the RRD database and this step can be found here: https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html#STEP,_HEARTBEAT,_and_Rows_As_Durations
 
Hello!

I'm reviving this thread because I'm having some issues understanding the /rrddata API and whether it works as it should.

I want to use it to track network bandwidth usage of our virtual machines.

To test it, I downloaded a 1 GB file from a speedtest service.

Code:
root@vero:~# wget https://speed.hetzner.de/1GB.bin
--2023-08-28 20:26:04--  https://speed.hetzner.de/1GB.bin
Resolving speed.hetzner.de (speed.hetzner.de)... 88.198.248.254, 2a01:4f8:0:59ed::2
Connecting to speed.hetzner.de (speed.hetzner.de)|88.198.248.254|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: ‘1GB.bin’

1GB.bin             100%[===================>]   1000M  13.1MB/s    in 72s

2023-08-28 20:27:17 (13.8 MB/s) - ‘1GB.bin’ saved [1048576000/1048576000]

Then I called the API from my PHP script and from the Proxmox host itself using the OP's command, and the results were the same.

I calculated the total netin with the PHP script and got 17724138.437407, which is very far from 1048576000.

From the Proxmox console, the network usage for this VM is 14 MB at 22:27 and 4 MB at 22:28.

While writing this post, my eyes fell on the "Hour Average" label: multiplying the netin sum by 60 gives 17724138.437407 * 60 ≈ 1,063,448,306 bytes, i.e. the 1 GB file plus some overhead.

So this is most likely solved, but I have another question: both the PHP API call and the pvesh call return not 60 but 70 results for the hour timeframe. How should I deal with this? I want to call this endpoint every hour; should I use the "time" Unix timestamp field to keep only the results created since minute 00 of the previous hour?
 
If you use PHP, for example, you can use Carbon and run through the results with a foreach loop. I'm doing it like this:
PHP:
use Carbon\Carbon;

// calculate traffic per day
$traffic_in = 0;
$traffic_out = 0;

foreach ($data as $d) {
    $date = Carbon::createFromTimestamp($d->time);
    if ($date >= Carbon::today()) {
        // netin/netout are bytes/s averaged over a 60 s step, so scale to bytes
        $traffic_in  += $d->netin * 60;
        $traffic_out += $d->netout * 60;
    }
}

echo $traffic_in; // total bytes received since today 00:00:00
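
Note the multiplication by 60 in the loop: as established earlier in the thread, netin/netout are bytes-per-second averages over a 60-second step, so each sample has to be scaled by the step length to get actual bytes before summing.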
 
