[FEATURE REQUEST] Web GUI for cloud-init MTU configuration

patchouli

New Member
Jul 2, 2024
I want to configure the MTU of VMs with cloud-init, but there is no Web GUI option to set the MTU in the automatically generated configs.
I don't want to manage custom config files myself just for that.
Please let us configure the cloud-init MTU value from the Web GUI.
 
Hi!

Hm, is there any benefit to setting the MTU in the CloudInit config file instead of overriding the value on the VM's configured network device itself?

In any case, if you'd like someone to discuss and work on this feature, please create a feature request at our Bugzilla [0]. I think it'd be easy to add as a configuration option, but I'd still like to see the benefit of setting this in the CloudInit config over the VM's network device configuration.

[0] https://bugzilla.proxmox.com/enter_bug.cgi?product=pve&component=Web UI
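
For reference, the existing per-device setting can be applied from the CLI, roughly like this (the VM ID 100 and bridge vmbr0 are just placeholders):
Code:
# set an explicit MTU on the VM's VirtIO NIC (VM ID 100 is a placeholder)
qm set 100 --net0 virtio,bridge=vmbr0,mtu=9000
# or use mtu=1 to have the NIC inherit the bridge's MTU
qm set 100 --net0 virtio,bridge=vmbr0,mtu=1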
 
@dakralex, personally, I believe that more robust UI support for custom CI would be more beneficial in the long run than gradually adding individual options to the base CI.

That said, a custom MTU setting is needed when VMs use VLANs; otherwise, the additional VLAN header may cause the packet size to exceed the underlying MTU (1500+4). In our OpenStack environment, we set the VM’s MTU slightly smaller to ensure correctness.
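
As a rough worked example (assuming a 1500-byte underlying MTU, standard 802.1Q tagging, and eth0 as a placeholder interface name), the 4-byte VLAN header leaves 1496 bytes for the guest:
Code:
# 1500 (underlying MTU) - 4 (802.1Q VLAN header) = 1496 bytes usable by the guest
ip link set dev eth0 mtu 1496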

PS: I just realized that you mentioned the hardware/NIC setting rather than inheriting from the parent interface. I missed it initially, as we don't have to set it in our daily routine. You are correct that there is already a way to set a custom MTU.


 
@dakralex, personally, I believe that more robust UI support for custom CI would be more beneficial in the long run than gradually adding individual options to the base CI.
Second that, there's potential to improve the UI for cloud-init users in Proxmox VE.

While replying to this thread earlier, a simple solution that came to mind was to either let users create, edit and delete cloud-init snippets directly in the UI, or let them select from a predefined pool of options and only show those they are interested in configuring. If there's enough need, the second would be a good idea from a UX perspective, but it would be a substantial maintenance burden to keep PVE's options up to date with cloud-init's options.
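
For context, the snippet part of this is already possible from the CLI today; a rough sketch (the storage name, snippet file names and VM ID 100 are placeholders, and the storage needs the Snippets content type enabled):
Code:
# allow the 'local' storage to hold snippets (adjust the content list to your setup)
pvesm set local --content images,iso,vztmpl,backup,snippets

# attach custom cloud-init user/network snippets to a VM (VM ID 100 is a placeholder)
qm set 100 --cicustom "user=local:snippets/user.yaml,network=local:snippets/network.yaml"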
 
Hi!

Hm, is there any benefit to setting the MTU in the CloudInit config file instead of overriding the value on the VM's configured network device itself?

In any case, if you'd like someone to discuss and work on this feature, please create a feature request at our Bugzilla [0]. I think it'd be easy to add as a configuration option, but I'd still like to see the benefit of setting this in the CloudInit config over the VM's network device configuration.

[0] https://bugzilla.proxmox.com/enter_bug.cgi?product=pve&component=Web UI
I already set mtu=1 (thus inheriting the bridge MTU, which is 8950 in my case) for the VirtIO network device in the VM's Hardware section, but when I use CI to set up the network config, the interface comes up with the default MTU of 1500.

In my case, the guest VM is a Proxmox Backup Server with a manually installed cloud-init package. In that VM, the cloud-init network configuration is written to a CI-generated config file at /etc/network/interfaces.d/50-cloud-init:
Code:
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
auto lo
iface lo inet loopback
    dns-nameservers 192.168.aaa.bbb
    dns-search [REDACTED]

auto eth0
iface eth0 inet static
    address 172.16.xxx.yyy/24
    gateway 172.16.xxx.zzz

And the result is:
Code:
root@guest:~# ip -d link list eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP mode DEFAULT group default qlen 1000
    link/ether bc:24:11:92:8f:d5 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 8950 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus virtio parentdev virtio2
    altname enp6s18
Note that maxmtu is 8950, but mtu is only 1500.

So, even though the VirtIO network device supports a maximum MTU of 8950, it is never actually configured to 8950 without manual intervention or a custom CI config file.
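
For completeness, the manual intervention is just a one-off change inside the guest that does not survive a reboot, roughly:
Code:
# one-off manual intervention inside the guest (not persistent across reboots)
ip link set dev eth0 mtu 8950
ip link show eth0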

It would be nice if I could configure the interface MTU of the CI network-config [0] from the web GUI, or at least have it inherited from the hardware MTU configuration.

[0] https://cloudinit.readthedocs.io/en/latest/reference/network-config-format-v1.html#physical-example
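
For reference, the network-config v1 format linked above already defines an mtu key for physical devices, so the generated config would only need one extra line, roughly like this (MAC and addresses are placeholders):
Code:
version: 1
config:
    - type: physical
      name: eth0
      mac_address: 'bc:24:11:xx:xx:xx'
      mtu: 8950
      subnets:
      - type: static
        address: '172.16.xxx.yyy'
        netmask: '255.255.255.0'
        gateway: '172.16.xxx.zzz'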
 
So, even though the VirtIO network device supports a maximum MTU of 8950, it is never actually configured to 8950 without manual intervention or a custom CI config file.

I was curious, so I gave it a quick try:

Code:
qm create 9030 --memory 4096 --name vm9030 --sockets 1 --onboot no --cpu cputype=host
qm importdisk 9030 /mnt/pve/bbnas/template/qcow/ubuntu-24.04-noble-server-cloudimg-amd64.img blockbridge-nvme --format raw
qm set 9030 --scsihw virtio-scsi-single --scsi0 blockbridge-nvme:vm-9030-disk-0,aio=native,iothread=1
qm set 9030 -net0 virtio,bridge=vmbr0,firewall=1,mtu=1000
qm set 9030 --scsi1 blockbridge-nvme:cloudinit
qm set 9030 --boot c --bootdisk scsi0
qm set 9030 --serial0 socket --vga virtio
qm set 9030 -ipconfig0 ip=dhcp
qm set 9030 --agent 1

Checking the MTU once the VM has booted, I can see that it is enforced:
Code:
ip -d link list eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1000 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether bc:24:14:a2:99:98 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 1000 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus virtio parentdev virtio4 
    altname enp0s18


 
I was curious, so I gave it a quick try:

Code:
qm create 9030 --memory 4096 --name vm9030 --sockets 1 --onboot no --cpu cputype=host
qm importdisk 9030 /mnt/pve/bbnas/template/qcow/ubuntu-24.04-noble-server-cloudimg-amd64.img blockbridge-nvme --format raw
qm set 9030 --scsihw virtio-scsi-single --scsi0 blockbridge-nvme:vm-9030-disk-0,aio=native,iothread=1
qm set 9030 -net0 virtio,bridge=vmbr0,firewall=1,mtu=1000
qm set 9030 --scsi1 blockbridge-nvme:cloudinit
qm set 9030 --boot c --bootdisk scsi0
qm set 9030 --serial0 socket --vga virtio
qm set 9030 -ipconfig0 ip=dhcp
qm set 9030 --agent 1

Checking the MTU once the VM has booted, I can see that it is enforced:
Code:
ip -d link list eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1000 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether bc:24:14:a2:99:98 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 1000 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus virtio parentdev virtio4
    altname enp0s18


I quickly tested again with my PBS VM. This time, I tested twice, with a device MTU of 1499 and of 1501.

I checked whether the generated network-config file changes depending on the device MTU value by mounting /dev/sr0 (the cloud-init drive).
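Roughly, that check looks like this inside the guest (the mount point is arbitrary and the drive may show up under a different device name):
Code:
# mount the cloud-init drive read-only and inspect the generated files
mkdir -p /mnt/cidata
mount -o ro /dev/sr0 /mnt/cidata
cat /mnt/cidata/network-config
umount /mnt/cidata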
Regardless of whether the MTU is 1499 or 1501, the network-config file is (re)generated as:
Code:
version: 1
config:
    - type: physical
      name: eth0
      mac_address: '[REDACTED]'
      subnets:
      - type: static
        address: '172.16.xxx.yyy'
        netmask: '255.255.255.0'
        gateway: '172.16.xxx.zzz'
    - type: nameserver
      address:
      - '192.168.aaa.bbb'
      search:
      - '[REDACTED]'

However, the interface MTU values are different. If the device MTU is set below 1500, CI (or maybe the underlying 'renderers') honors the device MTU WITHOUT an explicit MTU configuration. That is, when the device MTU is 1499, ip -d link list eth0 prints maxmtu as 1499 and mtu as 1499.
Conversely, if the device MTU is set above 1500, CI just creates an interface with MTU 1500. In my test case, ip -d link list eth0 prints maxmtu as 1501 but mtu as 1500.

So it seems to be a problem in situations where jumbo frames are used.
 