Proxmox reporting empty disk reads/writes for LXC containers

freak12techno

Greetings. I have an issue with all of my LXC containers: they report 0 disk reads/writes, as on the screenshot here:
[screenshot: Disk IO graph showing zero reads/writes]
I am using prometheus-pve-exporter to monitor my Proxmox containers, and it takes the data from the same API the UI uses, which basically makes it impossible for me to build alerts on high disk reads/writes.
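
A quick way to cross-check that the exporter and the UI really see the same zeroes is to look at the raw API values directly (a sketch; the exporter port and path below are common defaults for prometheus-pve-exporter and may differ in your setup):
Bash:
# Raw guest status as exposed by the PVE API (diskread/diskwrite are cumulative bytes):
pvesh get /nodes/localhost/lxc/102/status/current

# What the exporter scrapes (adjust host, port and target to your configuration):
curl -s 'http://localhost:9221/pve?target=localhost' | grep -i disk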

From what I understand, these metrics are taken from cgroups, and apparently it is the cgroup itself that is not reporting any disk reads/writes:
Bash:
proxmox@proxmox-1 ~ ❯ cat /sys/fs/cgroup/lxc/102/io.stat
8:48 rbytes=26578944 wbytes=0 rios=1426 wios=0 dbytes=0 dios=0
252:4 rbytes=26578944 wbytes=0 rios=1426 wios=0 dbytes=0 dios=0
and the rbytes/wbytes values do not change over time, even though the container certainly did some disk reads/writes.
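
To rule out a sampling coincidence, two snapshots of the counters can be diffed a minute apart (a minimal sketch; adjust the VMID):
Bash:
cat /sys/fs/cgroup/lxc/102/io.stat > /tmp/io_before
sleep 60
diff /tmp/io_before /sys/fs/cgroup/lxc/102/io.stat && echo "counters did not change"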

I also found a somewhat relevant issue: https://forum.proxmox.com/threads/disk-i-o-graph-is-empty.134728/, but I am not using ZFS for my containers (each of my containers uses a single mount point on a separate SSD; additionally, I have 2 SSDs in RAID 1 that the Proxmox instance itself boots from).

I am not sure where to dig further or whether anyone has faced the same issue; can someone assist?
 
Hi,
the disk IO stats are indeed taken from the cgroup and written into the RRD database by pvestatd, see [0] and [1].

Is pvestatd up and running (systemctl status pvestatd.service)?
What is your Proxmox VE version (pveversion -v)? Please also share the container config (pct config 102 --current) and your storage config (cat /etc/pve/storage.cfg).
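
For convenience, the requested checks in one place; the RRD file path in the last command is my assumption of the default location, and rrdtool may need to be installed separately:
Bash:
systemctl status pvestatd.service
pveversion -v
pct config 102 --current
cat /etc/pve/storage.cfg

# Optionally peek at the RRD file pvestatd writes for the container:
rrdtool info /var/lib/rrdcached/db/pve2-vm/102 | head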

[0] https://git.proxmox.com/?p=pve-mana...61481ccc4d47b5eaa3644dd17bff1fd3;hb=HEAD#l442
[1] https://git.proxmox.com/?p=pve-cont...90a2d940f0796e89a83aa48cadd8501d;hb=HEAD#l266

Edit: Also, please try to adjust your timespan to something smaller, e.g. day or hour, in order to see if there is truly no data there.
 
Regarding the timespan: these values have been missing for at least the last month. I am getting this on all of the containers on this node (5 LXC containers)
[screenshot: empty Disk IO graph over the last month]
except this one (data on both screenshots is for the last month):
[screenshot: Disk IO graph with some data over the last month]

Regarding everything else:
Bash:
proxmox@proxmox-1 ~ ❯ sudo systemctl status pvestatd.service
[sudo] password for proxmox:
● pvestatd.service - PVE Status Daemon
     Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; preset: enabled)
     Active: active (running) since Sat 2023-12-02 07:04:09 MSK; 1 month 23 days ago
   Main PID: 2020 (pvestatd)
      Tasks: 1 (limit: 154218)
     Memory: 110.6M
        CPU: 9h 21min 38.438s
     CGroup: /system.slice/pvestatd.service
             └─2020 pvestatd

Jan 14 13:00:15 proxmox-1 pvestatd[2020]: auth key pair too old, rotating..
Jan 17 03:15:59 proxmox-1 systemd[1]: Reloading pvestatd.service - PVE Status Daemon...
Jan 17 03:15:59 proxmox-1 pvestatd[2961532]: send HUP to 2020
Jan 17 03:15:59 proxmox-1 pvestatd[2020]: received signal HUP
Jan 17 03:15:59 proxmox-1 pvestatd[2020]: server shutdown (restart)
Jan 17 03:15:59 proxmox-1 systemd[1]: Reloaded pvestatd.service - PVE Status Daemon.
Jan 17 03:15:59 proxmox-1 pvestatd[2020]: restarting server
Jan 21 13:00:15 proxmox-1 pvestatd[2020]: auth key pair too old, rotating..
Jan 22 13:00:15 proxmox-1 pvestatd[2020]: auth key pair too old, rotating..
Notice: journal has been rotated since unit was started, output may be incomplete.
proxmox@proxmox-1 ~ ❯ sudo pveversion -v
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_CTYPE = "UTF-8",
    LC_TERMINAL = "iTerm2",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
proxmox-ve: 8.1.0 (running kernel: 6.5.11-6-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
pve-kernel-5.15: 7.4-4
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-5-pve-signed: 6.5.11-5
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2.16-18-pve: 6.2.16-18
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
proxmox@proxmox-1 ~ ❯ sudo pct config 102 --current
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_CTYPE = "UTF-8",
    LC_TERMINAL = "iTerm2",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
arch: amd64
cores: 1
features: nesting=1
hostname: decentr-validator
memory: 8192
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=F2:ED:0D:5F:5F:1F,ip=dhcp,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: disk2:vm-102-disk-0,size=200G
swap: 512
unprivileged: 1
proxmox@proxmox-1 ~ ❯ sudo cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content backup,vztmpl,iso

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

lvm: disk1
    vgname disk1
    content rootdir,images
    nodes proxmox-1
    shared 1

lvm: disk2
    vgname disk2
    content rootdir,images
    nodes proxmox-1
    shared 1

lvm: disk3
    vgname disk3
    content rootdir,images
    nodes proxmox-1
    shared 1

lvm: disk4
    vgname disk4
    content images,rootdir
    nodes proxmox-1
    shared 1

lvm: disk5
    vgname disk5
    content rootdir,images
    nodes proxmox-3
    shared 1

lvm: disk6
    vgname disk6
    content images,rootdir
    nodes proxmox-3
    shared 1

lvm: disk7
    vgname disk7
    content rootdir,images
    nodes proxmox-3
    shared 1

lvm: disk8
    vgname disk8
    content images,rootdir
    nodes proxmox-3
    shared 1

lvm: disk9
    vgname disk9
    content rootdir,images
    nodes proxmox-2
    shared 0

lvm: disk10
    vgname disk10
    content rootdir,images
    nodes proxmox-2
    shared 0

lvm: disk11
    vgname disk11
    content rootdir,images
    nodes proxmox-2
    shared 0

lvm: disk12
    vgname disk12
    content rootdir,images
    nodes proxmox-2
    shared 0

lvm: nvme9
    vgname nvme9
    content rootdir,images
    nodes proxmox-2
    shared 0

lvm: nvme5
    vgname nvme5
    content rootdir,images
    nodes proxmox-3
    shared 0

lvm: nvme10
    vgname nvme10
    content rootdir,images
    nodes proxmox-2
    shared 0

lvm: nvme2
    vgname nvme2
    content images,rootdir
    nodes proxmox-1
    shared 0

lvm: nvme3
    vgname nvme3
    content rootdir,images
    nodes proxmox-1
    shared 0

And just in case you need it, here is the config of the container that does report some disk reads/writes:
Bash:
proxmox@proxmox-1 ~ ❯ sudo pct config 112 --current
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_CTYPE = "UTF-8",
    LC_TERMINAL = "iTerm2",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
arch: amd64
cores: 6
features: nesting=1
hostname: jackal-validator
memory: 32768
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:65:BC:94,ip=dhcp,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: nvme2:vm-112-disk-0,size=450G
swap: 4096
unprivileged: 1
 
Thank you for the outputs; it seems that this might be related to the underlying storage. I see that the LXC on storage disk2 does not show any IO, while the one on storage nvme2 does.

Please verify that this is related to the storage and not the container, e.g. by migrating the container's rootfs to the nvme2 storage. How are the shared LVMs set up?
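
Such a migration could look like this (a sketch, assuming the target storage has enough free space; on current Proxmox VE the subcommand is pct move-volume, older releases call it pct move_volume):
Bash:
# Move the rootfs of CT 102 to the nvme2 storage:
pct move-volume 102 rootfs nvme2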
 
Just to add: the container with its storage on `nvme2` does not show correct values either. It should have had far more disk reads/writes, but it only shows a few spikes, and for the last week it has reported 0 disk reads/writes, which is definitely not true.

I am unfortunately unable to migrate it, as my nvme2 storage is completely full. Is there anything else I can do to investigate?

> How are the shared LVMs set up?

What do you mean?
 
> Is there anything else I can do to investigate?
Please check what raw data is returned when fetching the RRD data via the API: pvesh get /nodes/localhost/lxc/102/rrddata --timeframe hour.

> What do you mean?
I meant: is this an LVM created directly on a disk, or are you using iSCSI or the like? I was wondering especially since you flagged the storage as shared, but only one node is listed as having access to it.
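
To make the rrddata output easier to scan, it can be reduced to just the disk counters (a sketch, assuming jq is installed):
Bash:
pvesh get /nodes/localhost/lxc/102/rrddata --timeframe hour --output-format json \
  | jq '.[] | {time, diskread, diskwrite}'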
 
For both of the containers (the one missing the data completely for the last month, and the one that has some): https://pastebin.com/Ra43a8yi (I can't post it here as it's too long). It seems that both diskread and diskwrite are empty on both, while all other metrics are present.

> I meant: is this an LVM created directly on a disk, or are you using iSCSI or the like? I was wondering especially since you flagged the storage as shared, but only one node is listed as having access to it.

Yeah, to my understanding it's LVM storage, one storage per physical disk (used for containers; Proxmox itself runs on 2 RAID 1 SSDs). As for flagging the storage as shared, I don't remember actually setting that, maybe it was a default value, but it is indeed something I am not using.
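
If the shared flag is unintended, it can be cleared per storage (a sketch; whether it has any influence on the missing stats is not established here):
Bash:
pvesm set disk2 --shared 0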
 
I tried to reproduce your issue on LVM, without success. Here is what I did so you can try to reproduce it (the commands are consolidated below the list):
  1. Created a fresh LVM-backed storage on a fresh disk.
  2. Moved a test LXC container's rootfs onto the newly created storage.
  3. Installed fio inside the container.
  4. Monitored the container's IO on the host via watch -n 1 -d cat /sys/fs/cgroup/lxc/101/io.stat; note the VMID in the path, which must be adapted to yours.
  5. Produced some write IO inside the LXC via fio --name=randwrite --ioengine=libaio --rw=randwrite --direct=0 --size=1G.
  6. Checked the collected stats, which might take some time to catch up, via pvesh get /nodes/localhost/lxc/101/rrddata --timeframe hour; again, note the VMID.
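
The same steps as a command sequence (run on the host unless noted; installing fio via apt assumes a Debian/Ubuntu container, and the VMID must be adapted):
Bash:
# On the host, in a separate terminal: watch the container's cgroup IO counters
watch -n 1 -d cat /sys/fs/cgroup/lxc/101/io.stat

# Inside the container: install fio and generate some write IO
apt install -y fio
fio --name=randwrite --ioengine=libaio --rw=randwrite --direct=0 --size=1G

# Back on the host: check the collected stats (they may take a while to catch up)
pvesh get /nodes/localhost/lxc/101/rrddata --timeframe hour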
 
Okay, I tried this:
- when monitoring io.stat, I can see the rbytes/wbytes values changing
- the CT stats in Proxmox display correctly:
[screenshot: Disk IO graph showing the expected activity]
- pvesh seems to also have this data: https://pastebin.com/NN7iiKz3

I am not sure what the difference is between this CT and the other ones I created before, as I can't recall touching them in any way, and all of them should have at least some disk reads/writes. For example, this CT pushes around 1 TB of bandwidth a day and should therefore produce a lot of disk reads/writes, yet it shows all zeroes:
[screenshot: Disk IO graph showing all zeroes]
One thing to note: all other metrics (such as CPU/RAM/bandwidth) are stored normally; it's only disk reads/writes I have issues with.
 
Can you try installing fio inside one of these containers as well and produce some IO load for testing? Maybe write caches are absorbing the IO, so you do not see it.
 
Just did, on the CT with id=112 (the one that was reporting some activity previously): it does not seem to register these new reads/writes at all. The output of io.stat for this container stays the same, and consequently the RRD does not pick up any of this activity to show on the Disk IO graph:

[screenshot: unchanged Disk IO graph for CT 112]

I also ran fio once more on the CT with id=115 (the one I used before), just to be sure this is not a problem that appears only after some time: its cgroup info is still being updated, so io.stat changes as expected.
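
One more low-level check that might be worth doing (my own suggestion, not something requested above): verify that the io controller is actually enabled for the affected container's cgroup, since io.stat only accumulates while the controller is active.
Bash:
# Controllers available in CT 112's cgroup, and the ones delegated to children of /lxc:
cat /sys/fs/cgroup/lxc/112/cgroup.controllers
cat /sys/fs/cgroup/lxc/cgroup.subtree_control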
 
@Chris any ideas what else I can do to investigate why this keeps happening?
Since you now have 2 containers using the same storage, one producing the desired output while the other one does not, I would try to figure out the differences between these 2 containers (pct config, OS version inside the container, config, etc.) and see if and when the IO stats disappear for the test container.
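
A quick way to compare the two (a sketch using the VMIDs from this thread, 112 and 115):
Bash:
# Configuration differences between the non-working and the working container:
diff <(pct config 112 --current) <(pct config 115 --current)

# OS release inside each container:
pct exec 112 -- cat /etc/os-release
pct exec 115 -- cat /etc/os-release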
 
I don't want to divert the topic of the original post, but I wanted to mention that I'm currently experiencing a similar issue. I am unable to see write IO from any of my LXCs. I have read through all the replies and have even tried some tests, like moving the LXC to a different server and container, but the issue persists. Both VMs and LXCs are on the same container, and the only IO that is not being reported is the write IO from the LXCs.

Any suggestion would be appreciated and I hope it can help the OP at the same time...

 
@Chris do you have any tips on where I should go from here? I'm not sure I have enough experience to know what to check, so I am kind of stuck.
As already suggested, check for differences in pct config (e.g. you have the nesting option enabled, check if that has an influence), the OS version inside the container, configs inside the container (are you writing to the expected disk or maybe to a ramfs/tmpfs?), etc.
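
The ramfs/tmpfs hypothesis can be checked from the host (a sketch; replace 112 and the example path with the container and directory your workload actually writes to):
Bash:
# Filesystem type backing the path the application writes to:
pct exec 112 -- df -hT /var/lib

# Any tmpfs/ramfs mounts inside the container:
pct exec 112 -- sh -c 'mount | grep -E "tmpfs|ramfs"'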
 
Did you already try to do intensive sync writes to the disk via fio, as suggested above? Further, you will have to provide the outputs requested above from your side as well in order for us to be able to help.

When you say you see no write IO, does that mean you do see read IO, or none at all?
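
For the sync-write test, a variant of the earlier fio command that bypasses the page cache and fsyncs every write could look like this (parameters are illustrative, not taken from this thread):
Bash:
fio --name=syncwrite --ioengine=libaio --rw=randwrite --direct=1 --fsync=1 --bs=4k --size=1G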
 
Hi Chris,

Yes, I already did... What puzzled me, and it was a mistake on my part not to mention it in my previous message, is that everything was working fine up until I upgraded the RAM. Once I powered the server back on and all the VMs and LXCs came back online, the write IO from the LXCs was no longer being reported. Everything else works as intended (and yes, even the read IO from the LXCs).

If there's any specific debugging you would like me to do, please let me know. Anything to help with this situation in any way.

Oh, I forgot... the only way to get the LXC write IO to report again is with a new container, which uses the same config as all the other ones (privilege, nesting, etc.) and sits on the same storage.

Cheers!
 
Hi,
that is odd, but bad RAM can have a lot of strange side effects, so please run an extensive memory test.
Further, as requested, please provide more details about your setup, e.g. your Proxmox VE version (pveversion -v), the storage config (cat /etc/pve/storage.cfg), and the container config (pct config <VMID>). Is the issue related to the kernel version? Try booting an older kernel and see if the issue persists.
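
Booting an older kernel could be done along these lines (a sketch; proxmox-boot-tool kernel pin exists on current Proxmox VE releases, and the version string must match one actually listed on your system, 6.5.11-8-pve is only an example):
Bash:
# List installed kernels and pin an older one for subsequent boots:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.5.11-8-pve

# Remove the pin again once done testing:
proxmox-boot-tool kernel unpin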
 
Hi Chris,

Here's the output you requested...

Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl
        shared 0

zfspool: VMs
        pool VMs
        content images,rootdir
        mountpoint /VMs
        nodes GLRPVESRV,GLRPVESRV-TEST
        sparse 1

nfs: VMBackup
        export /volume1/VMBackup
        path /mnt/pve/VMBackup
        server xxx.xxx.xxx.xxx
        content backup

Code:
LXC 101
arch: amd64
cores: 1
description: <div align='center'><img src='https%3A//icons.iconarchive.com/icons/thebadsaint/my-mavericks-part-2/128/Plex-icon.png'/>%0A%0A# Plex Media Server%0A
features: mount=nfs
hostname: GLRPlex
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=2E:31:2D:87:D9:55,ip=dhcp,type=veth
onboot: 1
ostype: debian
parent: Stable_Nov15
rootfs: VMs:subvol-101-disk-0,size=64G
startup: order=3
swap: 2048
tags: plex-media-server;production

LXC 102
arch: amd64
cores: 2
description: <div align='center'><a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https%3A//raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A# HomebridgeOS%0A
hostname: GLRHBOS
memory: 1024
net0: name=eth0,bridge=vmbr0,hwaddr=DE:A9:D3:0D:14:62,ip=dhcp,type=veth
onboot: 1
ostype: debian
parent: Stable_Nov15
rootfs: VMs:subvol-102-disk-0,size=64G
startup: order=3
swap: 2048
tags: homebridge-server;production

LXC 103
arch: amd64
cores: 2
description: <div align='center'><a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https%3A//raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A# Invoice Ninja%0A
features: nesting=1
hostname: GPInvNinja
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:70:1D:D2,ip=dhcp,type=veth
onboot: 1
ostype: debian
parent: InvNinja_FU
rootfs: VMs:subvol-103-disk-0,size=64G
startup: order=3
swap: 1024
tags: invoice-ninja;production
unprivileged: 1

LXC 105
arch: amd64
cores: 2
description: <div align='center'><a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https%3A//raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A# InfluxDB LXC%0A
features: nesting=1
hostname: GLRInfluxDB
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:DD:23:92,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: VMs:subvol-105-disk-0,size=64G
startup: order=1
swap: 512
tags: influxdb;production
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id  dev/serial/by-id  none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0       dev/ttyUSB0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1       dev/ttyUSB1       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0       dev/ttyACM0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1       dev/ttyACM1       none bind,optional,create=file

LXC 107
arch: amd64
cores: 1
description: <div align='center'><a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https%3A//raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A  # Grafana LXC%0A
features: nesting=1
hostname: GLRGrafana
memory: 1024
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:3C:E0:33,ip=dhcp,type=veth
onboot: 1
ostype: alpine
rootfs: VMs:subvol-107-disk-0,size=16G
startup: order=3
swap: 512
tags: grafana-server;production
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id  dev/serial/by-id  none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0       dev/ttyUSB0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1       dev/ttyUSB1       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0       dev/ttyACM0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1       dev/ttyACM1       none bind,optional,create=file

If there's anything else I can provide or that you want me to run locally to help troubleshoot, let me know.

Cheers!
 
