[SOLVED] Container status unknown after pve5->pve6 | blkio.throttle.io_service_bytes_recursive' - No such file or directory (500)

mikkelll

Member
Jun 30, 2020
Hi,

After upgrading from 5.3 -> 5.4 -> 6.2 I have now seen this error in different places.
Code:
can't open '/sys/fs/cgroup/blkio///lxc/101/blkio.throttle.io_service_bytes_recursive' - No such file or directory (500)

In the web GUI the status of my containers is unknown, including new containers created after the upgrade. The containers can still start and are available/accessible.

I also see this error when running pct status 101.

I'm really no expert in Proxmox, and I can't relate this error to anything else on the forum. If any more info is needed, just say the word.

Thank you for reading.

Code:
root@pve:/# ls -al /sys/fs/cgroup/blkio///lxc/101
total 0
drwxr-xr-x 3 root root   0 Jul  1 18:13 .
drwxr-xr-x 6 root root   0 Jun 30 08:29 ..
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_merged
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_merged_recursive
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_queued
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_queued_recursive
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_service_bytes
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_service_bytes_recursive
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_serviced
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_serviced_recursive
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_service_time
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_service_time_recursive
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_wait_time
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.io_wait_time_recursive
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.leaf_weight
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.leaf_weight_device
--w------- 1 root root   0 Jul  1 18:13 blkio.reset_stats
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.sectors
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.sectors_recursive
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.throttle.io_service_bytes
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.throttle.io_serviced
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.throttle.read_bps_device
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.throttle.read_iops_device
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.throttle.write_bps_device
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.throttle.write_iops_device
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.time
-r--r--r-- 1 root root   0 Jul  1 18:13 blkio.time_recursive
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.weight
-rw-r--r-- 1 root root   0 Jul  1 18:13 blkio.weight_device
-rw-r--r-- 1 root root   0 Jul  1 18:13 cgroup.clone_children
-rw-r--r-- 1 root root   0 Jul  1 18:13 cgroup.procs
-rw-r--r-- 1 root root   0 Jul  1 18:13 notify_on_release
drwxrwxr-x 2 root 100000 0 Jul  1 17:18 ns
-rw-r--r-- 1 root root   0 Jul  1 18:13 tasks

Code:
root@pve:/# pveversion --verbose
proxmox-ve: 6.2-1 (running kernel: 4.15.18-10-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

In the WebGUI:
Screenshot from 2020-07-01 18-16-33.png
 

hi,

what does ls -lai /sys/fs/cgroup/blkio/lxc/ return?

do you see any directories like 101-1 ?
 
I do not see anything like 101-1
Code:
root@pve:~# ls -lai /sys/fs/cgroup/blkio/lxc/
total 0
2883 drwxr-xr-x 5 root root 0 Jul  2 14:01 .
   1 dr-xr-xr-x 6 root root 0 Jul  2 09:08 ..
3587 drwxr-xr-x 3 root root 0 Jul  2 09:21 101
2915 drwxr-xr-x 3 root root 0 Jul  2 09:08 103
3043 drwxr-xr-x 3 root root 0 Jul  2 09:08 104
2905 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_merged
2913 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_merged_recursive
2906 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_queued
2914 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_queued_recursive
2901 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_service_bytes
2909 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_service_bytes_recursive
2902 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_serviced
2910 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_serviced_recursive
2903 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_service_time
2911 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_service_time_recursive
2904 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_wait_time
2912 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.io_wait_time_recursive
2898 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.leaf_weight
2897 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.leaf_weight_device
2888 --w------- 1 root root 0 Jul  2 14:01 blkio.reset_stats
2900 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.sectors
2908 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.sectors_recursive
2893 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.throttle.io_service_bytes
2894 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.throttle.io_serviced
2889 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.throttle.read_bps_device
2891 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.throttle.read_iops_device
2890 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.throttle.write_bps_device
2892 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.throttle.write_iops_device
2899 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.time
2907 -r--r--r-- 1 root root 0 Jul  2 14:01 blkio.time_recursive
2896 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.weight
2895 -rw-r--r-- 1 root root 0 Jul  2 14:01 blkio.weight_device
2885 -rw-r--r-- 1 root root 0 Jul  2 14:01 cgroup.clone_children
2884 -rw-r--r-- 1 root root 0 Jul  2 14:01 cgroup.procs
2887 -rw-r--r-- 1 root root 0 Jul  2 14:01 notify_on_release
2886 -rw-r--r-- 1 root root 0 Jul  2 14:01 tasks
 
hmm okay.

i see here: running kernel: 4.15.18-10-pve

seems like you've upgraded the packages but haven't rebooted for the new kernel to take effect?
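
for reference, a quick way to compare the running kernel with the images installed on disk (standard commands, output will of course differ per system):
Code:
# kernel currently running
uname -r
# kernel images installed in /boot (the new pve-kernel-5.4 image should show up here)
ls -1 /boot/vmlinuz-*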
 
As you can see below, I rebooted this morning.
Hmm.. anything I can do to make the new kernel take effect then?

- thanks a lot for your ideas btw.

Code:
root@pve:~# last reboot
reboot   system boot  4.15.18-10-pve   Thu Jul  2 09:08   still running
reboot   system boot  4.15.18-10-pve   Tue Jun 30 08:29 - 09:06 (2+00:37)
reboot   system boot  4.15.18-10-pve   Mon Jun 29 22:37 - 23:56  (01:19)
reboot   system boot  4.15.18-10-pve   Mon Jun 29 22:12 - 22:35  (00:23)
reboot   system boot  4.15.18-10-pve   Mon Jun  8 17:02 - 22:06 (21+05:03)
reboot   system boot  4.15.18-10-pve   Mon Jun  8 16:32 - 17:00  (00:28)
reboot   system boot  4.15.18-10-pve   Mon Jun  8 15:20 - 16:10  (00:50)

wtmp begins Thu Jun  4 22:10:29 2020
 
Hmm.. anything I can do to make the new kernel take effect then?
normally it will update the initramfs and fix the bootloader entry automatically with the apt hooks during the installation.
are you using UEFI boot? run ls /sys/firmware/efi/vars/: if the directory has any files, it means yes.

if yes, then try running pve-efiboot-tool kernel list to see the available/loaded kernels. check the output to see which kernel is selected.

pve-efiboot-tool refresh will copy found kernels and create boot entries.


afterwards you should be booting into the new kernel.
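
for reference, the checks described above as one sequence (stock PVE 6.x tools; output depends on your setup):
Code:
# non-empty output means the system was booted via UEFI
ls /sys/firmware/efi/vars/
# list the kernels known to the boot tool and see which are selected
pve-efiboot-tool kernel list
# copy the found kernels to the ESP(s) and recreate the boot entries
pve-efiboot-tool refresh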
 
in that case you can just try running update-grub and rebooting
 
Maybe it is obvious what to do next?
I ran update-grub and rebooted again afterwards, but I still get the same error.

Code:
root@pve:~# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.44-1-pve
Found initrd image: /boot/initrd.img-5.4.44-1-pve
Found linux image: /boot/vmlinuz-4.15.18-30-pve
Found initrd image: /boot/initrd.img-4.15.18-30-pve
Found linux image: /boot/vmlinuz-4.15.18-10-pve
Found initrd image: /boot/initrd.img-4.15.18-10-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done

Code:
root@pve:~# pveversion --verbose
proxmox-ve: 6.2-1 (running kernel: 4.15.18-10-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1+b1
(...)
(...)
 
can you post the output of find /boot ?
 
huh!

you have: /boot/grub/x86_64-efi and /boot/grub/i386-pc/. so there are two installations of grub in /boot (not sure how that happened), and i guess the one you're booting from isn't being updated correctly by the apt hook.

can you also post the contents of your /boot/grub/grub.cfg ? most likely there's something which isn't being updated correctly to boot the new vmlinuz and initrd
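
if you want to have a quick look yourself first, plain grep is enough to see which kernels that config actually references (illustrative only, nothing PVE-specific):
Code:
# menu entries grub has generated
grep -n 'menuentry ' /boot/grub/grub.cfg
# kernel and initrd lines those entries point at
grep -nE 'vmlinuz|initrd' /boot/grub/grub.cfg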
 
If only I remembered what I did. It (probably) has something to do with me installing an SSD in the bay where the optical drive used to be. Since I couldn't boot from it, I think there is some kind of 'redirect' from another (bootable) flash drive. Maybe that doesn't make any sense. :)

https://pastebin.com/Efi03cbr
 
I think there is some kind of 'redirect' from another (bootable) flash drive.

so which drive are you booting from? if there's something like that, that explains why it's stuck with the old kernel (since grub there isn't being updated)

can you post lsblk -f ?
 
Code:
root@pve:~# lsblk -f
NAME                         FSTYPE      LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda                          zfs_member                                                             
├─sda1                       zfs_member                                                             
└─sda2                       zfs_member  Harry 824699430555318541                                   
sdb                          zfs_member                                                             
├─sdb1                       zfs_member                                                             
└─sdb2                       zfs_member  Harry 824699430555318541                                   
sdc                                                                                                 
└─sdc1                       ext2              2961d709-c385-42ee-8678-391b9fa05864                 
sdd                          zfs_member                                                             
├─sdd1                       zfs_member                                                             
└─sdd2                       zfs_member  Harry 824699430555318541                                   
sde                          zfs_member                                                             
├─sde1                       zfs_member                                                             
└─sde2                       zfs_member  Harry 824699430555318541                                   
sdf                                                                                                 
├─sdf1                                                                                               
├─sdf2                       vfat              32AB-FD71                                             
└─sdf3                       LVM2_member       lYge4t-C2ry-Dxvu-EaHf-dK87-bc0H-QN2Vi3               
  ├─pve-swap                 swap              3c7682dd-0301-4c93-9867-f862ccfc1544                  [SWAP]
  ├─pve-root                 ext4              ceae415b-45f1-4fcb-a46b-a32c0f4d0e6c     39.2G    23% /
  ├─pve-data_tmeta                                                                                   
  │ └─pve-data-tpool                                                                                 
  │   ├─pve-data                                                                                     
  │   ├─pve-vm--102--disk--0 ext4              677aca8d-784e-42c5-a4ce-4506143e43c0                 
  │   ├─pve-vm--102--disk--1 ext4              7782f04f-d0cb-4e98-855e-66fb3e360ae9                 
  │   ├─pve-vm--103--disk--0 ext4              5134f59d-0d19-40ea-a07d-bf101ef1e523                 
  │   └─pve-vm--101--disk--0 ext4              e2c209b3-9749-436b-b3c6-0efd43c8e550                 
  └─pve-data_tdata                                                                                   
    └─pve-data-tpool                                                                                 
      ├─pve-data                                                                                     
      ├─pve-vm--102--disk--0 ext4              677aca8d-784e-42c5-a4ce-4506143e43c0                 
      ├─pve-vm--102--disk--1 ext4              7782f04f-d0cb-4e98-855e-66fb3e360ae9                 
      ├─pve-vm--103--disk--0 ext4              5134f59d-0d19-40ea-a07d-bf101ef1e523                 
      └─pve-vm--101--disk--0 ext4              e2c209b3-9749-436b-b3c6-0efd43c8e550                 
zd0                                                                                                 
├─zd0p1                                                                                             
└─zd0p2                      ext4              91cdcad4-14ba-4c63-8f83-860463d793c2
 
/dev/sdc1 sticks out the most to me. can you mount it and see what's inside? maybe that's your "boot drive"...
Code:
mkdir /mnt/test
mount /dev/sdc1 /mnt/test
find /mnt/test

you don't need to post the output here, just check whether there is any boot-related stuff in there.

not sure how you set up your "redirect", but if you're not booting from the correct drive, then it's likely that you'll need to update your "redirect" or fix the booting problem.
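
for example, something along these lines would show whether that partition carries its own grub installation (assuming the mount point from the commands above):
Code:
# assumes /dev/sdc1 is still mounted at /mnt/test
ls /mnt/test/boot/grub/
# look for a grub config and kernel images on that partition
find /mnt/test -maxdepth 3 \( -name 'grub.cfg' -o -name 'vmlinuz-*' \)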
 
Plenty of boot/grub/i386-pc/ stuff in /dev/sdc1

I guess that is not much of a pve issue... :)
I'll look into it then. Thanks for the help and getting me this far.
 
great. you're welcome!

once the new kernel is booted the error message should disappear. (it's actually just failing to fetch the i/o stats: because of the new package versions it's trying to use the newer cgroup layout, but you're still running an older kernel which doesn't provide those files)
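
for reference, once the 5.4 kernel is actually running you can verify it with the same path from the original error message (illustrative; the exact cgroup layout can differ):
Code:
# the file the status query was failing on should exist again
ls -l /sys/fs/cgroup/blkio/lxc/101/blkio.throttle.io_service_bytes_recursive
# and the container status should be reported normally
pct status 101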
 
Man.. thanks again! As you predicted, it worked once the kernel was updated, and it works great now! Instead of update-grub I ran the following on the mounted drive (plus a reboot):
Code:
root@pve:/boot# cd /mnt/test/
root@pve:/mnt/test# grub-mkconfig -o boot/grub/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.44-1-pve
Found initrd image: /boot/initrd.img-5.4.44-1-pve
Found linux image: /boot/vmlinuz-4.15.18-30-pve
Found initrd image: /boot/initrd.img-4.15.18-30-pve
Found linux image: /boot/vmlinuz-4.15.18-10-pve
Found initrd image: /boot/initrd.img-4.15.18-10-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
 

great, just remember that you will need to do this every time there's a kernel update (with this setup at least)
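
i.e. something along these lines after each kernel update (device and mount point taken from the posts above; adjust if yours differ):
Code:
# mount the drive grub actually boots from
mount /dev/sdc1 /mnt/test
# regenerate its grub config so it picks up the new vmlinuz/initrd
grub-mkconfig -o /mnt/test/boot/grub/grub.cfg
umount /mnt/test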
 
