Proxmox VE 7.1 released!

Good day,

After updating, I've found that my Linux VMs randomly freeze when doing a write-heavy operation (apt update or a package installation). This seems to be random. Is there a way to fix it at the moment? (the latest kernel...)

As an addendum: we're mostly running VirtIO SCSI.
 
Are the VMs with the issue virtio scsi or virtio block?
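If you're not sure which one a VM uses, you can check its config from the host; a minimal sketch (VMID 100 is just an example):

Code:
# Show the disk controller and attached disks for a VM
qm config 100 | grep -E '^(scsihw|scsi|virtio|sata|ide)'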
 
You can try downgrading QEMU, as the new QEMU 6.1 that shipped with PVE 7.1 may be what's causing your issues.

Code:
# Download old qemu
wget http://download.proxmox.com/debian/dists/bullseye/pve-no-subscription/binary-amd64/pve-qemu-kvm_6.0.0-4_amd64.deb

# Install old qemu
dpkg -i pve-qemu-kvm_6.0.0-4_amd64.deb

# Put the package on hold until we know it's fixed in 6.1
apt-mark hold pve-qemu-kvm
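Note that running VMs keep using the QEMU binary they were started with, so stop and start each affected VM after the downgrade. Once a fixed build is available, the hold can be released again; a minimal sketch:

Code:
# Release the hold and upgrade to the fixed package
apt-mark unhold pve-qemu-kvm
apt update && apt install pve-qemu-kvm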
 
I am seeing the same sort of issues on a clean 7.1-7 install. What I am experiencing is the PVE host freezing while transferring data to a Debian VM running Samba. It may run for 5 minutes, it may run for half an hour. Then it freezes, and I have not seen anything of value in the log; perhaps I am looking in the wrong location. I am using VirtIO SCSI on a RAID controller.
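For anyone debugging similar stalls, it may be worth scanning the kernel log around the time of the freeze; a minimal sketch (the grep patterns are just a starting point):

Code:
# Look for hung-task warnings and I/O errors in the kernel log
dmesg -T | grep -iE 'hung_task|blocked for more than|i/o error'
journalctl -k -b --no-pager | grep -iE 'hung|blocked|timeout'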
 
You can try downgrading QEMU, as the new QEMU 6.1 that shipped with PVE 7.1 may be what's causing your issues.
Done. For now I'll see if I get the same issues as before, as it's difficult to replicate (though anything that maxes out I/O seems to trigger it).
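In case it helps with reproducing, an fio write job is a quick way to max out guest I/O; a minimal sketch (size and job count are arbitrary):

Code:
# Hammer the guest disk with sequential writes (requires the fio package)
fio --name=iostress --rw=write --bs=1M --size=4G --numjobs=4 --direct=1 --group_reporting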
 
Update: the downgrade of QEMU to 6.0.0-4 did not fix the system; it froze again. The only improvement is that it took longer to do so. Back to the drawing board, or back to PVE 6.4.
 
I am using the latest qemu-server 7.1-3 with kernel 5.11.22-7-pve, and my virtual machines run stably (VirtIO SCSI). I don't dare switch to the latest kernel :)
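If you want to stay on a known-good kernel series, you can put its metapackage on hold; a minimal sketch, assuming the 5.11 series and a host that boots via proxmox-boot-tool:

Code:
# Keep apt from pulling in a newer kernel series
apt-mark hold pve-kernel-5.11
# List the kernels currently registered with the bootloader
proxmox-boot-tool kernel list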
 
I don't know whether it's correct to write here or to start a new topic; I apologize in advance.
I installed PVE on top of Debian. I started several LXCs and everything was fine; then, during setup, I updated the system (apt upgrade), and after rebooting, lxcfs broke down.
The symptoms are as follows: the LXC container can't start:

Code:
safe_mount: 1198 Operation not permitted - Failed to mount "proc" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs/proc"
lxc_mount_auto_mounts: 782 Operation not permitted - Failed to mount "proc" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/proc" with flags 14
lxc_setup: 3636 Failed to setup first automatic mounts
do_start: 1265 Failed to setup container "105"
sync_wait: 36 An error occurred in another process (expected sequence number 5)
__lxc_start: 2073 Failed to spawn container "105"
TASK ERROR: startup for container '105' failed

After a little digging, I found:
Bash:
# ls -la /var/lib/lxcfs/
ls: cannot access '/var/lib/lxcfs/cgroup': Input/output error
total 4
drwxr-xr-x  2 root root    0 Dec  1 15:10 .
drwxr-xr-x 40 root root 4096 Dec  1 14:45 ..
??????????  ? ?    ?       ?            ? cgroup
dr-xr-xr-x  2 root root    0 Dec  1 15:10 proc
dr-xr-xr-x  2 root root    0 Dec  1 15:10 sys


Bash:
# lxcfs -d -l /var/lib/lxcfs
Running constructor lxcfs_init to reload liblxcfs
mount namespace: 5
hierarchies:
  0: fd:   6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Kernel supports pidfds
api_extensions:
- cgroups
- sys_cpu_online
- proc_cpuinfo
- proc_diskstats
- proc_loadavg
- proc_meminfo
- proc_stat
- proc_swaps
- proc_uptime
- shared_pidns
- cpuview_daemon
- loadavg_daemon
- pidfds
FUSE library version: 2.9.9
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.34
flags=0x33fffffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 2, success, outsize: 40
unique: 4, opcode: LOOKUP (1), nodeid: 1, insize: 47, pid: 5907
LOOKUP /cgroup
getattr /cgroup
   unique: 4, error: -5 (Input/output error), outsize: 16
unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 5932
getattr /
   unique: 6, success, outsize: 120
unique: 8, opcode: GETXATTR (22), nodeid: 1, insize: 65, pid: 5932
   unique: 8, error: -38 (Function not implemented), outsize: 16
unique: 10, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 5932
opendir flags: 0x18800 /
   opendir[0] flags: 0x18800 /
   unique: 10, success, outsize: 32
unique: 12, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 5932
readdir[0] from 0
   unique: 12, success, outsize: 176
unique: 14, opcode: LOOKUP (1), nodeid: 1, insize: 45, pid: 5932
LOOKUP /proc
getattr /proc
   NODEID: 2
   unique: 14, success, outsize: 144
unique: 16, opcode: LOOKUP (1), nodeid: 1, insize: 44, pid: 5932
LOOKUP /sys
getattr /sys
   NODEID: 3
   unique: 16, success, outsize: 144
unique: 18, opcode: LOOKUP (1), nodeid: 1, insize: 47, pid: 5932
LOOKUP /cgroup
getattr /cgroup
   unique: 18, error: -5 (Input/output error), outsize: 16
unique: 20, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 5932
   unique: 20, success, outsize: 16
unique: 22, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
releasedir[0] flags: 0x0
   unique: 22, success, outsize: 16



Bash:
# pveversion
pve-manager/7.1-7/df5740ad (running kernel: 5.13.19-1-pve)
# lxcfs -v
4.0.8

I'm desperate; a little more and I'll start deploying from scratch. Can you tell me where else to look for errors?
 
Generally, lxcfs shouldn't get restarted, only reloaded. If you restart it (or start it manually like you did for debugging), all containers need to be restarted.
Did you get any errors during the upgrade process?
Do you see any journal messages when trying to start a container?
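For reference, once lxcfs is running again, something like this should bring a container back (assuming container 105; repeat for each affected container):

Bash:
# Make sure lxcfs is up, then start (or reboot) each affected container
systemctl restart lxcfs
pct start 105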
 
Sorry for my English.
By "after restarting" I meant rebooting the host PC.
There were no errors during the upgrade.
But I was wrong: the upgrade happened after lxcfs broke down. I'm now looking through dpkg.log and the messages log.
The last thing I installed before rebooting (2021-11-30 13:03) was apt-get -y install libc6:i386:
Code:
2021-11-29 13:43:21 status half-configured pve-ha-manager:amd64 3.3-1
2021-11-29 13:43:21 status installed pve-ha-manager:amd64 3.3-1
2021-11-30 13:03:16 startup archives install
2021-11-30 13:03:16 install haspd:amd64 <none> 7.90-eter2debian
2021-11-30 13:03:16 status half-installed haspd:amd64 7.90-eter2debian
2021-11-30 13:03:16 status triggers-pending man-db:amd64 2.9.4-2
2021-11-30 13:03:16 status unpacked haspd:amd64 7.90-eter2debian
2021-11-30 13:03:16 trigproc man-db:amd64 2.9.4-2 <none>
2021-11-30 13:03:16 status half-configured man-db:amd64 2.9.4-2
2021-11-30 13:03:16 status installed man-db:amd64 2.9.4-2
2021-11-30 13:03:39 startup archives unpack
2021-11-30 13:03:39 install libc6-i386:amd64 <none> 2.31-13+deb11u2
2021-11-30 13:03:39 status triggers-pending libc-bin:amd64 2.31-13+deb11u2
2021-11-30 13:03:39 status half-installed libc6-i386:amd64 2.31-13+deb11u2
2021-11-30 13:03:40 status unpacked libc6-i386:amd64 2.31-13+deb11u2
2021-11-30 13:03:40 startup packages configure
2021-11-30 13:03:40 configure libc6-i386:amd64 2.31-13+deb11u2 <none>
2021-11-30 13:03:40 status unpacked libc6-i386:amd64 2.31-13+deb11u2
2021-11-30 13:03:40 status half-configured libc6-i386:amd64 2.31-13+deb11u2
2021-11-30 13:03:40 status installed libc6-i386:amd64 2.31-13+deb11u2
2021-11-30 13:03:40 configure haspd:amd64 7.90-eter2debian <none>
2021-11-30 13:03:40 status unpacked haspd:amd64 7.90-eter2debian
2021-11-30 13:03:40 status half-configured haspd:amd64 7.90-eter2debian
2021-11-30 13:03:42 status installed haspd:amd64 7.90-eter2debian
2021-11-30 13:03:42 trigproc libc-bin:amd64 2.31-13+deb11u2 <none>
2021-11-30 13:03:42 status half-configured libc-bin:amd64 2.31-13+deb11u2
2021-11-30 13:03:42 status installed libc-bin:amd64 2.31-13+deb11u2
2021-11-30 13:03:50 startup archives install
2021-11-30 13:03:50 upgrade haspd:amd64 7.90-eter2debian 7.90-eter2debian
2021-11-30 13:03:50 status half-configured haspd:amd64 7.90-eter2debian
2021-11-30 13:03:52 status unpacked haspd:amd64 7.90-eter2debian
2021-11-30 13:03:52 status half-installed haspd:amd64 7.90-eter2debian
2021-11-30 13:03:52 status triggers-pending man-db:amd64 2.9.4-2
2021-11-30 13:03:52 status unpacked haspd:amd64 7.90-eter2debian
2021-11-30 13:03:52 configure haspd:amd64 7.90-eter2debian 7.90-eter2debian
2021-11-30 13:03:52 status half-configured haspd:amd64 7.90-eter2debian
2021-11-30 13:03:53 status installed haspd:amd64 7.90-eter2debian
2021-11-30 13:03:53 trigproc man-db:amd64 2.9.4-2 <none>
2021-11-30 13:03:53 status half-configured man-db:amd64 2.9.4-2
2021-11-30 13:03:53 status installed man-db:amd64 2.9.4-2
2021-11-30 14:40:53 startup archives unpack
2021-11-30 14:40:53 upgrade libpve-http-server-perl:all 4.0-3 4.0-4
 
Bash:
# lxc-start -n 105 -F --logfile=lxc_105.log --logpriority=debug
lxc-start: 105: utils.c: safe_mount: 1198 Operation not permitted - Failed to mount "proc" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs/proc"
lxc-start: 105: conf.c: lxc_mount_auto_mounts: 782 Operation not permitted - Failed to mount "proc" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/proc" with flags 14
lxc-start: 105: conf.c: lxc_setup: 3636 Failed to setup first automatic mounts
lxc-start: 105: start.c: do_start: 1265 Failed to setup container "105"
lxc-start: 105: sync.c: sync_wait: 36 An error occurred in another process (expected sequence number 5)
lxc-start: 105: start.c: __lxc_start: 2073 Failed to spawn container "105"
lxc-start: 105: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: 105: tools/lxc_start.c: main: 313 Additional information can be obtained by setting the --logfile and --logpriority options

lxc_105.log
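For anyone hitting the same proc-mount failure, checking the state of the lxcfs FUSE mount on the host may also help; a minimal sketch:

Bash:
# Verify the lxcfs FUSE mount and service state on the host
grep lxcfs /proc/mounts
systemctl status lxcfs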
 
Seems like it still isn't safe to upgrade to 7.1? Has anyone who has done the upgrade NOT had disk issues?
 
Has anyone who has done the upgrade NOT had disk issues?
me :-)

But due to the many messages here about this problem, we have only migrated a few VMs to the new 7.1 so far; all of them without any errors.
 
But due to the many messages here about this problem, we have only migrated a few VMs to the new 7.1 so far; all of them without any errors.
What is your VM config? Which I/O drivers, which OS?
 
