[SOLVED] Can't upgrade PVE 7.0 with udev error

Hello!

I have been scratching my head over why I cannot do a normal upgrade of my system. It started a couple of days ago and got to the point where I decided to do a clean install of PVE 7.0. But after setting everything up, the same problem comes up even before I make any changes on the server, so I can't update right out of the box. I copied the 'apt dist-upgrade' output and the logs recommended in the error message; both can be found in the attached files.

Some apt output:
Setting up udev (247.3-6) ...
Job for systemd-udevd.service failed because the control process exited with error code.
See "systemctl status systemd-udevd.service" and "journalctl -xe" for details.
invoke-rc.d: initscript udev, action "restart" failed.
● systemd-udevd.service - Rule-based Manager for Device Events and Files
Loaded: loaded (/lib/systemd/system/systemd-udevd.service; static)
Active: failed (Result: exit-code) since Thu 2021-08-05 16:00:50 CEST; 18ms ago
TriggeredBy: ● systemd-udevd-kernel.socket
● systemd-udevd-control.socket
Docs: man:systemd-udevd.service(8)
man:udev(7)
Process: 39547 ExecStart=/lib/systemd/systemd-udevd (code=exited, status=226/NAMESPACE)
Main PID: 39547 (code=exited, status=226/NAMESPACE)
CPU: 589us

Aug 05 16:00:50 proxmox systemd[1]: systemd-udevd.service: Scheduled restart job, restart counter is at 5.
Aug 05 16:00:50 proxmox systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Aug 05 16:00:50 proxmox systemd[1]: systemd-udevd.service: Start request repeated too quickly.
Aug 05 16:00:50 proxmox systemd[1]: systemd-udevd.service: Failed with result 'exit-code'.
Aug 05 16:00:50 proxmox systemd[1]: Failed to start Rule-based Manager for Device Events and Files.


This is part of the journal output (I don't know how relevant this is):
Aug 05 15:38:14 proxmox systemd[14624]: systemd-udevd.service: Failed to set up mount namespacing: /run/systemd/unit-root/proc/sys/kernel/domainname: No such file or directory
Aug 05 15:38:14 proxmox systemd[14624]: systemd-udevd.service: Failed at step NAMESPACE spawning /lib/systemd/systemd-udevd: No such file or directory

Any ideas?
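(As an aside: status=226/NAMESPACE means systemd could not set up the mount namespace for the service, so it is worth checking whether more than one filesystem is mounted over /. A minimal check, assuming standard tools and a ZFS-based install:)

Code:
grep ' / ' /proc/mounts                          # more than one line means something is mounted over /
zfs list -o name,mountpoint,canmount,mounted     # only if the install uses ZFS; shows any dataset configured to mount at /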
 
Is this PVE system installed on bare metal?
 
could you post the full journal since boot (journalctl -b)?
 
Okay, so lxcfs crashes with a FUSE error, in addition to udev crashing because of a lack of namespace support. Could you provide the output of mount? Thanks!

Something rather strange is going on. This is a clean PVE 7 install? Did you enable access to a Proxmox repository yet? If not, please do so and try to update to the newest PVE packages as well.
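(For reference, a minimal sketch of enabling the no-subscription repository on PVE 7 / Debian Bullseye; the file name below is only a suggestion:)

Code:
echo 'deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription' \
  > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt dist-upgrade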
 
Sure thing, the output is attached to the original post.

Yes. I even downloaded the ISO again and used a different USB stick for the installation, and the only thing I do is comment out the PVE Enterprise repository. After that I proceed with 'apt update' followed by 'apt dist-upgrade', which results in the output attached above.

EDIT:

/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

The line above was appended to the output of the mount command after running 'apt update' and 'apt dist-upgrade'.
 
You have two pools mounted over each other (or more specifically, you have two pools with a dataset for /, and BOTH are mounted). Rectify that situation (e.g., by booting a live disk / using the installer ISO shell, importing the second pool, and setting canmount=off or a different mountpoint for the problematic dataset).
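(A rough sketch of that procedure from the installer ISO's debug shell; the pool and dataset names oldpool and oldpool/ROOT/pve-1 below are placeholders, use whatever zpool import actually lists. If both pools ended up with the default name rpool, import the offending one by the numeric ID that zpool import prints and give it a temporary name.)

Code:
zpool import                                     # list importable pools (names and numeric IDs)
zpool import -N oldpool                          # -N: import without mounting any datasets
zfs list -r -o name,mountpoint,canmount,mounted oldpool
zfs set canmount=off oldpool/ROOT/pve-1          # stop the extra root dataset from mounting...
# ...or give it a non-conflicting mountpoint instead:
# zfs set mountpoint=/oldroot oldpool/ROOT/pve-1
zpool export oldpool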
 
Thank you for your help!
 

I have a pool where the root file system was part of it. I now want to move the root to another drive and keep just the data on the pool. I'd like to temporarily mount the old root somewhere so I can copy over some configuration details and scripts. Is that possible?

Is it possible to remove the old root parts and reuse the space?

Code:
# zfs get -r canmount
NAME                          PROPERTY  VALUE     SOURCE
rpool                         canmount  on        default
rpool/ROOT                    canmount  on        default
rpool/ROOT/pve-1              canmount  on        default
rpool/data                    canmount  on        default
rpool/data/subvol-200-disk-0  canmount  on        default
rpool/repos                   canmount  on        default
rpool/secdata                 canmount  on        default

# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         6.39T   763G   139K  /rpool
rpool/ROOT                    49.9G   763G   128K  /rpool/ROOT
rpool/ROOT/pve-1              49.9G   763G  49.9G  /
rpool/data                    14.2G   763G   128K  /rpool/data
rpool/data/subvol-200-disk-0  14.2G  23.8G  14.2G  /rpool/data/subvol-200-disk-0
rpool/repos                   2.47T   763G  2.47T  /rpool/repos
rpool/secdata                 3.86T   763G  3.86T  /rpool/secdata

# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           32G     0   32G   0% /dev
tmpfs                         6.3G  1.2M  6.3G   1% /run
rpool/ROOT/pve-1              813G   50G  763G   7% /
tmpfs                          32G   34M   32G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
efivarfs                      128K   39K   85K  32% /sys/firmware/efi/efivars
/dev/sde1                     7.3T  5.4T  1.6T  78% /mnt/backup
rpool                         763G  256K  763G   1% /rpool
rpool/ROOT                    763G  128K  763G   1% /rpool/ROOT
rpool/data                    763G  128K  763G   1% /rpool/data
rpool/repos                   3.3T  2.5T  763G  77% /rpool/repos
rpool/data/subvol-200-disk-0   38G   15G   24G  38% /rpool/data/subvol-200-disk-0
/dev/fuse                     128M   20K  128M   1% /etc/pve
tmpfs                         6.3G     0  6.3G   0% /run/user/0
rpool/secdata                 4.7T  3.9T  763G  84% /rpool/secdata
 
Yes, but moving the root file system is a bit complicated and you should know what you are doing.
 

I'm not trying to move the root filesystem.

I want to set up a new installation on an NVMe drive and import the existing ZFS pool. The problem is that, by doing so, the old root file system on the ZFS pool appears to cause the overlay problems described in the post I replied to. That is what I want to avoid.

The bonus question: after this has been resolved, can I mount the old root FS somewhere else so I can access old configurations, snapshots, etc., and eventually remove it?
 
Hi,
The mountpoint property of the ZFS datasets can be changed to something else (with zfs set). With that, you should be able to avoid the issue where two different datasets are mounted on top of each other. Of course, you should only do this for the current root pool if you don't plan to boot from it anymore!
 
Yes, I understand that much, that it's the zfs set command that is needed, but I can't find any examples of how to do it or a page explaining it.
 
See man zfs-set.
 
Something like this?

zfs set mountpoint=/oldroot rpool/ROOT/pve-1
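(That matches what man zfs-set describes. A minimal sketch of the whole sequence, assuming the pool is already imported on the new installation and rpool/ROOT/pve-1 is no longer the dataset you boot from; the /oldroot path is just an example:)

Code:
zfs set mountpoint=/oldroot rpool/ROOT/pve-1     # only safe once nothing boots from this dataset
zfs mount rpool/ROOT/pve-1                       # mount it, if it is not mounted already
# copy old configs/scripts out of /oldroot, then later reclaim the space, e.g.:
# zfs destroy -r rpool/ROOT                      # destructive: removes the old root and its snapshots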
 
