Search results

  1. mir

    Upgrading to pve-9.1: apparmor.service

    Hi, This is the result: # find /etc/apparmor.d/ /etc/apparmor.d/lxc -maxdepth 1 -type f -exec /sbin/apparmor_parser -N {} \; ch-run /usr/lib/NetworkManager/nm-dhcp-client.action /usr/lib/connman/scripts/dhclient-script /usr/{lib/NetworkManager,libexec}/nm-dhcp-helper /{,usr/}sbin/dhclient...
  2. mir

    Upgrading to pve-9.1: apparmor.service

    Reported on the bug tracker: https://bugzilla.proxmox.com/show_bug.cgi?id=7203
  3. mir

    [SOLVED] Connecting a Open Media Vault Server as storage

    If you check the Advanced box, what version of NFS do you see? I guess you see 'Default', which is NFS version 3. I am pretty sure OMV uses NFS >= 4.0.
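
    As a sketch of what that fix looks like: if OMV exports NFS 4.x, the storage entry in /etc/pve/storage.cfg can pin the version via the options field. The storage name, server address, and export path below are made-up placeholders:

        nfs: omv
                server 192.168.1.50        # hypothetical OMV address
                export /export/pve         # hypothetical export path
                content images,backup
                options vers=4.2           # force NFSv4.2 instead of the default v3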
  4. mir

    Upgrading to pve-9.1: apparmor.service

    apt reinstall apparmor libapparmor1:amd64 Summary: Upgrading: 0, Installing: 0, Reinstalling: 2, Removing: 0, Not Upgrading: 0 Download size: 755 kB Space needed: 0 B / 3,915 MB available Get:1 http://download.proxmox.com/debian/pve trixie/pve-no-subscription amd64...
  5. mir

    Upgrading to pve-9.1: apparmor.service

    Thanks. Got this result: conflicting flag values = nvidia_modprobe 4097nvidia_modprobe//kmod , 1 conflicting flags in the rule Running the command a second time: conflicting flag values = notepadqq 4097, 1 conflicting flags in the rule Each time I run the command it complains of a different...
  6. mir

    Upgrading to pve-9.1: apparmor.service

    pct start 103 --debug has this interesting info:
  7. mir

    Upgrading to pve-9.1: apparmor.service

    Could it be related to this old bug? LXC container fails to start without any reason
  8. mir

    Upgrading to pve-9.1: apparmor.service

    It seems the problem is an LXC container with a mount point. Attached is the debug log from starting the container like this: lxc-start -n 103 -F -lDEBUG -o /tmp/103.log
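
    To see which mount points a container defines, something like the following should work; the container ID 103 comes from the thread, while the mp0 line is a made-up example of what such an entry typically looks like:

        # list the container's mount-point entries
        pct config 103 | grep '^mp'
        # a typical entry in /etc/pve/lxc/103.conf looks like:
        # mp0: /mnt/data,mp=/data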
  9. mir

    Upgrading to pve-9.1: apparmor.service

    What does the following mean? $ sudo systemctl status apparmor.service × apparmor.service - Load AppArmor profiles Loaded: loaded (/usr/lib/systemd/system/apparmor.service; enabled; preset: enabled) Active: failed (Result: exit-code) since Sat 2026-01-03 02:11:52 CET; 12min ago...
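
    It means the unit ran and exited non-zero, i.e. at least one profile failed to load. A sketch of the usual next step, using standard systemd tooling (the profile path is a placeholder):

        # show the unit's log lines from the current boot
        journalctl -b -u apparmor.service
        # dry-run parse a single suspect profile to see the actual error
        apparmor_parser -N /etc/apparmor.d/some-profile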
  10. mir

    [solved] upgrade from pve 8.4.16 to pve 9.1 failed

    The devil surely is in the details ;) The little pin which made the castle crumble was the following: a mount flag on the root partition which apparently triggers a regression in systemd-remount-fs.service. It fails if the following mount flag is present on a root ext4 filesystem: nodelalloc...
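
    A sketch of the check and fix, assuming the flag sits in /etc/fstab (the example fstab line is a placeholder):

        # confirm the root filesystem is mounted with nodelalloc
        findmnt -no OPTIONS /
        grep nodelalloc /etc/fstab
        # remove nodelalloc from the root entry in /etc/fstab, e.g.
        # /dev/pve/root / ext4 errors=remount-ro 0 1
        # then verify systemd can remount root read-write again
        systemctl restart systemd-remount-fs.service
        findmnt -no OPTIONS /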
  11. mir

    [solved] upgrade from pve 8.4.16 to pve 9.1 failed

    Pinning 6.8.12-17-pve for now until hopefully someone finds a solution. proxmox-boot-tool kernel pin 6.8.12-17-pve
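
    For reference, the surrounding proxmox-boot-tool subcommands (all documented Proxmox tooling):

        proxmox-boot-tool kernel list            # show available and pinned kernels
        proxmox-boot-tool kernel pin 6.8.12-17-pve
        proxmox-boot-tool kernel unpin           # drop the pin once a fix lands
        proxmox-boot-tool refresh                # re-sync the boot entries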
  12. mir

    [solved] upgrade from pve 8.4.16 to pve 9.1 failed

    I have also tried these kernels, with the same poor result: proxmox-kernel-6.14.8-2-pve-signed proxmox-kernel-6.14.11-5-pve-signed proxmox-kernel-6.17.2-1-pve-signed BTW. the AMD Opteron(tm) Processor 3365 does support aes: lscpu |grep aes Flags: fpu vme de pse tsc msr pae...
  13. mir

    [solved] upgrade from pve 8.4.16 to pve 9.1 failed

    Hi all, I tried upgrading my server from 8.4 to 9.1, but after the upgrade the server fails somehow and boots with root in read-only (ro) mode. Starting the same server with the 8.4 kernel, everything works as expected. Old: pve-manager/8.4.16/368e3c45c15b895c (running kernel: 6.8.12-17-pve) New...
  14. mir

    Snapshots with Shared iscsi Storage

    It is explained here: https://pve.proxmox.com/pve-docs-9-beta/chapter-pvesm.html#storage_lvm Simply put, you need to add the option to the storage.cfg file yourself. E.g. a LUN exported from a Qnap server: iscsi: qnap portal 172.16.2.10 target...
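
    As a sketch of what the linked docs describe: snapshot support for thick LVM on shared storage in PVE 9 is switched on per storage with the snapshot-as-volume-chain option (verify the exact option name against your docs version; the storage name and volume group below are placeholders):

        lvm: san-lvm
                vgname vg_san                  # hypothetical volume group on the LUN
                shared 1
                content images
                snapshot-as-volume-chain 1     # enables snapshots on shared LVM (PVE 9)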
  15. mir

    x520-da2 fiber card purchased on aliexpress

    Maybe you have hit the same error as a user on Truenas Scale (Also based on Debian 12): https://www.truenas.com/community/threads/intel-82599es-10g-sfp-card-fails-to-probe.114256/
  16. mir

    HA light or for the poor

    If you have two nodes you can enable HA using a QDevice as a witness. See the options discussed here: https://www.youtube.com/watch?v=TXFYTQKYlno
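
    In Proxmox terms the witness is a corosync QDevice; the setup, with a placeholder witness address, looks roughly like this:

        # on the external witness host (any small Debian box)
        apt install corosync-qnetd
        # on both cluster nodes
        apt install corosync-qdevice
        # then, from one cluster node (witness IP is a placeholder)
        pvecm qdevice setup 192.168.1.5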
  17. mir

    ZFS over iSCSI - Increase disk size

    An unmount before resizing would have fixed the problem. A mounted disk can cause all kinds of errors. It could easily have been caused by an automatic rescan of the host bus which interfered with a filesystem that still had data waiting to be persisted to disk.
  18. mir

    ZFS over iSCSI - Increase disk size

    The safest way to expand a disk is to unmount it, then do the resize and expand the filesystem. After this, simply mount it again. It is all explained here: https://pve.proxmox.com/wiki/Resize_disks If the disk resizing is not automatically discovered by the VM you will have to initiate a...
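
    A sketch of that sequence for an ext4 data disk, assuming VM ID 100, disk scsi1, and /dev/sdb1 inside the guest (all placeholders):

        # on the PVE host: grow the virtual disk by 10 GiB
        qm resize 100 scsi1 +10G
        # inside the guest, with the filesystem unmounted:
        umount /mnt/data
        growpart /dev/sdb 1          # grow the partition (cloud-guest-utils)
        e2fsck -f /dev/sdb1
        resize2fs /dev/sdb1          # expand ext4 to fill the partition
        mount /mnt/data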
  19. mir

    PCI Video card recommendation for GPU Passthrough

    Intel Flex 170 see: https://www.youtube.com/watch?v=aYcntiF4j2Q&t=2s and: https://www.youtube.com/watch?v=tLK_i-TQ3kQ&t=919s
  20. mir

    [TUTORIAL] Guide: Setup ZFS-over-iSCSI with PVE 5x and FreeNAS 11+

    I am now very happy that I never upgrade before the first point release is available ;) I hope for native support in Truenas Scale so we can skip the plugin entirely and avoid patching Proxmox.