Search results

  1. PVE 7.0.x and LXC USB passthrough issue

    Dear PVE users, I was previously running an LXC container with USB passthrough smoothly on PVE 6.4.x. After an upgrade from PVE 6.4.x to PVE 7.0.x, this container can no longer access the USB device. From the PVE server: :~# test -w /dev/ttyACM0 && echo success || echo failure success :~# ls...
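
    A likely cause on PVE 7.x is the switch to a pure cgroup v2 hierarchy, which makes old lxc.cgroup.devices.allow lines ineffective. A minimal sketch of the cgroup v2 syntax (the container ID 100 and the ttyACM major number 166 are assumptions, not taken from the post):

      # check the device's major/minor numbers on the host first
      :~# ls -l /dev/ttyACM0
      # /etc/pve/lxc/100.conf -- note the "cgroup2" prefix required on PVE 7.x
      lxc.cgroup2.devices.allow: c 166:* rwm
      lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file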
  2. Upgrade PVE 6.x to 7.x : Grub issues

    Hello Fabian, I've generated a VG backup using vgcfgbackup pve on both nodes and they are identical except, of course, for the node name and the lvmid (PV & VG). I have no idea why only one node's upgrade went wrong, so I've decided to do a fresh install of PVE 7.x on this node and it was a...
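
    For reference, a minimal sketch of how such a comparison can be done (the dump paths and node names are assumptions):

      # dump the metadata of the pve volume group to a plain-text file
      :~# vgcfgbackup pve -f /root/pve-vg-$(hostname).conf
      # copy the dump over from the other node, then compare
      :~# diff /root/pve-vg-node1.conf /root/pve-vg-node2.conf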
  3. Upgrade PVE 6.x to 7.x : Grub issues

    Hello @fabian, I performed additional tests and downgraded the grub version. After the 6.x to 7.x upgrade the grub version is grub2/testing 2.04-19 amd64. I downgraded grub to the PVE 6.x version based on Buster, which worked well: grub2/now 2.02+dfsg1-20+deb10u4 amd64. I've added these...
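
    A rough sketch of how such a downgrade can be pinned with apt, assuming a legacy BIOS install using grub-pc (the exact package set is an assumption):

      # temporarily make the Buster packages available
      :~# echo 'deb http://deb.debian.org/debian buster main' > /etc/apt/sources.list.d/buster-grub.list
      :~# apt update
      # install the Buster grub version quoted above
      :~# apt install grub-pc=2.02+dfsg1-20+deb10u4 grub-common=2.02+dfsg1-20+deb10u4 grub2-common=2.02+dfsg1-20+deb10u4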
  4. Upgrade PVE 6.x to 7.x : Grub issues

    Here is the grub.cfg file from the non-working node. It contains much less information than the grub.cfg from a working node.
  5. Upgrade PVE 6.x to 7.x : Grub issues

    "Grub fails to find LVM volume after previous LV rename" seems to be different because I hadn't renamed or changed the LVM. Do you have any other tess or suggestions ? I hesitate to reinstall everything.
  6. Upgrade PVE 6.x to 7.x : Grub issues

    The auto-generated grub.cfg is really different on the non-working node compared to the working nodes. On the non-working node, rebooting still works, but I'm not comfortable using it with a potential grub failure.
  7. Upgrade PVE 6.x to 7.x : Grub issues

    Yes, I think so too. I've attached two files: one from the non-working node and one from a working node.
  8. Upgrade PVE 6.x to 7.x : Grub issues

    :~# grub-probe --target=fs_uuid --device /dev/mapper/pve-root
    grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
    :~# grub-probe --target=partuuid --device /dev/mapper/pve-root
    grub-probe: error: disk...
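
    One way to get more detail out of grub-probe when it cannot resolve the lvmid is to raise the verbosity and ask which abstraction layer it detects (the grep filter is only an assumption about what is relevant):

      :~# grub-probe -v --target=fs_uuid --device /dev/mapper/pve-root 2>&1 | grep -i lvm
      :~# grub-probe --target=abstraction --device /dev/mapper/pve-root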
  9. Upgrade PVE 6.x to 7.x : Grub issues

    Yes, update-grub2 produces errors, and so does grub-probe.
    :~# update-grub2
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-5.11.22-1-pve
    Found initrd image: /boot/initrd.img-5.11.22-1-pve
    /usr/sbin/grub-probe: error: disk...
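
    As a side note, update-grub(2) is a thin wrapper around grub-mkconfig, so the generation step can be pointed at a scratch file to inspect the result without touching /boot (the /tmp path is an assumption):

      :~# grub-mkconfig -o /tmp/grub.cfg.test
      :~# grep -c menuentry /tmp/grub.cfg.test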
  10. Upgrade PVE 6.x to 7.x : Grub issues

    Hello, many thanks for your input. Unfortunately I don't have any snapshot, and I haven't found a solution. Regards,
  11. Upgrade PVE 6.x to 7.x : Grub issues

    I've probably found the origin of this issue. On the non-working node, the /boot/grub/grub.cfg content is really different: a lot of info is missing. Extract from the non-working node:
    .....
    .....
    function load_video { if [ x$feature_all_video_module = xy ]; then insmod...
  12. Upgrade PVE 6.x to 7.x : Grub issues

    Same results on a non-working node and a working node
    :~# grub-probe --target=device /
    /dev/mapper/pve-root
    :~# grub-probe --target=device /boot
    /dev/mapper/pve-root
  13. Upgrade PVE 6.x to 7.x : Grub issues

    :~# lvdisplay pve/root
      --- Logical volume ---
      LV Path                /dev/pve/root
      LV Name                root
      VG Name                pve
      LV UUID                YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa
      LV Write Access        read/write
      LV Creation host, time proxmox, 2020-03-20...
  14. Upgrade PVE 6.x to 7.x : Grub issues

    lvs output from the non-working node
    ~# lvs
      LV                                             VG  Attr  LSize  Pool  Origin  Data%  Meta%  Move  Log  Cpy%Sync  Convert
      osd-block-f52bce18-3afb-4f11-b380-80ddcdd0b3ef...
  15. Upgrade PVE 6.x to 7.x : Grub issues

    Perhaps a clue: on the non-working node the VG UUID is OK but the PV UUID looks different. @fabian, please let me know what you think? YAGA
    :~# pvs -a
      PV             VG  Fmt  Attr  PSize  PFree
      /dev/nvme0n1        ---...
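
    A short sketch of how the UUIDs can be printed explicitly for comparison against the vgcfgbackup dumps (the column selection is an assumption):

      :~# pvs -o pv_name,pv_uuid,vg_name,vg_uuid
      :~# vgs -o vg_name,vg_uuid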
  16. Upgrade PVE 6.x to 7.x : Grub issues

    Hello Fabian, Here is the info for a non-working node
    # lvmconfig --typeconfig full devices/global_filter
    global_filter="r|/dev/zd.*|"
    # cat /etc/default/grub /etc/default/grub.d/*
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full...
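
    To spot further LVM settings that differ between the nodes, the non-default configuration can also be dumped in one go (a sketch of an additional comparison step, not something quoted from the thread):

      # print only settings that deviate from the compiled-in defaults
      :~# lvmconfig --typeconfig diff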
  17. Upgrade PVE 6.x to 7.x : Grub issues

    Hi Proxmox team and Proxmox users, As suggested by @fabian, here is a new thread for this issue. Congratulations to the team on the Proxmox 7 release. I've upgraded a 4-node cluster with an NVMe SSD drive (nvme0n1) for the filesystem and Ceph (sda, sdb) from the latest 6.x to 7.x. Nodes are...
  18. Proxmox VE 7.0 released!

    Hi Proxmox team and Proxmox users, Congratulations to the team on the Proxmox 7 release. I've upgraded a 4-node cluster with an NVMe main drive and Ceph from the latest 6.x to 7. The nodes are identical: same hardware, same Proxmox release, same configuration. The first three node upgrades were OK but...
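
    For context, the standard pre-upgrade check on each node is the pve6to7 script shipped with PVE 6.4; running it before and after the dist-upgrade is the documented workflow:

      # full checklist, including storage and Ceph checks
      :~# pve6to7 --full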
  19. How to prevent automatic apt upgrade during the first boot with cloud init ?

    Hi, Based on the cloud-init examples at https://cloudinit.readthedocs.io/en/latest/topics/examples.html#update-apt-database-on-first-boot I would like to set: package_update: false and package_upgrade: false, but I don't know how to do that with Proxmox. Many thanks for your help, Regards, YAGA
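
    A minimal sketch of how this can be done with a custom user-data snippet and qm's cicustom option (VMID 9000, the 'local' storage and the snippet file name are assumptions):

      # snippets must be enabled on the storage; /var/lib/vz/snippets is the default path for 'local'
      :~# printf '%s\n' '#cloud-config' 'package_update: false' 'package_upgrade: false' \
            > /var/lib/vz/snippets/no-first-boot-upgrade.yaml
      # attach the snippet as the user-data part of the VM's cloud-init config
      :~# qm set 9000 --cicustom "user=local:snippets/no-first-boot-upgrade.yaml"

    Note that a custom user part replaces the user data Proxmox would otherwise generate (default user, SSH keys), so those settings need to be carried over into the snippet as well.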