Recent content by avladulescu

  1. Nvidia vGPU mdev and live migration

    Indeed, P4 is supported, as I found earlier in the official docs. I have also tried both the non-patched and patched versions (from polloloco) on the 535.161.05 base GPU driver. Before swapping between the patched and non-patched versions I used to uninstall the drivers gracefully, but the...
  2. Nvidia vGPU mdev and live migration

    Hello, @dcsapak - thank you for the tip on switching the kernel. I moved to the 6.2.11-2-pve kernel, rebuilt the 535.161.05 driver with dkms, applied the unlock patch and got back to testing. After reboot, dmesg shows: [nvidia-vgpu-vfio] 00000000-0000-0000-0000-000000008888...
  3. Nvidia vGPU mdev and live migration

    Thanks for the tip, I'll give it a try and post back the outcome later this week. Any other clues on which versions prior to 535.161.05 might do the migration trick before swapping the 5.15 kernel branch?
  4. Nvidia vGPU mdev and live migration

    Hello to all, did anybody manage to enable vfio live migration in the 535.161.05 driver? I have tried placing both the old flag (NV_KVM_MIGRATION_UAPI=1) and the new flag (NV_VFIO_DEVICE_MIG_STATE_PRESENT=1) in the following files before install and dkms build ...
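
The post above truncates the exact file list, so the following is a sketch only: one way such a define is commonly injected before a dkms rebuild is appending it to the module's Kbuild file. The path below is a hypothetical example, not a confirmed location.

```shell
# HYPOTHETICAL sketch: the real file names are truncated in the post.
# The flag name comes from the snippet; the Kbuild path is an assumed
# example location - adjust to wherever your driver source actually lives.
SRC=/usr/src/nvidia-535.161.05            # assumed dkms source directory
echo 'NVIDIA_CFLAGS += -DNV_VFIO_DEVICE_MIG_STATE_PRESENT=1' >> "$SRC/nvidia/nvidia.Kbuild"
dkms build nvidia/535.161.05              # rebuild the module with the flag
dkms install nvidia/535.161.05
```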
  5. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Thanks for the hint on the bug, mate. We do run jumbo frames inside our networks and have separate Arista switches for "storage network" traffic with MLAG, so I'll poke around and check the link with the bug. Tell me... did you put qemu 5.2 on hold in apt, meaning that running a system package upgrade...
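
For reference, holding a package back so a routine upgrade skips it can be sketched like this; pve-qemu-kvm is assumed here to be the relevant Proxmox QEMU package:

```shell
# Sketch: freeze the installed QEMU build so "apt upgrade" skips it.
# The package name pve-qemu-kvm is an assumption; check dpkg -l for yours.
apt-mark hold pve-qemu-kvm     # pin the currently installed version
apt-mark showhold              # confirm the package is listed as held
# apt-mark unhold pve-qemu-kvm # resume normal upgrades later
```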
  6. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Also tested with pve-manager/7.1-6/4e61e21c (running kernel: 5.11.22-4-pve); at 28% of the backup:
    Nov 28 03:12:44 ********** kernel: [ 72.880477] device tap444i0 entered promiscuous mode
    Nov 28 03:13:05 ********** kernel: [ 93.712445] connection1:0: detected conn error (1020) Nov...
  7. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Hello, I am getting back with some more info on this. It does seem to be an issue with the new Proxmox release: using hardware previously installed with version 7 and fully upgraded (both OS and Proxmox packages from the pve-no-subscription repository), I swapped the OS hard drives and installed a fresh 6...
  8. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Hi, I am getting similar issues with LVM and an iSCSI connection from a 3-node cluster with the latest Proxmox 7 installed today (pve-manager/7.1-6/4e61e21c, running kernel: 5.13.19-1-pve) and a TrueNAS 12 storage box (Dell R510), via a clustered 10 GbE fiber connection into some Arista switches. The same...
  9. [SOLVED] Change password cloud-init does not work

    Just writing a quick post on the resize issue; maybe it will come in useful to someone someday. So, 1st - install the cloud-guest-utils package in deb9 to get the growpart binary. 2nd - in the cloud.cfg file add the following: bootcmd: - [ /usr/bin/growpart, "/dev/vda 1" ] Just tested it...
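
A note on the bootcmd line above: growpart takes the disk and the partition number as two separate arguments, so a sketch with the arguments split (device /dev/vda and partition 1 are assumed from the post) would be:

```shell
# Sketch, assuming the deb9 guest and /dev/vda from the post above.
apt-get install -y cloud-guest-utils   # provides /usr/bin/growpart

# /etc/cloud/cloud.cfg fragment - growpart wants "disk" and "partition
# number" as separate list items:
#
# bootcmd:
#   - [ /usr/bin/growpart, /dev/vda, "1" ]
#
# cloud-init also ships its own growpart module, which avoids the
# bootcmd entry entirely:
#
# growpart:
#   mode: auto
#   devices: ["/"]
```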
  10. [SOLVED] Change password cloud-init does not work

    Hi, it doesn't work. I'm bumping into the same issue right now trying to make the template for deb9 with cloud-init 20.1. So... one idea would be to run a custom script at start-up, added via bootcmd in cloud.cfg, which might work.
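
As a sketch of that idea on Proxmox, a custom cloud-config snippet can set the password directly with cloud-init's chpasswd directive instead of a bootcmd script; the snippet path, user name, password, and VM id below are placeholders for illustration:

```shell
# Sketch only: write a custom cloud-config snippet for Proxmox's
# cicustom option. User "debian" and the password are placeholders.
mkdir -p /var/lib/vz/snippets
cat > /var/lib/vz/snippets/user-data.yml <<'EOF'
#cloud-config
chpasswd:
  list: |
    debian:changeme123
  expire: false
ssh_pwauth: true
EOF
# Attach it to a VM (VM id 9000 is a placeholder):
# qm set 9000 --cicustom "user=local:snippets/user-data.yml"
```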
  11. [SOLVED] Nvidia vGPU

    Hello, is there any update on this topic concerning Proxmox 6.1-8/6.2 for the Nvidia GRID K2, passing it to multiple KVM guests? Thanks, Alex
  12. htop show incorrect data inside LXC Container

    Thank you for replying to my post. I checked the system further for other failed services in systemctl and found that lxcfs.service had a failed status because /var/lib/lxcfs/ was not empty when the node started. With all CTs stopped I did a rm -rf...
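
The recovery described above can be sketched as follows; run as root on the PVE node, with every container stopped first (the service name and path come from the post):

```shell
# Sketch of the cleanup from the post: stop lxcfs, clear the stale
# /var/lib/lxcfs contents that block startup, then restart the service.
pct list                         # make sure no CT is running first
systemctl stop lxcfs.service
rm -rf /var/lib/lxcfs/*          # remove the leftover entries
systemctl start lxcfs.service
systemctl status lxcfs.service --no-pager   # confirm it is now active
```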
  13. htop show incorrect data inside LXC Container

    Hello, I know this thread is old and hasn't been updated since 2017, but I have to report that I am still experiencing the same strange behaviour in 2020, so it's worth giving the forum a shout on this to see if anybody else has the issue. I am using the following proxmox version...
  14. Proxmox 5.4.6 CT or kernel limits - issue/bug on high CT number?

    Thanks for sharing. What about Ceph - is the loop issue present on Ceph too? Alex.
