Search results

  1. Reduction of HDD space of a VM

    RAW allocates the full space on the disk (file image), whereas qcow2 grows gradually depending on the storage/image changes that take place inside the VM. The backup size depends on the type of backup and the format you are using. I would suggest using PBS in the end if possible. One other...
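
    As a quick way to see this on an existing image, qemu-img reports both the provisioned and the currently used size; the path below is just a typical Proxmox example for illustration, not taken from the post:

      qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
      # "virtual size" is what the VM was given, "disk size" is what the qcow2 file currently occupies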
  2. GPU VRAM sharing

    Hi Dominik, thanks for the reply. After some digging around, I came to understand that the shared portion of VRAM comes from system RAM itself, as some Windows processes also need GPU context switching in the WDDM driver; therefore it is not directly usable as video RAM, nor can...
  3. Reduction of HDD space of a VM

    If your VM is in qcow2 format, shut it down and use the following command:

      rsync -uitprvUHPS --no-W --inplace (source_path) (destination_path)

    Re-run it a couple of times after it finishes. The resulting qcow2 file will have its current size rather than the provisioned one. Mind that latest versions...
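
    A commonly used alternative for compacting a qcow2 (not from the post above, and the file names are placeholders) is to re-export the image with qemu-img while the VM is shut down:

      qemu-img convert -O qcow2 vm-100-disk-0.qcow2 vm-100-disk-0-compact.qcow2
      mv vm-100-disk-0-compact.qcow2 vm-100-disk-0.qcow2

    The convert step skips unallocated and zeroed clusters, so the new file only occupies what is actually in use.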
  4. GPU VRAM sharing

    Hello folks, I would be interested to find out whether any of you using vGPU in your VMs have managed to get VRAM sharing working. I have tried scouring the forum and the Internet to see whether this is achievable or not, beyond just allocating the usual Nvidia-specific profile sets for designated...
  5. Nvidia vGPU mdev and live migration

    Indeed, the P4 is supported, as I had found earlier in the official docs. I have also tried both the non-patched and patched versions (from polloloco) on the 535.161.05 base GPU driver. Before swapping between the patched and non-patched versions I would uninstall the drivers gracefully, but the...
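
    For reference, "uninstalling gracefully" with the .run-based NVIDIA packages is usually done with the uninstaller that the installer drops on the system; this is only a sketch, not the poster's exact procedure, and the installer file name below is illustrative:

      nvidia-uninstall
      # or, with the original installer file still at hand:
      sh ./NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run --uninstall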
  6. Nvidia vGPU mdev and live migration

    Hello, @dcsapak - thank you for the tip on switching the kernel. I moved to the 6.2.11-2-pve kernel, rebuilt the 535.161.05 driver with DKMS, applied the unlock patch and got back to testing. After the reboot I could see that dmesg shows: [nvidia-vgpu-vfio] 00000000-0000-0000-0000-000000008888...
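
    The rebuild itself would look roughly like this; the DKMS module name "nvidia" and the header package name are assumptions on my side, so check dkms status for the exact names on your host:

      apt install pve-headers-6.2.11-2-pve
      dkms build nvidia/535.161.05 -k 6.2.11-2-pve
      dkms install nvidia/535.161.05 -k 6.2.11-2-pve
      dkms status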
  7. Nvidia vGPU mdev and live migration

    Thanks for the tip, I'll give it a try and post the outcome back later this week. Any other clues on which versions prior to 535.161.05 might do the migration trick before swapping the 5.15 kernel branch?
  8. Nvidia vGPU mdev and live migration

    Hello to all. Did anybody manage to enable vfio live migration in the 535.161.05 driver? I have tried to place both the old flag (NV_KVM_MIGRATION_UAPI=1) and the new flag (NV_VFIO_DEVICE_MIG_STATE_PRESENT=1) in the following files before install and dkms build ...
  9. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Thanks for the hint on the bug, mate. We do run jumbo frames inside our networks and have separate Arista switches for "storage network" traffic with MLAG, so I'll poke at it and check the link with the bug. Tell me... did you put qemu 5.2 on hold in apt, meaning that running a system package upgrade...
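
    Holding a package in apt so a full upgrade leaves it alone looks like this; the package name below is my guess for the qemu build on PVE, so adjust it to what dpkg -l shows on your node:

      apt-mark hold pve-qemu-kvm
      apt-mark showhold
      apt full-upgrade   # held packages are kept back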
  10. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Also tested with pve-manager/7.1-6/4e61e21c (running kernel: 5.11.22-4-pve); at 28% of the backup being taken:

      Nov 28 03:12:44 ********** kernel: [ 72.880477] device tap444i0 entered promiscuous mode
      Nov 28 03:13:05 ********** kernel: [ 93.712445] connection1:0: detected conn error (1020)
      Nov...
  11. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Hello, I am getting back with some more info on this. It does indeed seem to be an issue with the new Proxmox release: using the same hardware (previously installed with version 7 and fully upgraded - both OS and Proxmox packages from the pve-no-subscription repository), I swapped the OS hard drives, installed a fresh 6...
  12. vzdump causes iSCSI connection lost on one server model with PVE 7, but not on another model

    Hi, I am getting similar issues with LVM and the iSCSI connection from a 3-node cluster running the latest Proxmox 7 installed today (pve-manager/7.1-6/4e61e21c, running kernel 5.13.19-1-pve) and a TrueNAS 12 storage box (Dell R510), connected over a clustered 10 GbE fiber link into some Arista switches. The same...
  13. [SOLVED] Change password cloud-init does not work

    Just writing a quick post on the resize part of the issue; maybe it will come in useful to someone someday. So: 1st - install the cloud-guest-utils package in deb9 to get the growpart binary; 2nd - in the cloud.cfg file add the following: bootcmd: - [ /usr/bin/growpart, "/dev/vda 1" ] Just tested it...
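
    Laid out the way it would sit in /etc/cloud/cloud.cfg, the idea from the post is roughly the following; I am using the plain string form of bootcmd rather than the poster's exact quoting, and the device/partition follow the poster's example:

      # step 1: inside the deb9 template
      apt-get install cloud-guest-utils

      # step 2: in /etc/cloud/cloud.cfg
      bootcmd:
        - growpart /dev/vda 1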
  14. [SOLVED] Change password cloud-init does not work

    Hi, it doesn't work. I'm bumping into the same issue right now trying to make the template for deb9 with cloud-init 20.1. So one idea would be to use a custom script at start-up and add it via bootcmd into the cloud.cfg, which might work.
  15. [SOLVED] Nvidia vGPU

    Hello, is there any update on this topic concerning Proxmox 6.1-8/6.2 and passing an Nvidia Grid K2 to multiple KVM guests? Thanks, Alex
  16. htop show incorrect data inside LXC Container

    Thank you for replying to my post. I checked the system further to see whether I had any other failed services in systemctl and found that lxcfs.service was in a failed state because /var/lib/lxcfs/ was not empty once the node started. With all CTs stopped I did a rm -rf...
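
    The sequence described there would roughly be the following; this is only a sketch of the post's own steps, and the rm -rf is destructive, so run it only with every container stopped:

      systemctl status lxcfs.service   # shows the failed state
      pct list                         # confirm all containers are stopped
      systemctl stop lxcfs.service
      rm -rf /var/lib/lxcfs/*
      systemctl start lxcfs.service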
  17. htop show incorrect data inside LXC Container

    Hello, I know this thread is old and hasn't been updated since 2017, but I need to report that I am still seeing the same strange behaviour even in 2020, so it's worth giving the forum a shout on this to see if anybody else has the issue. I am using the following Proxmox version...
  18. Proxmox 5.4.6 CT or kernel limits - issue/bug on high CT number?

    Thanks for sharing; what about Ceph? Is the loop issue present on Ceph too? Alex.
  19. Proxmox 5.4.6 CT or kernel limits - issue/bug on high CT number?

    Hello, I am returning with an update. I have reinstalled another d2950 server that was sitting in the closet and has pretty much the same configuration as the current one, and installed Proxmox 5.1-32 on it from an old CD-ROM I had. I have not run any kind of package upgrade on...