Indeed, the P4 is supported, as far as I could find in the official docs.
I have also tried using both the non-patched and patched versions (from polloloco) of the 535.161.05 base GPU driver. Before swapping between the patched and non-patched versions I always uninstalled the drivers gracefully, but the...
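For reference, the graceful swap sequence I use is roughly the following (the .run file names are illustrative, and the patched one is whatever comes out of polloloco's patcher on your side):

dkms remove -m nvidia -v 535.161.05 --all            # drop the module from dkms first
./NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run --uninstall
reboot
./NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm-custom.run --dkms
reboot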
Hello,
@dcsapak - Thank you for the tip on switching the kernel. I pinned the 6.2.11-2-pve kernel, rebuilt the 535.161.05 driver with dkms, applied the unlock patch and got back to testing.
After the reboot I could see that dmesg shows:
[nvidia-vgpu-vfio] 00000000-0000-0000-0000-000000008888...
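For completeness, the kernel switch itself was only a few commands (the package name is what my repo carried at the time; adjust to whatever yours offers):

apt install pve-kernel-6.2.11-2-pve
proxmox-boot-tool kernel pin 6.2.11-2-pve
reboot
dkms install -m nvidia -v 535.161.05 -k $(uname -r)   # rebuild against the new kernel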
Thanks for the tip, I'll give it a try and post back with the outcome later this week.
Any other clues on which driver versions prior to 535.161.05 might do the migration trick, before swapping off the 5.15 kernel branch?
Hello to all
Did anybody manage to enable vfio live migration in the 535.161.05 driver?
I have tried placing both the old flag (NV_KVM_MIGRATION_UAPI=1) and the new one (NV_VFIO_DEVICE_MIG_STATE_PRESENT=1) in the following files before install and dkms build:
...
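For anyone wanting to reproduce the experiment: one generic place to inject such a flag is the MAKE line of the driver's dkms.conf, so it reaches the kernel module build. Both the path and the actual effect of the flag here are assumptions on my side, purely illustrative:

# path varies per driver version; prefix the flag onto the dkms MAKE command
sed -i 's|^MAKE\[0\]="|MAKE[0]="NV_VFIO_DEVICE_MIG_STATE_PRESENT=1 |' \
    /usr/src/nvidia-535.161.05/dkms.conf
dkms build   -m nvidia -v 535.161.05
dkms install -m nvidia -v 535.161.05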
Thanks for the hint on the bug, mate.
We do run jumbo frames inside our networks and have separate Arista switches with MLAG for the "storage network" traffic, so I'll poke around and check our link against that bug.
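Meanwhile, this is the quick end-to-end jumbo check I run from a node towards the storage (8972 = 9000 MTU minus 28 bytes of IP/ICMP headers; the address is a placeholder for your storage IP):

ping -M do -s 8972 -c 4 10.10.10.10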
Tell me... did you put qemu 5.2 on hold in apt, meaning that when running a system package upgrade...
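Concretely, I mean something like this (pve-qemu-kvm being the package that ships QEMU on PVE):

apt-mark hold pve-qemu-kvm
apt-mark showhold        # confirm the hold is in place
apt full-upgrade         # upgrade the rest of the system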
Also tested with
pve-manager/7.1-6/4e61e21c (running kernel: 5.11.22-4-pve)
and at 28% of the backup being taken:
Nov 28 03:12:44 ********** kernel: [ 72.880477] device tap444i0 entered promiscuous mode
Nov 28 03:13:05 ********** kernel: [ 93.712445] connection1:0: detected conn error (1020)
Nov...
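If anyone else hits the conn error (1020) during backups, the knob I'm going to experiment with is the iSCSI replacement timeout (default 120s; the value and target name below are just what I'm testing, not a recommendation):

# /etc/iscsi/iscsid.conf - affects newly created sessions
node.session.timeo.replacement_timeout = 300
# apply the same to an already-discovered node record
iscsiadm -m node -T iqn.2021-11.local.truenas:backup -o update \
  -n node.session.timeo.replacement_timeout -v 300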
Hello,
I am getting back with some more info on this.
Indeed it seems to be an issue with the new Proxmox release: using the same hardware (previously installed with version 7 and fully upgraded, both the OS and the Proxmox packages from the pve-no-subscription repository), I swapped the OS hard drives and installed a fresh 6...
Hi,
I am getting similar issues with LVM and an iSCSI connection from a 3-node cluster with the latest Prox 7 installed today (pve-manager/7.1-6/4e61e21c, running kernel 5.13.19-1-pve) and TrueNAS 12 storage (a Dell R510), via a clustered 10 GbE fiber connection into some Arista switches.
The same...
Just writing a quick post on the resize part issue; maybe it will come in useful to someone someday:
So,
1st - install the cloud-guest-utils package in deb9 to get the growpart binary
2nd - in the cloud.cfg file add the following:
bootcmd:
- [ /usr/bin/growpart, /dev/vda, "1" ]
Just tested it...
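If the bootcmd route misbehaves, the same call can be tested by hand inside the guest (device names are from my template, adjust for yours):

/usr/bin/growpart /dev/vda 1   # grow partition 1 to fill the disk
resize2fs /dev/vda1            # then grow the ext4 filesystem on it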
Hi. It doesn't work.
Bumping into the same issue right now while making the template for deb9 with cloud-init 20.1.
So... one idea would be to use a custom script at start-up and add it via bootcmd into cloud.cfg, which might work.
Thank you for replying back on my post.
I did check further on the system to see if I had any other failed services in systemctl, and I found that lxcfs.service was in a failed state because /var/lib/lxcfs/ was not empty once the node started.
With all CTs stopped I did a rm -rf...
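For anyone landing here later, the cleanup boiled down to something like this (a sketch only; double-check the path before the rm):

pct list                      # confirm every CT is stopped
systemctl stop lxcfs
rm -rf /var/lib/lxcfs/*
systemctl start lxcfs
systemctl status lxcfs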
Hello,
I know this thread is old and hasn't been updated since 2017, but I need to report that I am still seeing the same strange behaviour even in 2020, so it's worth giving the forum a shout to see if anybody else has this issue.
I am using the following proxmox version...
Hello,
I am returning with an update.
I have reinstalled another D2950 server that was sitting in the closet, which has pretty much the same configuration as the current one, and installed Proxmox 5.1-32 on it from an old CD-ROM I had.
I have not run any kind of package upgrade on...
New update on the progress:
Added max_loop=255 to the kernel's grub boot args and restarted the bare-metal system, in an attempt to raise the maximum number of loop devices and overcome this limitation.
Afterwards, the container start/stop operations take considerably longer, but what...
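For reference, the exact change is the standard Debian grub workflow:

# in /etc/default/grub, append the argument:
GRUB_CMDLINE_LINUX_DEFAULT="quiet max_loop=255"
# then regenerate the config and reboot:
update-grub
reboot
# verify after boot:
cat /proc/cmdline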
Thanks for the feedback.
I'm pretty sure I'm not the only one trying to run a high number of CTs on a Prox box.
Regarding LVM, I haven't tried that, but for plain and quick manipulation of image files I'd rather stay on the file storage backend, because on the current storage there is also...
Hello guys,
I have a Dell 2950 server with one 120 GB SSD drive, a 1 TB 8-drive RAID 50 array for local storage, 64 GB of RAM and 2x X5460 CPUs. I have just installed the latest version of PVE in an attempt to run a test environment for client API call emulation on a software development project...
Hey guys,
I am encountering the same error with a much more up-to-date version of PVE.
Environment details:
7 prox nodes running the same version
each prox node connects via dual 10 Gbit NICs to 2 switches, forming an MLAG port-channel
each traffic type (including the cluster/corosync VLAN) is separated/designated...