Should I dump them? I followed some guide somewhere on here a while ago when I was on version 7, and it worked, so I didn't really think about it.
Sorted by ID:
IOMMU Group 14 c1:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:22bc] (rev a1)
IOMMU Group 15 c2:00.0 Non-Volatile memory...
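In case anyone wants to reproduce a listing like the one above, it's just a loop over /sys/kernel/iommu_groups (nothing Proxmox-specific, something along these lines):

#!/bin/bash
# print every PCI device, grouped by IOMMU group, lowest group first
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}"
    for d in "$g"/devices/*; do
        echo "  $(lspci -nns "${d##*/}")"
    done
done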
Curious if it's possible to dice up a 4x4 NVMe card and pass only two of the drives through. They come up as isolated IDs, so I assume I could, but Proxmox crashes when I try to start the VM. It works if I pass all four: I can see all four in the VM. But I'd rather assign just a pair if I can.
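For context, I'm attaching them as plain hostpci entries, one per drive, along these lines (the VMID and the PCI addresses are placeholders, not my exact values):

# attach two of the four NVMe drives to the guest
qm set 100 -hostpci0 0000:c2:00.0,pcie=1
qm set 100 -hostpci1 0000:c3:00.0,pcie=1

The pcie=1 part assumes a q35 machine type; with the default i440fx machine it has to be left off.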
This card is actually vGPU-capable, maybe just not "supported". I was able to get it running. I also have a P4 and a T4, so let's try to be helpful instead of writing things off as "does not matter"...
What is the "display mode" exactly? I dont see much reference for that on the article. I'm using Ubuntu 22.04 Server with a P2200 and getting no output from:
nvidia-smi vgpu
No supported devices in vGPU mode
or
mdevctl types
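For anyone else hitting the same wall, the sanity checks I know of are roughly these (the PCI address is a placeholder for whatever lspci reports for the P2200, and none of it means anything unless the vGPU host driver, not the plain CUDA driver, is the one loaded):

# is the vGPU/mdev machinery present at all?
lsmod | grep -i vgpu
dmesg | grep -iE 'vgpu|mdev'

# placeholder address -- swap in the P2200's ID from "lspci | grep -i nvidia"
ls /sys/bus/pci/devices/0000:af:00.0/mdev_supported_types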
You're the man. Thank you so much for this. Should be tagged or something.
There is one strange artifact, however. When logging into the broken node (the one I ran the above on), it still shows some metadata from the other node:
Cluster information
-------------------
Name: proxmox
Config Version: 3
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Tue Dec 7 22:32:26 2021
Quorum provider: corosync_votequorum
Nodes: 1
Node ID...
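If anyone else sees the same leftover, the two places I'd look are the corosync config that pvecm is still reading and the old node's directory in the cluster filesystem; "pve2" below is just a placeholder for the removed node's name:

# where a stale cluster name / node entry usually survives
cat /etc/pve/corosync.conf
ls /etc/pve/nodes/

# deleting the old node's directory also removes any guest configs stored
# under it, so copy out anything you still need first
rm -rf /etc/pve/nodes/pve2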
I ran this on the node that I can't log in to via the UI:
Cluster information
-------------------
Name: proxmox
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Tue Dec 7 20:52:13 2021
Quorum provider...
I set up a second node and temporarily added it to a cluster. I then removed it via the docs and am trying to log in to it as a standalone. I can SSH in as root fine, but I can't log in as root under PAM via the console UI. I have tried the standard things like resetting the password and such, but it...
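For anyone asking which docs: the separation part was essentially the "separate a node without reinstalling" sequence from the admin guide, run on this node, roughly:

systemctl stop pve-cluster corosync
pmxcfs -l                      # bring the cluster filesystem up in local mode
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster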
After sort of getting bored, I wanted to try something new. I wasn't super happy with a 3-node Ceph cluster even on a 10GbE storage backend, although the flexibility was very nice. Things still felt slightly laggy, so I decided to venture out. Prior to moving on I was running OMV 5.x and...