Oh damn!
I don't think it will work, but I shall try this on another host next week once I get my hands on another GPU.
I got it working with kernel 6.5.13.3 - it works straight away.
Oh yes, I did all of that... no luck.
But I have an AMD EPYC server, so IOMMU is enabled by default; I have still put amd_iommu=pt.
But after enabling the VFs I get write errors.
If I check the IOMMU:
dmesg | grep -e DMAR -e IOMMU
[ 3.467959] pci 0000:c0:00.2: AMD-Vi: IOMMU performance counters...
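For reference, these are roughly the checks I use to confirm IOMMU and passthrough are actually active (your output will differ; the exact options you expect on the command line depend on your setup):

cat /proc/cmdline                                   # confirm the iommu options made it into the running kernel
dmesg | grep -i -e AMD-Vi -e IOMMU                  # look for "AMD-Vi" / interrupt remapping messages
find /sys/kernel/iommu_groups/ -mindepth 1 -maxdepth 1 -type d | wc -l   # a non-zero count means IOMMU groups exist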
I had tried all this, the exact same steps, but my mdevctl types output is empty.
Now I think the only option left is to go back to kernel 6.5 if this isn't going to work.
I confirm that on Proxmox 8.2.4 with kernel 6.8.8-2-pve the 550.90.05 driver compiles successfully and nvidia-smi reports the card as working, but
mdevctl types output is empty, even though it's an A40 GPU, which is vGPU capable.
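For anyone stuck at the same point, these are roughly the things I check when mdevctl types comes back empty (the 0000:XX:00.0 address is a placeholder for your GPU's address, and the service names are as they appear on my install):

lspci -d 10de: -nn                                        # locate the NVIDIA GPU's PCI address
systemctl status nvidia-vgpud nvidia-vgpu-mgr             # the vGPU host services should be active
ls /sys/bus/pci/devices/0000:XX:00.0/mdev_supported_types # placeholder address; missing/empty means no mdev types were registered
dmesg | grep -i -e nvidia -e vgpu                         # driver errors usually show up here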
Nobody with any ideas?
I even tried increasing the PG count to 512, but strangely it only sets it to an odd number, 346, automatically
(I put 512 based on the Ceph PG Calculator).
Rebuild speed continues to be slow, over 2 days now. With NVMe drives this is very slow.
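In case it helps anyone else, these are roughly the commands I have been poking at (the pool name in angle brackets is a placeholder; the PG autoscaler may be what keeps overriding the 512, and newer Ceph with the mClock scheduler may ignore the recovery tunables):

ceph osd pool get <pool> pg_num
ceph osd pool autoscale-status                      # check whether the autoscaler is managing pg_num
ceph osd pool set <pool> pg_autoscale_mode off      # only if you really want manual PG control
ceph osd pool set <pool> pg_num 512
ceph config set osd osd_max_backfills 4             # cautiously raise backfill concurrency
ceph config set osd osd_recovery_max_active 8       # cautiously raise recovery concurrency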
Yes, it will work, but Ceph hates RAID cards and likes the disks exposed directly. I'm not sure whether your card is going to pass these disks through as native disks, like in IT mode. I would keep all of them in IT mode (convert the card to IT mode, not RAID mode; there are enough articles available, Google for it) -...
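A quick way to see whether the controller is presenting the disks natively (the device name is a placeholder):

lsblk -o NAME,MODEL,SERIAL,TRAN,SIZE     # native disks should show their real model and serial, not a RAID volume name
smartctl -i /dev/sdX                     # placeholder device; full SMART data passing through is a good sign of IT/HBA mode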
I got a reply in another thread: it's due to the RBD plugin not supporting offline export, as TPM state is handled by a different service, and this will be handled in an upcoming release.
Until then, we export the VM using backup and restore it in the new Proxmox cluster.
Non-TPM VMs are getting migrated...
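The backup-and-restore path is the usual vzdump/qmrestore flow; a rough sketch, with IDs, paths and storage names as placeholders:

vzdump <vmid> --storage <backup-storage> --mode stop --compress zstd                        # on the old cluster
qmrestore /path/to/vzdump-qemu-<vmid>-<timestamp>.vma.zst <vmid> --storage <rbd-storage>    # on the new cluster, after copying the archive over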
Hi
We have an EPYC 9554-based cluster with 3x WD SN650 NVMe PCIe Gen4 15.36TB drives in each server, all backed by 100G Ethernet on Mellanox 2700 CX4 cards.
Output of lspci:
lspci | grep Ethernet
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA...
Hi Thomas,
Looks like your disk is OK. Is inter-VM copy on the same host OK?
Make a VM and try to copy between the VMs - it should be consistent at approximately 100 MB/s, as you have a 1 Gbps card (without drops).
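If you want to rule out the network itself, a quick raw-throughput test between the two VMs (iperf3 installed in both; the IP is a placeholder):

iperf3 -s                  # on VM A
iperf3 -c <vm-a-ip> -t 30  # on VM B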
There is no problem.
It's a confusion of bits and bytes.
Data transfer during a copy is measured in bytes - big B.
Network speed is measured in bits - small b.
Divide 960 Mb/s by 8 to get the byte value = 120 MB/s, and you are getting 110 MB/s, so it's OK.
The speed drop is a different problem and needs to be...
Hi Team,
Proxmox-to-Proxmox migration, from the old Proxmox's RBD to the new Proxmox's RBD, is supported now, as I managed to do it. It works flawlessly. Moving a VM from one cluster to another is so seamless that users will not even realize they were moved from one cluster to another. Both clusters have...
I think RBD to RBD is supported now, as I managed to do it just now for 1 VM.
Both clusters have Ceph running, and the VM on RBD successfully migrated to the new Proxmox cluster, again on Ceph.
Wonderful.
But another VM with a TPM state did not work - any ideas for this?
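For anyone else trying this: one way to do the cross-cluster move is qm remote-migrate (still marked experimental last I checked); everything in angle brackets below is a placeholder for your own IDs, token, bridge and storage names, so check man qm for the exact syntax on your version:

qm remote-migrate <vmid> <target-vmid> 'host=<target-node>,apitoken=PVEAPIToken=<user>@pam!<tokenid>=<secret>,fingerprint=<target-fingerprint>' --target-bridge <bridge> --target-storage <rbd-storage> --online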