Search results

  1. NVIDIA vGPU + Grid on Proxmox 8

    Oh damn! I don't think it would work, but I shall try this on another host next week once I get my hands on another GPU. I got it working with kernel 6.5.13-3; it works straightforwardly.
  2. NVIDIA vGPU + Grid on Proxmox 8

    Oh yes, I did all of that... no luck. I have an AMD EPYC server, so IOMMU is enabled by default, but I still added amd_iommu=pt. After enabling the VFs I get write errors. If I check IOMMU with dmesg | grep -e DMAR -e IOMMU: [ 3.467959] pci 0000:c0:00.2: AMD-Vi: IOMMU performance counters...
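
    A minimal verification sketch for this setup (the GPU's PCI address, the VF count and the sriov-manage path are assumptions; adjust them to your host):

      cat /proc/cmdline                # confirm amd_iommu=pt made it onto the kernel command line
      dmesg | grep -e DMAR -e IOMMU    # confirm the IOMMU initialized cleanly
      # enable the SR-IOV VFs via the helper shipped with the NVIDIA vGPU host driver
      /usr/lib/nvidia/sriov-manage -e ALL
      # or set a VF count by hand (PCI address and count are illustrative)
      echo 4 > /sys/bus/pci/devices/0000:c1:00.0/sriov_numvfs
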
  3. NVIDIA vGPU + Grid on Proxmox 8

    I had tried all this, the exact same steps, but my mdevctl types output is empty. Now I think the only way left is to go back to kernel 6.5, if this is not going to work.
  4. NVIDIA vGPU + Grid on Proxmox 8

    I confirm that on Proxmox 8.2.4 with kernel 6.8.8-2-pve, driver 550.90.05 compiles successfully and nvidia-smi reports the card working, but the mdevctl types output is empty even though it's an A40 GPU - vGPU capable.
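
    When mdevctl types comes back empty, a quick diagnostic sketch like this may narrow it down (the service name is from the vGPU host driver package; treat the paths as assumptions for your install):

      mdevctl types                            # list mediated device types the kernel exposes
      ls /sys/class/mdev_bus 2>/dev/null       # devices registered on the mdev bus, if any
      systemctl status nvidia-vgpu-mgr.service # is the vGPU manager service running?
      dmesg | grep -i vgpu                     # scan for driver errors
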
  5. Ceph Add capacity - Slow rebalance

    Nobody with any ideas? I even tried increasing the PG count to 512, but strangely it only sets it to an odd number, 346, automatically (I chose 512 based on the Ceph PG Calculator). Rebuild speed continues to be slow after over 2 days. With NVMe drives this is very slow.
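
    If the PG autoscaler is overriding the requested pg_num, a sketch along these lines may help (the pool name is a placeholder and the backfill/recovery values are illustrative, not recommendations; on recent Ceph releases the mclock scheduler may cap recovery settings):

      ceph osd pool autoscale-status              # is the autoscaler adjusting pg_num behind your back?
      ceph osd pool set mypool pg_autoscale_mode off   # pin the PG count yourself (pool name hypothetical)
      ceph osd pool set mypool pg_num 512
      # allow more concurrent backfill/recovery work per OSD
      ceph config set osd osd_max_backfills 4
      ceph config set osd osd_recovery_max_active 8
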
  6. Hardware for Proxmox Cluster with CEPH Storage

    Yes, it will work, but Ceph hates RAID cards and likes the disks exposed directly. I'm not sure your card is going to pass these disks through as native disks like in IT mode. I would keep all of them in IT mode (convert the card to IT mode, not RAID mode; enough articles are available, google for it -...
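
    One quick way to check whether the disks are exposed natively (the device name and disk index are illustrative):

      # in IT/HBA mode the OS talks to the disk directly, so SMART works as-is
      smartctl -i /dev/sda
      # behind a MegaRAID-based controller you typically need a passthrough flag instead
      smartctl -i -d megaraid,0 /dev/sda
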
  7. Proxmox to Proxmox Cluster Migration

    https://forum.proxmox.com/threads/remote-migration-of-cold-vm-eerror-no-export-formats-for-vm-pool-vm-9999-disk-0-check-storage-plugin-support.123048/#post-678088
  8. Proxmox to Proxmox Cluster Migration

    I got a reply in another thread: it's due to the RBD plugin not supporting offline export, as TPM state is handled by a different service, and this will be handled in an upcoming release. Until then we export the VM using backup and restore it in the new Proxmox cluster. Non-TPM VMs are getting migrated...
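
    A minimal sketch of that backup-and-restore workaround (the VMID, storage names and archive path are placeholders):

      # on the old cluster: stop-mode backup of the TPM VM
      vzdump 9999 --storage local --mode stop
      # copy the resulting archive to the new cluster, then restore it there
      qmrestore /var/lib/vz/dump/vzdump-qemu-9999.vma.zst 9999 --storage ceph-pool
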
  9. Ceph Add capacity - Slow rebalance

    Hi, we have an EPYC 9554 based cluster with 3x WD SN650 NVMe PCIe Gen4 15.36TB drives in each server, all backed by 100G Ethernet on Mellanox 2700 CX4 cards. Output of lspci:
    lspci | grep Ethernet
    03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA...
  10. Remote migration of cold VM - EERROR: no export formats for 'VM-pool:vm-9999-disk-0' - check storage plugin support!

    qm config 9997
    -----------------------
    agent: 1
    balloon: 0
    bios: seabios
    boot: order=scsi0;ide0;net0
    cores: 4
    cpu: host
    description: Server 2022
    ide0: none,media=cdrom
    machine: pc-q35-8.1
    memory: 16384
    meta: creation-qemu=8.1.2,ctime=1705037235
    name: Win2022-Office
    net0...
  11. Slow read from WinServer

    I don't know, but I hate the RAID cards in HP servers; they are bad in my view.
  12. Slow read from WinServer

    Hi Thomas, looks like your disk is OK. Is inter-VM copy on the same host OK? Make a VM and try to copy between the VMs - it should be consistent at approx. 100 MB/s, as you have a 1 Gbps card (without drops).
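
    To take the disks out of the equation, the same inter-VM check can be done with iperf3 (the IP is a placeholder):

      # in the first VM: start a listener
      iperf3 -s
      # in the second VM: test against the first VM's IP for 30 seconds
      iperf3 -c 192.168.1.50 -t 30
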
  13. Slow read from WinServer

    Give us screenshots of the CrystalDiskMark output in the VM and of hdparm -Tt /dev/sda. If you don't have hdparm, run apt install hdparm -y.
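
    For reference, the host-side check is (device name is illustrative):

      apt install hdparm -y    # only if hdparm is missing
      hdparm -Tt /dev/sda      # -T: cached reads (RAM), -t: buffered disk reads
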
  14. Slow read from WinServer

    Move the VM to a different disk - like the HDD - and try to do the same there too, and update us regarding the speed there.
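
    The move can also be done from the CLI; a sketch, assuming VMID 100, disk scsi0 and a storage named local-hdd (all placeholders):

      # move the VM's disk image onto the HDD-backed storage
      qm move-disk 100 scsi0 local-hdd
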
  15. Slow read from WinServer

    There is no problem; it's confusion between bits and bytes. Data transfer during a copy is measured in bytes (big B). Speed is measured in bits (small b). Divide 960 Mb/s by 8 to get the byte value = 120 MB/s, and you are getting 110 MB/s, so it's OK. The speed drop is a different problem and needs to be...
  16. Proxmox to Proxmox Cluster Migration

    Hi team, Proxmox to Proxmox migration from an old Proxmox RBD to a new Proxmox RBD is supported now, as I managed to do it; it works flawlessly. Moving a VM from one cluster to another is so seamless that users will not even realize they were moved from one cluster to another. Both clusters have...
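
    For reference, the cross-cluster move is driven by qm remote-migrate (still marked experimental); a sketch where the VMIDs, host, API token, fingerprint and names are all placeholders:

      # run on the source cluster
      qm remote-migrate 100 100 \
          'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=<target-cert-fingerprint>' \
          --target-bridge vmbr0 --target-storage ceph-pool --online
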
  17. Remote migration of cold VM - EERROR: no export formats for 'VM-pool:vm-9999-disk-0' - check storage plugin support!

    I think RBD to RBD is supported now, as I managed to do it just now for 1 VM. Both clusters have Ceph running, and the VM on RBD successfully migrated to a new Proxmox cluster, again on Ceph. Wonderful! But another VM with TPM state did not work - any ideas for this?
