Search results

  1. Sizing for an HA cluster with Ceph

    Do let me know the inter-VM performance you get; try an iperf3 run between two VMs: (a) on the same host, (b) across VMs on different hosts. We have a similar setup with dual 100G for Ceph and 10G for LAN. We too run 2x EPYC 64-core = 128 cores and 2 TB of RAM on each host. We get only 2-3 Gbps for...
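
    A minimal iperf3 sketch for the two tests above (the address is a hypothetical placeholder):

        # on the receiving VM (placeholder address 10.0.0.11)
        iperf3 -s

        # on the sending VM: run once against a VM on the same
        # host, then again against a VM on a different host
        iperf3 -c 10.0.0.11 -t 30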
  2. Change disk type (IDE/SATA to SCSI) for existing Windows machine

    Oh yes, I forgot to add this. This will help many.
  3. [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Where? It's simple. Just make an account on the NVIDIA site; it's available for free download. I was looking in the wrong place earlier; it's on the 2nd or 3rd page.
  4. [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Thanks, it worked... I have the required setup up and running.
  5. [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Got the driver up and running, enabled IOMMU, etc., and installed mdevctl too, but no types are listed:

        root@pve:/# mdevctl types
        root@pve:/#
        root@pve:/# nvidia-smi vgpu
        Fri Dec  2 15:12:34 2022
        +-----------------------------------------------------------------------------+
        | NVIDIA-SMI 510.108.03...
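
    A few hedged checks for the "no types listed" case. On Ampere-generation cards the vGPU host driver typically exposes mediated devices only after SR-IOV is enabled via the sriov-manage script it ships; the PCI address below is a placeholder:

        # confirm the vGPU host modules actually loaded
        lsmod | grep nvidia
        dmesg | grep -i -e vgpu -e vfio

        # Ampere GPUs: enable SR-IOV first, then re-check
        /usr/lib/nvidia/sriov-manage -e 0000:01:00.0
        mdevctl types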
  6. [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Got the download... will try and get it up and running, and will update once I have some progress.
  7. [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Hi @dcsapak, the problem is that I have downloaded the latest driver only. From the below (see attached screenshot) we get this driver...
  8. [TUTORIAL] NVIDIA vGPU on Proxmox VE 7.x

    Can anybody help me? I am running an AMD Ryzen 7950X with an A5000, have installed Proxmox 7.3, and have applied the latest updates. I have followed the above and installed the VFIO modules, but when I run the NVIDIA host driver installer I get an error: ERROR: Failed to run `/usr/sbin/dkms build -m nvidia -v...
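
    A DKMS build failure like this usually means the headers for the running kernel are missing; a hedged sketch of the usual first steps (the driver version in the log path is a placeholder):

        # DKMS needs the headers matching the running kernel
        apt update
        apt install pve-headers-$(uname -r)

        # the real compiler error lands in the DKMS build log
        cat /var/lib/dkms/nvidia/510.108.03/build/make.log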
  9. Timeout while starting VM / Snapshotting disk etc...

    Hello, we have a 4-node Proxmox cluster with HPE DL385 Gen10 servers: AMD EPYC 7002 series CPUs, 100G Mellanox cards, a 100G switch, WD SN640 enterprise NVMe SSDs, and 1-2 TB of RAM per host. So the hardware seems to be pretty solid and robust, and we keep the Proxmox hosts updated. Since the past...
  10. Change disk type (IDE/SATA to SCSI) for existing Windows machine

    An easy workaround: add a disk with VirtIO or SCSI; this way you will get the SCSI controller in Windows. Install the drivers and let Windows find the 2nd disk in Disk Management. Once this is done, shut down and change the primary disk (on IDE) to SCSI (by detaching and re-attaching it as SCSI). Now...
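
    The same workaround from the CLI might look roughly like this (VM ID, storage name, and disk volume are placeholders):

        # temporary SCSI disk so Windows installs the controller driver
        qm set 100 --scsihw virtio-scsi-pci --scsi1 local-lvm:1

        # after the driver is in and the VM is shut down: detach the
        # IDE boot disk, re-attach it as scsi0, and fix the boot order
        qm set 100 --delete ide0
        qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0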
  11. How to get audio working via VNC to windows 10 VM?

    Use SPICE as the display adapter and add an audio device to the VM; then, using SPICE, you can get audio. It works; make sure you use a recent version of virt-viewer, like 11.
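
    From the CLI, that configuration might look like this (the VM ID is a placeholder):

        # SPICE display plus an audio device routed over SPICE
        qm set 100 --vga qxl --audio0 device=ich9-intel-hda,driver=spice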
  12. Extremely Slow PBS Speeds

    How does one get these values?

        Uploaded 1567 chunks in 5 seconds.
        Time per request: 3193 microseconds.
        TLS speed: 1313.30 MB/s
        SHA256 speed: 573.20 MB/s
        Compression speed: 862.16 MB/s
        Decompress speed: 1508.35 MB/s
        AES256/GCM speed: 2867.33 MB/s
        Verify speed: 412.48 MB/s
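
    Those figures match the output of the PBS client's built-in benchmark; a hedged sketch (the repository string is a placeholder):

        # runs TLS, hash, compression and crypto micro-benchmarks
        proxmox-backup-client benchmark --repository root@pam@192.168.1.10:datastore1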
  13. High Volume VM Backup

    Hi, congrats on a great release; PBS 2.1 rocks. We have a customer who is planning 500-1000 VMs across many hosts on Ceph with enterprise NVMe drives. Now, if the storage capacity is going to be in excess of 200-300 TB of NVMe, a daily incremental backup could be very time-consuming using HDD...
  14. PBS API Calls

    Firstly, congrats on another great release of PBS. With the native Proxmox integration, PBS is a killer. The file restore function is such a boon, as the ability to restore a single file or a set of files from a VM backup of a particular date is invaluable. Now, if we can have this list available...
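
    For reference, PBS already exposes the snapshot list over its REST API; a hedged curl sketch (host, datastore, and token are placeholders):

        # list all snapshots in a datastore via the PBS API
        curl -k -H 'Authorization: PBSAPIToken=root@pam!mytoken:SECRET' \
            https://192.168.1.10:8007/api2/json/admin/datastore/datastore1/snapshots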
  15. Slow 100G network performance

    OK, let me re-check with iperf2.
  16. Automation system for proxmox

    Custom development; we too tried them and gave up a few years ago.
  17. Slow 100G network performance

    Hi folks, we have 100G Ethernet (Mellanox HP SFP28 100G cards) on each of our Proxmox nodes (running AMD EPYC processors, so PCIe bandwidth is not a problem). We have 6.4 TB SN630 enterprise NVMe SSDs in the nodes on a Ceph cluster, so this is not a bottleneck either, as it's on the 100G link...
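
    A hedged sketch of how such a link is often tested: a single TCP stream rarely fills 100G, so parallel streams and MTU matter (address and interface name are placeholders):

        # multiple parallel streams instead of one
        iperf3 -c 10.0.0.2 -P 8 -t 30

        # jumbo frames help at this speed; verify the MTU end to end
        ip link show dev enp1s0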
  18. Compile Source

    Thanks, Greetz. But how do you know which code is still under testing and which is enterprise-level tested code, since it's under constant development?
  19. Compile Source

    We want to do some customization to Proxmox, for which we are willing to compile the sources. If we take the enterprise subscription, will the sources for the enterprise repo be available for compiling?
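
    For context, the Proxmox sources are published on the project's public git server independently of any subscription; a hedged sketch of fetching and building one package (pve-manager is just an example):

        # every Proxmox package has a repo on git.proxmox.com
        git clone git://git.proxmox.com/git/pve-manager.git
        cd pve-manager

        # most repos build a Debian package from the top-level Makefile
        make deb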