Search results

  1. Proxmox to Proxmox Cluster Migration

    I got a reply in another thread: it's due to the RBD plugin not supporting offline export, as the TPM state is handled by a different service, and this will be handled in an upcoming release. Until then we export the VM using a backup and restore it in the new Proxmox cluster. Non-TPM VMs are getting migrated...
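
    A minimal sketch of that backup-and-restore workaround, assuming the standard vzdump/qmrestore CLI tools; the VM ID, dump directory and target storage name below are placeholders, not values from the thread:

      # On the old cluster: take a full backup of the VM (placeholder VM ID 9999)
      vzdump 9999 --mode stop --compress zstd --dumpdir /mnt/transfer

      # Copy the dump to the new cluster, then restore it onto the target storage there
      qmrestore /mnt/transfer/vzdump-qemu-9999-*.vma.zst 9999 --storage ceph-rbd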
  2. Ceph Add capacity - Slow rebalance

    Hi, we have an EPYC 9554 based cluster with 3x WD SN650 NVMe PCIe Gen4 15.36TB in each server, all backed by 100G Ethernet on Mellanox 2700 CX4 cards. Output of lspci:

    lspci | grep Ethernet
    03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA...
  3. Remote migration of cold VM - ERROR: no export formats for 'VM-pool:vm-9999-disk-0' - check storage plugin support!

    qm config 9997
    -----------------------
    agent: 1
    balloon: 0
    bios: seabios
    boot: order=scsi0;ide0;net0
    cores: 4
    cpu: host
    description: Server 2022
    ide0: none,media=cdrom
    machine: pc-q35-8.1
    memory: 16384
    meta: creation-qemu=8.1.2,ctime=1705037235
    name: Win2022-Office
    net0...
  4. Slow read from WinServer

    I don't know, but I hate the RAID cards on HP servers; they are bad in my view.
  5. Slow read from WinServer

    Hi Thomas, looks like your disk is OK. Is inter-VM copy on the same host OK? Make a VM and try to copy within the VMs - it should be consistent at approx. 100 MB/s as you have a 1 Gbps card (without drops).
  6. Slow read from WinServer

    Give a screenshot of the CrystalDiskMark output in the VM and of hdparm -Tt /dev/sda. If you don't have hdparm, do apt install hdparm -y.
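
    A minimal sketch of those host-side commands, assuming a Debian-based Proxmox host and that /dev/sda is the disk in question:

      # Install hdparm if it is not already present
      apt install hdparm -y

      # -T measures cached (buffer-cache) reads, -t measures buffered disk reads
      hdparm -Tt /dev/sda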
  7. Slow read from WinServer

    Move the VM to a different disk - like the HDD - and try to do the same there too, and update us regarding the speed there.
  8. Slow read from WinServer

    There is no problem; it's confusion between bits and bytes. Data transfer during a copy is measured in bytes - big B. Link speed is measured in bits - small b. Divide 960 Mb/s by 8 to get the value in bytes = 120 MB/s, and you are getting 110 MB/s, so it's OK. The speed drop is a different problem and needs to be...
  9. Proxmox to Proxmox Cluster Migration

    Hi Team, Proxmox to Proxmox migration from old Proxmox RBD to new Proxmox RBD is supported now, as I managed to do it. It works flawlessly. Moving a VM from one cluster to another is so seamless that users will not even realize they were moved from one cluster to another. Both clusters have...
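
    For reference, the cross-cluster move can be driven with the (still experimental) qm remote-migrate command; the VM ID, API token, host, fingerprint, bridge and storage names below are placeholders, not values from the thread:

      # Migrate VM 9999 to the new cluster, mapping it onto the remote bridge and RBD storage
      qm remote-migrate 9999 \
        'apitoken=PVEAPIToken=root@pam!migrate=SECRET,host=203.0.113.10,fingerprint=AA:BB:...' \
        --target-bridge vmbr0 --target-storage ceph-rbd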
  10. Remote migration of cold VM - ERROR: no export formats for 'VM-pool:vm-9999-disk-0' - check storage plugin support!

    I think RBD to RBD is supported now, as I managed to do it just now for 1 VM. Both clusters have Ceph running, and the VM on RBD successfully migrated to a new Proxmox cluster on Ceph again. Wonderful, but another VM with a TPM state did not work - any ideas for this?
  11. Slow moving of storage

    Hi Falk, I agree that the Mellanox ConnectX-4 is a PCIe Gen3 card, and even though I have a PCIe Gen5 board it will run at Gen3 speeds, but at x16 that is 15.7 GByte/s, which is more than 100 Gbit/s, so I am not sure that's the bottleneck; it's somewhere else.
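
    As a rough sanity check of those figures (standard PCIe and Ethernet numbers, not taken from the thread):

      PCIe Gen3: ~985 MB/s per lane x 16 lanes ≈ 15.75 GByte/s
      100 Gbit/s Ethernet: 100 / 8 = 12.5 GByte/s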
  12. Slow moving of storage

    We had received them as IB and had changed them to Ethernet, and thanks for letting me know that we can only get up to 64 Gbit per port, so that's approx. 6400 MByte/s, and it's 4 channels (25G * 4 = 100G) internally, so each channel should be limited to 1800 MB/s * 4 channels, but we are still nowhere near...
  13. Slow moving of storage

    Will do that tonight. VLAN 1000 is so that we have the storage network segmented and nobody unauthorized has access to it.
  14. Slow moving of storage

    Ceph config:

    [global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 10.10.10.112/24
    fsid = 24cd5cef-ad3f-44a6-a257-a5f6a7080180
    mon_allow_pool_delete = true
    mon_host = 10.10.10.112 10.10.10.217 10.10.10.117...
  15. Slow moving of storage

    Network config of one of the hosts. We have 2 more hosts with different IPs. We will have 5-6 nodes with 5-6 NVMe SSDs per node eventually, and more as the workload increases.
  16. Slow moving of storage

    Hi Falk, thanks for your reply. We are in the process of migrating from the old Proxmox cluster (EPYC 7002 based) to the new one (EPYC 9554 based), so as we keep getting the workload moved to the new cluster we will move the SSDs from the old cluster to the new one. These SSDs are WD SN650 15.36TB...
  17. Slow moving of storage

    Hello team, I am on the latest version of Proxmox with the enterprise repo. We are moving the disks from a ZFS storage to a Ceph storage, all on enterprise NVMe PCIe 4.0 SSDs, 2 per node = 6 SSDs, with a 100G Ethernet network, so the IO should be high and it should not be an issue...
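
    For what it's worth, a single-disk move of this kind can also be driven from the CLI; a minimal sketch, assuming VM ID 100, disk scsi0 and a target Ceph storage named ceph-rbd (all placeholders):

      # Move the VM's scsi0 disk from its current (ZFS) storage to the Ceph storage;
      # --delete removes the source copy after a successful move
      qm disk move 100 scsi0 ceph-rbd --delete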
  18. Spice with Windows 10 - Mouse Does Not Work

    Just set "tablet: 1" on a new line in the file /etc/pve/qemu-server/<vmid>.conf and it will all be OK. Also get the VMware mouse drivers; it just works perfectly after that.
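
    The same setting can also be applied from the CLI instead of editing the file by hand; a minimal sketch, assuming VM ID 100 as a placeholder:

      # Equivalent to adding "tablet: 1" to /etc/pve/qemu-server/100.conf
      qm set 100 --tablet 1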