Recent content by jsengupta

  1. Enhancement Request

    Enhancement tracker created: https://bugzilla.proxmox.com/show_bug.cgi?id=4244
  2. Enhancement Request

    Currently, the logs generated for events like changing a VM's memory from 2048 to 2080 look like the one provided below. Sep 08 08:24:08 pve-cl2 pvedaemon[1379]: <root@pam> update VM 101: -delete balloon,shares -memory 2080 As visible, it cannot be understood...
  3. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    There are 2 classes of drives in our environment, HDD and SSD, and 2 pools have been created based on the class-based CRUSH replication rule. If I go ahead and benchmark IOPS against the class-based pool, am I not doing it the correct way? What I am thinking so far is as below: rados bench -p...
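Benchmarking each class-based pool separately is a reasonable way to check this. A minimal sketch (the commands require a running Ceph cluster, and the pool names below are placeholders for the actual SSD- and HDD-backed pools):

```shell
# Write benchmark with 4 KiB objects against each class-based pool in turn.
# Pool names are placeholders; substitute the real pool names.
rados bench -p ssd-pool 60 write -b 4096 -t 16 --no-cleanup   # SSD-backed pool
rados bench -p hdd-pool 60 write -b 4096 -t 16 --no-cleanup   # HDD-backed pool

# Remove the benchmark objects afterwards:
rados -p ssd-pool cleanup
rados -p hdd-pool cleanup
```

Comparing the two runs shows whether the class-based CRUSH rule is actually steering writes to the intended device class.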
  4. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    I have additionally tested Proxmox Ceph on my laptop. My laptop is running with an NVMe WDC PC SN530 SDBPMPZ-512G-1101. I am getting around 90,000 write IOPS from my installed Windows 11. I have now spun up 3 Proxmox nodes using Oracle VirtualBox and created a Ceph cluster with a 10GB disk each from...
  5. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Hi, first of all, thank you for your gentle and informative reply. I have once again issued the rados benchmark command. During the execution of the command, the Ceph dashboard gives the following output: However, I cannot figure out how we can monitor the cores from the htop command because of...
  6. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    All of them are in HBA mode. As you said, the only problem is the very low IOPS. That is why we cannot put any databases into this pool. If you take a look at the specification of the SSD that we are using here, you will see that each of the disks will give you 75,000 random write IOPS. We are...
  7. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    The rados benchmark is giving 2500 IOPS, and that is before setting up a VM. Does it really depend on the setting that you have mentioned?
  8. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    This is what we are getting from the SSD pool: root@host3:~# rados bench -p Ceph-SSD-Pool1 10 write --no-cleanup -b 4096 -t 10 hints = 1 Maintaining 10 concurrent writes of 4096 bytes to objects of size 4096 for up to 10 seconds or 0 objects Object prefix: benchmark_data_host5_1675810 sec Cur...
  9. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Hi, we are running a 3-node Proxmox Ceph cluster and getting really low IOPS from the VMs, around 4000 to 5000. Host 1: Server Model: Dell R730xd Ceph network: 10Gbps x2 (LACP configured) SSD: Kingston DC500M 1.92TB x3 Storage Controller: PERC H730 Mini RAM: 192GB CPU: Intel(R) Xeon(R) CPU...
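To separate raw drive performance from Ceph overhead, it can help to measure one SSD directly with fio before blaming the cluster; the usual test for an OSD candidate is synchronous 4 KiB writes, which roughly mimics what BlueStore demands of its journal. A hedged sketch (the target path is a placeholder; pointing fio at a raw device destroys its data):

```shell
# Sync 4 KiB random-write test against a single drive.
# /path/to/testfile is a placeholder; put it on the SSD under test.
fio --name=sync-write-test --filename=/path/to/testfile --size=1G \
    --rw=randwrite --bs=4k --direct=1 --sync=1 --numjobs=1 \
    --iodepth=1 --runtime=60 --time_based --ioengine=libaio
```

Enterprise drives with power-loss protection (such as the DC500M) usually hold high IOPS under --sync=1, while consumer drives often collapse; that difference matters far more to Ceph than the datasheet's random-write figure.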
  10. Backup verification takes 4-5 days to complete

    How can we make verification faster? If we create a ZFS pool with an SSD journal drive and HDDs as the data drives, will the verification jobs be faster? Has anybody tested it this way?
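For context, ZFS has no generic journal drive: an SSD can be added either as a SLOG (which only accelerates synchronous writes) or as a special vdev (which stores metadata, the part a read-heavy PBS verify job hits constantly). A sketch under those assumptions, with placeholder pool and device names:

```shell
# Add a mirrored "special" vdev for metadata; it is pool-critical, so mirror it.
# "tank" and the device paths are placeholders.
zpool add tank special mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b

# A SLOG, by contrast, only helps synchronous writes, not verification reads:
zpool add tank log /dev/disk/by-id/ssd-c
```

Note that a special vdev only serves metadata written after it was added; existing chunks keep their metadata on the HDDs until rewritten.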
  11. Backup verification takes 4-5 days to complete

    That setting is already in place; the question I asked was after applying the settings you mentioned. Does it really matter if we verify backups?
  12. Backup verification takes 4-5 days to complete

    Hi, we are backing up Proxmox VMs with Proxmox Backup Server. There are 10 large VMs with around 2TB of storage on each of them. We do not have any issue backing up the VMs. However, the verification job takes around 4-5 days to complete and creates high IO wait. Sometimes even the...
  13. Changing the CPU type from KVM64 to [host] slows down the VM

    The VMs are working fine now, even after changing the CPU type to [host]. Back then we had a different issue that we did not look into.
  14. Changing the CPU type from KVM64 to [host] slows down the VM

    I want to change the CPU type of the VM from kvm64 (the default) to [host]. But after restarting the VM, it operates slowly and disk IOPS drop. Do I need to perform any extra tweaks? Please note, the VM is running Windows 2016 with VirtIO 0.1.208 (latest) installed.
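For reference, the CPU type can also be changed from the Proxmox CLI; a minimal sketch (VM ID 101 is a placeholder):

```shell
# Switch the VM's CPU type from the kvm64 default to host.
qm set 101 --cpu host

# A full stop/start (not a reboot from inside the guest) is required
# for the new CPU type to take effect:
qm stop 101 && qm start 101
```

With CPU type [host] the guest sees the physical CPU's feature flags, so a slowdown after the change usually points to something else (drivers, power management, or an unrelated issue).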