Search results

  1. J

    Bonded ethernet uplinks broken in pve-common 9.1.1

    Well, I got myself a simple patch, but it's not well tested, so I'm not going to post it yet. At least now I can start those VMs.
  2. Bonded ethernet uplinks broken in pve-common 9.1.1

    Yes, when we use qm start <VMID> we get "no physical interface on bridge 'vmbrN'", and it is not possible to start VMs when using a bond as the physical interface for the vmbr.
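    For context, a typical working bond-as-bridge-port setup in /etc/network/interfaces looks roughly like this (vmbr0, bond0, eno1/eno2, and the addresses are example names, not taken from the thread):

```
# Bond two physical NICs with LACP (example names and addresses)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

# Bridge for the VMs, with the bond as its only physical port
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

    The reported bug is that VM startup fails to find the physical interface behind vmbrN even with a config like the above in place.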
  3. CPU tuning for Windows 11 VMs

    I'm not familiar with commercial hardware, but if you want performance you may need to avoid the E-cores: just pin your VM to the P-cores.
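    A minimal sketch of the pinning idea, assuming a PVE version recent enough to support the affinity option (VMID 100 and the core range are placeholders; check which logical cores are P-cores on your CPU first, e.g. with lscpu --all --extended):

```
# Pin all of VM 100's vCPU threads to logical cores 0-7
# (substitute your actual P-core IDs)
qm set 100 --affinity 0-7
```

    This restricts the QEMU vCPU threads to the listed host cores, so the scheduler never lands them on E-cores.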
  4. CPU tuning for Windows 11 VMs

    Unfortunately it's not working like that.
  5. Migrating Windows VM to new PVE cluster

    Yeah, I heard about that too.
  6. Pointers on shared storage

    Thanks for mentioning the single point of failure issue. In my situation all the iSCSI and NFS services come from a DSS, which takes care of the HA problem.
  7. Migrating Windows VM to new PVE cluster

    If it is volume licensing, I think it will be fine. I'm not quite sure what kind of license we are talking about.
  8. Pointers on shared storage

    I just need more information. As you mentioned before, you run 6 ESXi hosts, so I guess it's not a vSAN cluster? Correct me if I'm wrong. There is still something missing about your workload: you have around 20 VMs, but what kind of bandwidth or IOPS are we talking about? According to your...
  9. how to migrate vms to another pve cluster

    Wow, really looking forward to that.
  10. how to migrate vms to another pve cluster

    Yeah, I've known that for ages, and it seems to be the only smart way to do this.
  11. how to migrate vms to another pve cluster

    I need to move several VMs from one PVE cluster to another. Are there any tricks I can use instead of backup and restore?
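    One option worth checking here, sketched under the assumption of a recent PVE release (the feature is still marked experimental in the Proxmox docs): qm remote-migrate can move a VM between clusters over the API without a backup/restore cycle. The VMIDs, hostname, token, storage, and bridge below are all placeholders:

```
# Migrate VM 100 to another cluster, keeping VMID 100 on the target
# (host, API token, fingerprint, storage and bridge are placeholders)
qm remote-migrate 100 100 \
  'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=XX:XX:...' \
  --target-storage local-zfs --target-bridge vmbr0 --online
```

    Verify the exact option names against your installed version's qm man page before relying on this.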
  12. [SOLVED] Imported ESXi CentOS VM does not boot

    I've hit this once; it's a BIOS vs. UEFI kind of problem. You might look into that: make sure you have the same firmware configuration (BIOS or UEFI) between ESXi and PVE, and also make sure you set the correct boot order ;)
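    The firmware and boot-order checks above map to a couple of qm commands (VMID 100 and disk scsi0 are placeholders; only switch to OVMF if the guest was actually installed in UEFI mode on ESXi, and note that OVMF also wants an EFI disk):

```
# Inspect the VM's current firmware and boot settings
qm config 100 | grep -E 'bios|boot'

# Match the firmware to what the guest used on ESXi
qm set 100 --bios ovmf            # UEFI (the PVE default is seabios)

# Boot from the imported disk first
qm set 100 --boot order=scsi0
```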
  13. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Also, I noticed you have one H730 Mini and two HBA330 Minis. Did you put them all in HBA mode yet? RAID mode has bad performance for Ceph.
  14. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Well, I have pretty much the same situation and I don't think the IOPS are an issue for me. Is there anything actually going wrong for you, or does the low IOPS number just make you wonder why?