Recent content by mrbeanzg

  1. ISCSI disk status unknown

     Sorry, here is the output: root@proxsvr11:~# iscsiadm -m node 172.30.50.20:3260,21 iqn.2000-05.com.3pardata:20210002ac00635f 172.30.50.21:3260,22 iqn.2000-05.com.3pardata:20220002ac00635f 172.30.50.22:3260,121 iqn.2000-05.com.3pardata:21210002ac00635f 172.30.50.23:3260,122...
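The node records in the output above can be logged into again with `iscsiadm` once the network is back. A minimal sketch, using the first portal/IQN pair from the output above (the remaining pairs follow the same pattern):

```shell
# Log in to one recorded node (portal and IQN taken from the
# `iscsiadm -m node` output above):
iscsiadm -m node -T iqn.2000-05.com.3pardata:20210002ac00635f \
    -p 172.30.50.20:3260 --login

# Or log in to every node recorded in the node database at once:
iscsiadm -m node --loginall=all

# Verify that the sessions are actually up:
iscsiadm -m session
```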
  2. ISCSI disk status unknown

     Sorry for the late reply. The 10gb switches were dead; I have replaced them and now I need to restore these two virtual disks. The disks are on the storage, but I cannot see them on the Proxmox cluster. What can I do now? This is from storage.cfg; the affected disks are not in the file. dir: local path...
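Once the iSCSI sessions are re-established, the LVM layer on top of the LUNs usually has to be rescanned and re-added to the cluster storage config. A hedged sketch of that sequence — the storage ID and volume group name below are placeholders, not taken from this thread:

```shell
# Rescan active iSCSI sessions so the LUNs reappear as block devices
iscsiadm -m session --rescan

# Re-detect LVM metadata on the rediscovered LUNs
pvscan
vgscan
vgchange -ay            # activate all volume groups

# Re-add the LVM storage to the cluster config
# ("san-lvm" and "san-vg" are placeholder names)
pvesm add lvm san-lvm --vgname san-vg --content images
pvesm status
```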
  3. ISCSI disk status unknown

     Hi all, I need help. Overnight some of my iSCSI LVM disks started reporting status unknown. I removed them from the cluster storage setup and now I cannot reattach them to the cluster. Can someone help with this? I have 4 nodes in a cluster connected to an HP 3PAR over an iSCSI 10gb network. Best regards
  4. Windows memory performance low

     I also tried installing Windows 2019 instead of Proxmox. Same thing. Clearly the HP DL380 has some bug in its NUMA settings that I cannot find. I have tried every single profile in the list in the BIOS and nothing changed.
  5. Windows memory performance low

     I have tried that and it didn't work, so basically I'm out of options.
  6. Windows memory performance low

     Thanks for the fast reply. Yes, I mean a second physical CPU. I have tried setting NUMA to clustered and flat; the performance is the same.
  7. Windows memory performance low

     OK, the new insight is that if I remove the second processor, the memory throughput is doubled. Has any of you had that kind of problem? I know this isn't a Proxmox problem anymore, but someone might help. Thanks
  8. Windows memory performance low

     Hi all, I have multiple Windows 10/11 installations and the performance isn't great. The first thing I noticed is memory performance. The best score I get is around 3500 MB/s in a VM, while the host gets 24000 MB/s. I have tried setting different CPU types and flags, NUMA, ballooning on/off...
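The host-vs-guest gap described above can be narrowed down by checking the host's NUMA topology and exposing NUMA to the guest. A sketch only — the benchmark tool and the VMID below are assumptions, not from the thread:

```shell
# Show host NUMA topology (nodes, per-node memory, CPU assignment)
numactl --hardware

# Re-run a memory benchmark pinned to a single node for comparison
# (sysbench used here as an example tool)
numactl --cpunodebind=0 --membind=0 sysbench memory run

# Expose NUMA to the guest so Windows can schedule memory accordingly
# (100 is a placeholder VMID)
qm set 100 --numa 1
```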
  9. [SOLVED] Problems after upgrade from pve 6 to pve 7

     Hi all, just to report that the upgrade to pve-qemu-kvm: 6.1.0-3 works. High-load Windows VMs don't crash anymore. Best regards
  10. [SOLVED] Problems after upgrade from pve 6 to pve 7

     Thanks Fabian. Do I need to reboot the server after the upgrade?
  11. [SOLVED] Problems after upgrade from pve 6 to pve 7

     This is what I have: root@prsvr5:~# pveversion -v proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve) pve-manager: 7.1-4 (running version: 7.1-4/ca457116) pve-kernel-5.13: 7.1-4 pve-kernel-helper: 7.1-4 pve-kernel-5.13.19-1-pve: 5.13.19-2 ceph-fuse: 15.2.15-pve1 corosync: 3.1.5-pve2 criu...
  12. [SOLVED] Problems after upgrade from pve 6 to pve 7

      The thread that you linked has the same errors as I have. How can I check the version and upgrade if needed? Thanks
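Checking and upgrading the QEMU package on a PVE node can be sketched as follows (standard Proxmox/apt commands; run on each affected node):

```shell
# Check the installed QEMU package version
pveversion -v | grep pve-qemu-kvm

# Pull the latest package lists and upgrade
apt update
apt install pve-qemu-kvm

# Running VMs keep using the old QEMU binary until they are
# stopped and started again (or live-migrated to an upgraded node);
# a full host reboot is not required for a QEMU-only upgrade.
```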
  13. [SOLVED] Problems after upgrade from pve 6 to pve 7

      This is the only error recorded when the VM becomes unresponsive: Jan 10 20:00:34 prsvr5 pvedaemon[341452]: VM 115 qmp command failed - VM 115 qmp command 'guest-network-get-interfaces' failed - got timeout. What can I do about that?
  14. [SOLVED] Problems after upgrade from pve 6 to pve 7

      Hi all, I have a 4-node cluster connected to an HPE 3PAR via 10gb Nexus network switches. These 4 nodes are at version pve-manager/6.0-4. I have added a fifth server to the same cluster with version pve-manager/7.1-4. If I migrate a high-load Windows VM to that fifth server, after some time ( random times ...