Search results

  1. Serious performance and stability problems with Dell Equallogic storage

     Just to update... I got a great improvement in queue depth by disabling LRO/GRO on the iSCSI interfaces of the Proxmox host. The main problem is that Debian has no way to disable delayed ACK on TCP transmission, so disabling the hardware acceleration options in...
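The offload tuning mentioned above can be sketched with ethtool; the interface names eth2/eth3 are placeholders for the iSCSI ports, adjust to your setup:

```shell
# Show the current offload state of an iSCSI interface:
ethtool -k eth2 | grep -E 'large-receive-offload|generic-receive-offload'

# Disable LRO and GRO on both iSCSI interfaces (root required).
# These offloads coalesce received segments, which can interact badly
# with iSCSI multipath latency on some arrays:
ethtool -K eth2 lro off gro off
ethtool -K eth3 lro off gro off
```

Note this does not survive a reboot; on Debian/Proxmox it is usually hooked into the interface configuration (e.g. a post-up line in /etc/network/interfaces).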
  2. Serious performance and stability problems with Dell Equallogic storage

     Hi, did you solve the Avg Queue Depth issue with Proxmox and EQL? We're seeing the same problems here! Avg queue depth is above 5000, but performance in the guest is good: around 500 MB/s read/write. Firmware 10.0.3 on the EQL.
  3. Proxmox with iSCSI Equallogic SAN

     The HIT Kit is not supported on Debian, only on RHEL or SLES.
  4. Proxmox with iSCSI Equallogic SAN

     Thanks, but in my case it's because of delayed ACK on TCP packets. Disabling it is a requirement for EQL storage, and Linux/Debian doesn't support disabling it.
  5. Proxmox with iSCSI Equallogic SAN

     We are observing only TCP retransmissions, above 5% in some periods. It was higher when the interface MTU was 9000; lowering the MTU to 1500 reduced the retransmissions a bit, but not 100%. I'm also investigating why the MTU causes high TCP retransmission... my...
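When an MTU of 9000 correlates with retransmissions, it's worth verifying that jumbo frames actually pass end-to-end on the storage path; the target IP below is a placeholder:

```shell
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation,
# so the ping fails loudly if any hop on the path cannot carry 9000-byte frames:
ping -M do -s 8972 -c 4 192.168.50.10

# Watch the retransmission counters while testing:
netstat -s | grep -i retrans
```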
  6. Proxmox with iSCSI Equallogic SAN

     Yep, I know, but I could not find it anywhere on Linux/Debian. I'm trying to make it work right now using the iSCSI offload of my Broadcom NIC (57810); with software iSCSI (open-iscsi) I think it's not possible to disable delayed ACK.
  7. Proxmox with iSCSI Equallogic SAN

     Hi, I've set up an environment with Proxmox 6 and an iSCSI Equallogic (Dell) SAN. This kind of storage requires some TCP modifications to work correctly with multipath. One feature that is necessary for this storage to improve iSCSI performance is "Disable DelayACKs" on the hosts connected to the...
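For context on why this is hard on Linux: the kernel only exposes a per-socket TCP_QUICKACK option, and the stack may fall back to delayed ACKs again after some traffic, so it must be re-set repeatedly by the application. That is why an initiator like open-iscsi cannot simply be configured once to disable delayed ACKs. A minimal Linux-only sketch:

```python
import socket

# Build a loopback TCP connection just to demonstrate the option.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# Ask the kernel to ACK immediately instead of delaying/coalescing ACKs.
# Note: this is per-socket and not permanent; the kernel can leave quickack
# mode again, so there is no persistent system-wide "disable delayed ACK" knob.
conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
print(conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK))

cli.close(); conn.close(); srv.close()
```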
  8. RAM Usage

     Yes, even with absolutely nothing running inside the guest, the Linux kernel will use most of the available memory for buffers/cache. You can execute "echo 3 > /proc/sys/vm/drop_caches" inside the guest; you will see the cached memory drop instantly, and the Proxmox GUI will show the...
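The command from the post, as a complete root-shell sequence (a sync first writes dirty pages back before the caches are dropped):

```shell
sync                                # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries+inodes, 3=both
free -h                             # the "buff/cache" column drops immediately
```

This is safe (it only discards clean caches) but it is a diagnostic, not a tuning step; the kernel will refill the cache as soon as there is I/O again.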
  9. RAM Usage

     The Linux kernel will always use all the available memory for caching; that's why the Proxmox GUI always says all memory is in use. This cached memory will be taken back for the Proxmox host by the native ballooning driver inside the guest when the available physical memory is below 20%...
  10. RAM Usage

     The GUI also shows the CACHED guest memory. Execute "echo 3 > /proc/sys/vm/drop_caches" inside the guest, then look at the GUI again!
  11. Proxmox 6 with cache=writeback and modern kernels VMs

     If I lose all of them at the same time, or any single one of them, will I lose data too?
  12. Proxmox 6 with cache=writeback and modern kernels VMs

     Yes, I read this, but there is no answer to my doubt; it's not quite clear in which scenarios we can use writeback and in which we can't!
  13. Adjust CPU graphs based on CPU ratio limit

     Which one do you recommend, without the need to install an agent on the guest side?
  14. 1 SSD and 2 HDD - best storage setup?

     Yep, I meant filesystem extend but wrote partition... I didn't know ext4 supported online extend; I thought only XFS could do that, because I switched to using only XFS a long time ago... Thanks for the clarification!
  15. 1 SSD and 2 HDD - best storage setup?

     I'm using Proxmox clusters on top of XFS without any problems... it's also better than ext4 when you store large files... I think it's a customer decision rather than a Proxmox option. As I said, I would go for MD, but it's up to you! If you use ZFS without a ZIL/L2ARC cache SSD, the ARC...
  16. 1 SSD and 2 HDD - best storage setup?

     XFS is better than ext3/4; it also supports online filesystem changes/extends without a reboot.
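As a sketch of the online extend mentioned above, assuming an LVM-backed filesystem; /dev/vg0/data and the /srv mountpoint are hypothetical names:

```shell
# Grow the underlying logical volume by 10 GiB:
lvextend -L +10G /dev/vg0/data

# XFS: grow the filesystem while it stays mounted (takes the mountpoint):
xfs_growfs /srv

# ext4 equivalent, also online (takes the block device):
# resize2fs /dev/vg0/data
```

Note that both filesystems grow online; XFS, however, cannot be shrunk at all, while ext4 can be shrunk offline.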
  17. 1 SSD and 2 HDD - best storage setup?

     If you are going to use the NVMe as VM storage, install Proxmox on MD/XFS and make daily VM backups to these disks.
  18. 1 SSD and 2 HDD - best storage setup?

     Depending on the NVMe's size, the best option is to use it for VM storage. If it's too small, use it entirely as ZIL/L2ARC cache and keep the VMs on the HDDs.
  19. Shared Storage Comparison

     Regarding snapshots, I was talking about taking guest snapshots from the Proxmox GUI, not on the LUN side, because the only storage type on top of iSCSI that supports VM snapshots is ZFS over iSCSI; thick LVM does not support VM snapshots. About backups, if your NAS is using ZFS as the filesystem...
  20. Shared Storage Comparison

     The best option depends on the hardware/infrastructure you have... If you have enough hardware with local disks and at least one SSD per node, and you are comfortable with the capacity lost to the replication size of a Ceph cluster (usually 3/2), go with Ceph. If you...
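For reference, the usual 3/2 replication mentioned above can be set per pool with standard Ceph commands (the pool name vmpool is a placeholder); with size=3, usable capacity is roughly the raw capacity divided by 3:

```shell
# Create a replicated pool (128 placement groups as an example):
ceph osd pool create vmpool 128

# Keep 3 copies of every object; allow I/O as long as 2 copies are available:
ceph osd pool set vmpool size 3
ceph osd pool set vmpool min_size 2
```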