Search results

  1. VM only uses 50% of CPU

    Hi, I have a Proxmox 7.3.4 host with 4 sockets of 16 cores each (AMD Opteron 6282 SE), 64 cores in total. I created a Windows VM on Proxmox and allocated all 64 cores to this VM to use every CPU available (only 1 VM per host). When I run a CPU stress test in the VM, only 50% of the cores sit at 100%... (see the qm topology sketch after this list)
  2. Proxmox with iSCSI Equallogic SAN

    The 57810S and X710 support NPAR, so the best setup is to use it in your environment.. you can then segregate iSCSI from the LAN using virtual ports... just put the MXL internal ports facing the hosts in "trunk mode", tag all the VLANs you use on these ports, and use two virtual NPAR ports (one from each side of the MXL "A1 and...
  3. Proxmox with iSCSI Equallogic SAN

    Don't your NICs support NIC partitioning? Check this!! We also use the M1000E and our NICs are QLogic 57810S; these NICs support partitioning, so you can allocate 8 virtualized NICs to the host OS instead of two physical ones.
  4. Proxmox with iSCSI Equallogic SAN

    We disabled LRO/GRO only on the iSCSI interfaces and use MTU 9000 on those interfaces; bridges/VLANs use the default MTU. To check whether the options were inherited or not, you can use ethtool to verify it (see the ethtool sketch after this list).
  5. Proxmox with iSCSI Equallogic SAN

    The EQL works as an active/standby failover controller, and the firmware also does vertical port failover in case a switch goes down... the group iSCSI IP is the central portal point of connection for a correct setup.. do not connect directly to a controller port IP. I use the group IP for multipathd, on each... (see the iscsiadm sketch after this list)
  6. Proxmox with iSCSI Equallogic SAN

    Yes, with multipath.. I am using multipathd!!
  7. VM live migration between hosts with different storage config

    Thanks Fabian... I'll follow this bug tracker to see if any improvement is made on this request to allow this kind of storage migration!! Regards,
  8. VM live migration between hosts with different storage config

    Even with the targetstorage option.. after the command is entered the CLI says it started the migration.. but nothing happens afterwards. I think this behavior could easily be handled: the shared flag doesn't mean the storage MUST be shared between all cluster nodes... it should read the "nodes list"... (see the qm migrate sketch after this list)
  9. VM live migration between hosts with different storage config

    Hi, I have a cluster with 4 nodes: 2 access one iSCSI storage and the other 2 access a different storage. Storage A is only shared between hosts A and B, and storage B is only shared between hosts C and D, because they sit in different sites connected by a Fibre Channel network; each site has a storage... (see the storage.cfg sketch after this list)
  10. PVE hosts gray out if an SMB storage mount times out

    yes.. only for backups..

    root@host03-pve:~# cat /etc/pve/storage.cfg
    dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        maxfiles 1
        shared 0

    lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

    cifs: backup-vol01...
  11. PVE hosts gray out if an SMB storage mount times out

    I can do that.. but this has been happening for a while ... ever since version 6.2.1, when we installed this environment...
  12. PVE hosts gray out if an SMB storage mount times out

    The only error/info in the syslog when it happens is below:

    Nov 11 22:24:03 host03-pve kernel: [2418813.594238] CIFS VFS: \\192.168.64.37 has not responded in 180 seconds. Reconnecting...
    Nov 11 22:25:04 host03-pve kernel: [2418875.031053] CIFS VFS: \\192.168.64.37 Send error in SessSetup = -11...
  13. PVE hosts gray out if an SMB storage mount times out

    Hi, we have a PVE cluster with 5 nodes and everything works great!! The only problem is that when an SMB mount (PVE shared storage for VM backups) times out... the whole cluster goes gray... and we can't manage any VM on this cluster.. everything displays as grayed out and stays unavailable for...
  14. VZDUMP slow read over NFS

    So, using NFS for vzdump is probably a bad idea, as you tested!! Have you tried PBS instead of native vzdump?
  15. VZDUMP slow read over NFS

    iSCSI and NFS are two completely different things!! iSCSI is block storage (on PVE it is typically used with LVM on top), while NFS is a completely different, file-based protocol!!
  16. VZDUMP slow read over NFS

    OK, now I understand!! Maybe it's because of the way vzdump executes the backup process.. it first copies the backup to the NFS target and compresses it afterwards... this results in lots of small files being copied to NFS.. lowering the performance of the copy process... When you use another type of... (see the vzdump.conf sketch after this list)
  17. VZDUMP slow read over NFS

    You said you are getting 115 MB/s during backups over a 1 Gbit network.. to me that's almost the maximum you will get from a gigabit network!! Or am I reading something wrong....
  18. VZDUMP slow read over NFS

    Hi, NFS is a network-based transfer protocol.. what is your network speed from the backup server to the NFS server? 115 MB/s is good performance if your network is 1 Gbps (not great in absolute terms, but it's about the limit you can get with 1 Gbps; see the quick calculation after this list)... also keep in mind that performance is not only related to the network...
  19. store to nas

    I tested PBS with CIFS.. it works.. but the performance is very bad for reads/writes... I don't know why!!
  20. VZDUMP slow read over NFS

    Hi, I saw the same bad results using CIFS as backend storage... very slow... using a local disk with ZFS improves backup performance a lot.. but sometimes there are no other options to grow backup storage.. so it's something that should be investigated/validated by the Proxmox team!!
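
For the "VM only uses 50% of CPU" thread, a minimal CLI sketch of adjusting the vCPU topology, assuming a hypothetical VM ID 100. Matching the guest topology to the host's 4 x 16 layout and enabling NUMA is one thing to try; also note that Windows client editions only schedule across at most 2 sockets, so a 4-socket guest topology can leave half the cores idle.

    # Hypothetical VM ID 100 -- mirror the host topology (4 sockets x 16 cores)
    # and enable NUMA so the guest sees all memory nodes
    qm set 100 --sockets 4 --cores 16 --numa 1 --cpu host

    # Windows client editions use at most 2 sockets, so an alternative
    # is fewer sockets with more cores per socket
    qm set 100 --sockets 2 --cores 32

    # Verify the resulting VM configuration
    qm config 100 | grep -E 'sockets|cores|numa|cpu'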
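
For the LRO/GRO and MTU tuning mentioned in the Equallogic thread, a sketch of the per-interface commands involved; the interface name eth2 is a placeholder for the real iSCSI ports.

    # Disable LRO/GRO and raise the MTU on a hypothetical iSCSI interface eth2
    ethtool -K eth2 lro off gro off
    ip link set dev eth2 mtu 9000

    # Verify that the offload settings and MTU actually took effect
    ethtool -k eth2 | grep -E 'large-receive-offload|generic-receive-offload'
    ip link show dev eth2 | grep mtu

To survive reboots, the same commands can be added as post-up lines in the interface's stanza in /etc/network/interfaces.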
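
For the group-IP advice in the same thread, a sketch of discovering and logging in through the EqualLogic group IP and then checking the paths; the portal IP 192.168.64.10 is a placeholder.

    # Discover targets through the group IP, never through an individual
    # controller port IP
    iscsiadm -m discovery -t sendtargets -p 192.168.64.10
    iscsiadm -m node --login

    # Confirm that multipathd sees every expected path to each LUN
    multipath -ll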
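
For the live-migration thread, a sketch of the command shape being discussed; the VM ID, target node, and storage names are placeholders.

    # Live-migrate VM 100 to node hostC, placing its disks on storage-b
    qm migrate 100 hostC --online --targetstorage storage-b

    # If the source disks are on local (non-shared) storage, live migration
    # also needs the local-disks flag
    qm migrate 100 hostC --online --with-local-disks --targetstorage storage-b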
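
For the 4-node, two-site scenario, a sketch of how /etc/pve/storage.cfg can restrict each shared storage to the nodes that can actually reach it; the storage and node names are placeholders.

    # /etc/pve/storage.cfg (excerpt) -- 'nodes' limits a storage definition
    # to the listed hosts, so PVE never expects it on the other site
    lvm: storage-a
        vgname vg_site_a
        content images
        shared 1
        nodes hostA,hostB

    lvm: storage-b
        vgname vg_site_b
        content images
        shared 1
        nodes hostC,hostD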
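
For the vzdump-over-NFS discussion, a sketch of the /etc/vzdump.conf knobs that control where temporary data goes and how compression is done; the values are examples only, and how much they help depends on the backup mode in use.

    # /etc/vzdump.conf (excerpt) -- hypothetical values
    # keep temporary files on fast local storage rather than the NFS target
    tmpdir: /var/tmp/vzdump
    # compress with zstd before the archive lands on the share
    compress: zstd
    # bandwidth limit in KiB/s (0 = unlimited)
    bwlimit: 0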
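
For the two 115 MB/s posts, the quick arithmetic behind "that's roughly line rate for gigabit":

    1 Gbit/s = 1,000,000,000 bit/s / 8          = 125 MB/s raw
    Ethernet + IP + TCP framing overhead        ~ 5-6 %
    usable TCP payload rate                     ~ 117-118 MB/s

So sustaining about 115 MB/s during a backup means the 1 Gbps link is essentially saturated.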
