Hi,
I have a Proxmox 7.3.4 host with 4 sockets of 16 cores each (AMD Opteron 6282 SE), 64 cores in total.
I created a Windows VM on Proxmox and allocated all 64 cores to it, to use every CPU available (only 1 VM per host).
When I run a CPU stress test inside the VM, only 50% of the cores reach 100%...
The 57810S and X710 support NPAR; using it is the best setup for your environment. You can then segregate iSCSI from LAN using virtual ports: just put the MXL internal ports facing the hosts in "trunk mode", tag all the VLANs you use on those ports, and use two virtual NPAR ports (one from each side of the MXL, "A1 and...
Don't your NICs support NIC partitioning? Check this! We also use the M1000e, and our NICs are QLogic 57810S; these NICs support partitioning, so you can present 8 virtualized NICs to the host OS instead of two physical ones.
We disabled LRO/GRO only on the iSCSI interfaces, and use MTU 9000 on those interfaces; bridges/VLANs keep the default MTU. To check whether the options are inherited or not, you can verify with ethtool.
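For reference, the commands involved look roughly like this; it's a sketch assuming the iSCSI port is `eth2` (replace with your actual interface name):

```shell
# Disable large/generic receive offload on the iSCSI interface only:
ethtool -K eth2 lro off gro off

# Raise the MTU to 9000 on that interface (bridges/VLANs keep their default):
ip link set dev eth2 mtu 9000

# Verify the offload settings actually took effect:
ethtool -k eth2 | grep -E 'large-receive-offload|generic-receive-offload'
```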
The EQL works as an active/standby failover controller, and the firmware also does a vertical port failover in case a switch goes down. The group iSCSI IP is the central portal point of connection for a correct setup; do not connect directly to a controller port IP.
I use the group IP for multipathd, on each...
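A minimal sketch of pointing the initiator at the group portal rather than at an individual controller port (the IP below is a placeholder for your group address):

```shell
# Discover targets through the EqualLogic group IP (placeholder address),
# never through a single controller port:
iscsiadm -m discovery -t sendtargets -p 192.168.50.10:3260

# Log in to the discovered targets; the group redirects each session
# to an appropriate controller port behind the scenes:
iscsiadm -m node --login
```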
Thanks Fabian...
I'll follow this bug tracker to see whether any improvement is made on this request to allow this kind of storage migration!
Regards,
Even with the targetstorage option, after the command is entered the CLI says it started the migration, but nothing happens afterwards.
I think this behavior could easily be handled; the shared flag doesn't mean the storage MUST be shared between all cluster nodes... it should read the storage's "nodes" list...
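For context, this is the kind of invocation I mean; a sketch where the VM ID, target node, and storage name are all placeholders:

```shell
# Migrate VM 100 to node pve-b, moving its local disks onto that node's
# local-lvm storage (all names here are placeholders):
qm migrate 100 pve-b --online --with-local-disks --targetstorage local-lvm
```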
Hi,
I have a cluster with 4 nodes: 2 access one iSCSI storage and the other 2 access a different storage. Storage A is only shared between hosts A and B, and storage B is only shared between hosts C and D. This is because they are in different sites connected by a Fibre Channel network, and each site has a storage...
The only error/info in the syslog when it happens is below:
Nov 11 22:24:03 host03-pve kernel: [2418813.594238] CIFS VFS: \\192.168.64.37 has not responded in 180 seconds. Reconnecting...
Nov 11 22:25:04 host03-pve kernel: [2418875.031053] CIFS VFS: \\192.168.64.37 Send error in SessSetup = -11...
Hi,
We have a PVE cluster with 5 nodes and everything works great! The only problem is that when an SMB mount (PVE shared storage for VM backups) times out, the whole cluster view grays out and we can't manage any VM on the cluster; everything displays as grayed out and stays unavailable for...
OK, now I understand! Maybe it's because of the way vzdump executes the backup process: it first copies the backup to the NFS target and zips it afterwards. This results in too many small files being copied to NFS, lowering the performance of the copy process... When you use another type of...
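One way to test that theory is to stage vzdump's temporary files on local disk so only the final compressed archive goes to NFS; a sketch, where the storage name and temp path are placeholders:

```shell
# Back up VM 100, keeping vzdump's temporary files on fast local disk
# and writing only the final compressed archive to the NFS storage
# ("nfs-backup" and /var/tmp are placeholders):
vzdump 100 --storage nfs-backup --compress zstd --tmpdir /var/tmp
```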
You said you are getting 115MB/s during backups over a 1Gbit network; for me that's almost the maximum you will get from a gigabit network! Or am I reading something wrong...
Hi, NFS is a network-based transfer protocol. What is your network speed from the backup server to the NFS server? 115MB/s is good performance if your network is 1Gbps (not great, but it's about the limit you can get with 1Gbps)... Also keep in mind that the performance is not only related to the network...
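The arithmetic behind that ceiling, as a quick sanity check (the 6% overhead figure is an assumed typical value for TCP/IP over Ethernet with a 1500-byte MTU):

```python
# Rough payload ceiling for a file transfer over gigabit Ethernet.
line_rate_bits = 1_000_000_000        # 1 Gbit/s link speed
raw_bytes_per_s = line_rate_bits / 8  # 125 MB/s before any protocol overhead

# Ethernet + IP + TCP headers cost roughly 6% with a 1500-byte MTU
# (assumed figure), leaving ~117-118 MB/s of usable payload.
overhead = 0.06
payload_mb_per_s = raw_bytes_per_s * (1 - overhead) / 1e6

print(f"raw: {raw_bytes_per_s / 1e6:.0f} MB/s, payload: ~{payload_mb_per_s:.1f} MB/s")
```

So 115MB/s is within a few percent of what a gigabit link can deliver at all.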
Hi,
I saw the same bad results using CIFS as the backend storage: very slow. Using a local disk with ZFS improves backup performance a lot, but sometimes there is no other option to grow backup storage, so it's something that should be investigated/validated by the Proxmox team!