Trying to find out if any progress has been made in multi-cluster HA. Like the previous poster, I have multiple DCs. I have 10G at each side and <15ms latency between BUT I don't really love the idea of trying to cluster across that.
Yes, stayed up super late last night to check the BIOS after hours. It's up to date and there are no knobs for memory. The BIOS is 'I20', which is HP's current listed max for this BL260cG5...
What is curious is that this seems to be a throughput issue between the RAM and the CPU. If I scale the threads...
No ECC options in the BIOS
BIOS power savings is set to 'maximum performance'
DIMMS are 6x 4GB DDR2 667
dmidecode:
Handle 0x1100, DMI type 17, 23 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64...
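For anyone checking the same thing on their own box, this is roughly how that info comes out of dmidecode (run as root; the type numbers are standard SMBIOS, the handles will differ per machine):

# type 16 = Physical Memory Array, shows whether ECC is active at the controller
dmidecode -t 16 | grep -i 'error correction'
# type 17 = per-DIMM detail; 72-bit total width vs 64-bit data width means ECC DIMMs
dmidecode -t 17 | grep -iE 'size|total width|data width|speed'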
One pass should be more than enough to reveal performance issues, and I'm not seeing malfunctions from bad bits here.
Took 2 hours 50 minutes for 1 pass on 24GB of ECC RAM. I think that's high, but the only system I could compare it to has DDR4; that one took 44 minutes for 24GB ECC.
This is a bit of an extension of this thread:
https://forum.proxmox.com/threads/kvm-performance-issues.42635/#post-204952
*but*
My identification of the issue was misguided.
What I've found is that memory performance on the host is just 277MB/s. This is pulled from the sysbench --test=memory...
Yeah, I'm seeing that. One of my other Proxmox hosts gives 3465MB/s in KVM and 3922MB/s on the host. I'm only getting 267MB/s in LXC and on the host here; I think my earlier LXC results were not accurate. This server has ECC RAM, so I suspect I have a bad chip and ECC is masking that with slow...
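If ECC really is papering over a flaky chip, the kernel's EDAC counters should be climbing. A quick way to check (assuming the EDAC driver loaded for this chipset; edac-util comes from the edac-utils package):

# corrected / uncorrected error counts per memory controller
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count
# or, with edac-utils installed:
edac-util --report=full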
Is there any setting to keep an LXC container from inheriting the host's 'load' in top/htop?
I was a little shocked to see a load of 1.5 in the container with nothing running, until I figured out what was going on. It can make solving load issues in a container pretty difficult and...
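From what I can tell, /proc/loadavg isn't namespaced, so the container just sees the host's load average; at least with the lxcfs version here it doesn't appear to be one of the virtualised files. An easy way to check from inside the container:

# inside the container: which /proc files come from lxcfs
mount | grep lxcfs
# this still shows the host's load average
cat /proc/loadavg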
No, but I found something peculiar.
running `sysbench --test=memory run`
in LXC I get 277.65MB/s
in KVM I get 41.10MB/s
I did the same tests with CPU and they are nearly identical.
Is there something in KVM that would be limiting memory rates?
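To rule out sysbench defaults skewing the comparison, it may be worth repeating the run with an explicit block size, total size and thread count on the host, in LXC and in KVM (the option names below are the older sysbench 0.4.x syntax, matching the --test=memory form above):

sysbench --test=memory --memory-block-size=1M --memory-total-size=10G --num-threads=4 run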
Proxmox VE 5.1-41
as a cheap benchmark, I'm compiling asterisk on centos7
kvm cpu type = host, numa=on
specs for KVM and LXC vm/container same
In the VM, this takes 25 minutes;
in LXC, it takes 7 minutes.
Any idea where to start to identify this performance gap?
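For anyone wanting to reproduce the numbers, the 'benchmark' is just a timed stock build, roughly like this (the source directory is a placeholder for wherever asterisk was unpacked):

cd /usr/src/asterisk   # placeholder path
./configure
time make -j"$(nproc)"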
I'm actually running asterisk on...
I'm looking for a way to set CPU affinity in proxmox 5. Is this doable?
I'm aware of this thread:
https://forum.proxmox.com/threads/numa-config-option.21313/#post-108514
But I'm not after just NUMA awareness; I want to pin certain VMs to certain CPUs (and keep other VMs off those CPUs).
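One possible workaround (just plain taskset against the running kvm process, not a Proxmox feature, and not persistent across VM restarts) would be something like this, with VMID 100 and the core list as placeholders:

# pin every thread of VM 100's kvm process to CPUs 0-3
taskset --all-tasks --cpu-list -p 0-3 "$(cat /var/run/qemu-server/100.pid)"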
so, ceph w/ 3 monitors and 2 OSD hosting nodes (with 6 OSD per) or DRBD on the 2 main nodes?
Maybe my logic needs a sanity check here, but DRBD with LVM on top *doesn't* need its own quorum because Proxmox will have quorum, yes? There is no chance that the same blocks of data will be written to by both prox...
So after having a ton of issues getting glusterfs working, I'm exploring ceph. My 2 primary hosts have 6 drives each.
My dilemma here is that if I install Proxmox on 2 drives in a mirror, I waste a TON of space and only have 4 drives left per host. That means a total of 8 OSDs with Ceph.
I'm...
All except type 'SATA' throw the error immediately. 'SATA' doesn't right away, but locks up for a few minutes and then throws the error. I have tried all disk bus types with a qcow backend as suggested in the wiki, with the same results. Tried the Ubuntu 16.04 server amd64 installer and Debian 8.5...
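If it helps with debugging, the exact kvm command line Proxmox generates and the error around the lock-up can be grabbed like this (100 is a placeholder VMID):

# print the full kvm command line Proxmox would use for the VM
qm showcmd 100
# watch the syslog while reproducing the lock-up
tail -f /var/log/syslog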
Fresh install, get a 2 node gluster volume going on a dedicated network
cluster network (1G NICs to a switch):
p1 10.100.100.10
p2 10.100.100.11
p3 10.100.100.12 (quorum/management only)
gluster network (10G NICs direct between nodes):
p1g 10.100.101.10
p2g 10.100.101.11
create a volume 'vm' in...
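To make that last step concrete, the volume creation I have in mind looks roughly like this, run from p1 over the 10G addresses (the brick path /bricks/vm is just a placeholder):

# peer over the dedicated gluster network
gluster peer probe 10.100.101.11
# 2-way replicated volume across the direct 10G link
gluster volume create vm replica 2 10.100.101.10:/bricks/vm 10.100.101.11:/bricks/vm
gluster volume start vm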