Hi,
Some nodes never showed this message at all (for example dc-prox-22, dc-prox-24 and dc-prox-26).
For the other nodes, it never seems to show up at the same time, as you can see:
Hello,
I just built 6 new Proxmox VE 6 nodes (up to date) with the same hardware.
Each node has 2 links (2 LACP bonds on 2 Intel X520 cards):
bond0 : 2x10 Gb (management and production VMs - MTU 1500)
bond1 : 2x10 Gb (Ceph storage - MTU 9000)
bond0 is declared as the primary corosync link (link 0), bond1 as link 1...
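For reference, this is roughly how the two bonds look in /etc/network/interfaces (a simplified sketch: the NIC names and the address are placeholders, and in the real setup bond0 sits under the vmbr0 bridge for the VMs):

auto bond0
iface bond0 inet manual
        bond-slaves enp3s0f0 enp3s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
        mtu 1500
# Management and production VMs (bridged into vmbr0)

auto bond1
iface bond1 inet static
        address 10.0.2.22/24
        bond-slaves enp4s0f0 enp4s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
        mtu 9000
# Ceph storage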
Hi,
First things first: I know it is not recommended to put RAID 0 disks underneath Ceph, but that's what I did on 4 Dell R430 servers with PERC 730 controllers (with 6 15k SAS drives). I get pretty decent performance with it and have had absolutely no issues for the last 2 years. With full-SSD nodes I don't use...
Hi,
Corosync 3 doesn't use multicast anymore. It uses unicast.
OK, so I guess the cluster traffic will grow a lot for clusters with more than 3 nodes? (A corosync.conf sketch follows this post.)
Thanks in advance,
Antoine
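In case it is useful, this is roughly what the relevant part of /etc/pve/corosync.conf looks like with two knet (unicast) links; the cluster name, node ID and addresses below are made-up examples, and only one node entry is shown:

totem {
  cluster_name: dc-cluster
  config_version: 6
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4
  secauth: on
  transport: knet
  version: 2
}

nodelist {
  node {
    name: dc-prox-22
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.1.22
    ring1_addr: 10.0.2.22
  }
}

ring0_addr is on bond0 (link 0) and ring1_addr on bond1 (link 1); with knet every node talks to every other node over UDP unicast on these links.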
Hi,
We plan to re-enable barriers. The fix is to add a lot of RAM and raise MySQL's innodb_buffer_pool_size accordingly (roughly as in the my.cnf sketch after this post).
I guess you're right, 10GB with 20 SSDs may not be enough.
Thanks to all of you guys :)
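For what it's worth, the MySQL side of that change is only a couple of lines in my.cnf; the sizes below are made-up examples, not our actual values:

[mysqld]
# give InnoDB most of the available RAM so the hot data set stays in memory
innodb_buffer_pool_size = 16G
# keep full durability (this is what re-enabling barriers is protecting)
innodb_flush_log_at_trx_commit = 1
# bypass the guest page cache for data files
innodb_flush_method = O_DIRECT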
Hi,
Yes, benchmarks are done. The results are very close to what the Proxmox team published.
Benchmarking now, with 40 VMs in production, might not be meaningful.
The MySQL instances each handle about 500 to 1000 queries/sec (measured roughly as in the command after this post).
If nothing can be done, I have no choice but to set barrier=1 and keep writeback for the VMs...
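The queries/sec figure comes from something along these lines, run against each instance (the credentials and interval are just an example):

# print the change in the global status counters every 10 seconds;
# the Questions delta divided by the interval gives queries/sec
mysqladmin -u root -p --relative --sleep=10 extended-status | grep -E 'Questions|Queries'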
Hi,
I noticed that the only way to get low iowait with Ceph is to set the VM disk cache to writeback.
But that's not enough: with MySQL (InnoDB) we still see high iowait under heavy load, and we also have to disable barriers in the ext4 mount options (see the sketch after this post). After that, disk performance is fine.
On a 5-node cluster...
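For context, these are the two knobs involved; the VM ID, storage name and device below are placeholders, not the real ones:

# on the PVE host: switch the VM disk to writeback cache
qm set 101 --scsi0 ceph-pool:vm-101-disk-0,cache=writeback

# inside the guest, /etc/fstab: mount the MySQL data volume without write barriers
/dev/sdb1  /var/lib/mysql  ext4  defaults,barrier=0  0  2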
Hi,
Rebooting the node was the only way to fix the issue.
I have to say I find LXC really disappointing here.
OK, there is no overhead compared to KVM, but there are a lot of limitations (live migration first of all, and until recently NFS/CIFS mounts...).
IMHO, if you want containers you'd better use Swarm/Kubernetes (inside a...
Is there a way to shut down the veth interfaces of container 50011?
ip a | grep 50011
36: veth50011i0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v50 state UP group default qlen 1000
38: veth50011i1@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v193...
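One possible approach (just a sketch, not something I have verified on this exact box) is to take the host-side veth ends down with ip link:

# bring down both veth interfaces of container 50011 from the host
ip link set veth50011i0 down
ip link set veth50011i1 down
# optionally also detach them from their bridges
ip link set veth50011i0 nomaster
ip link set veth50011i1 nomaster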
Hi,
I first tried to shut down the container, but it hit the 60 s timeout. Then I tried to stop it, but that failed as well.
pct list
VMID       Status     Lock         Name
50011      running                 dc-btsip-01
So I decided to kill the related LXC process, but I wasn't able to find it ...
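For the record, what I would look at in that situation (a sketch; the process names can differ between PVE versions, and --skiplock needs root):

# look for the container's monitor / start process on the host
ps aux | grep -E 'lxc monitor|lxc-start' | grep 50011

# ask LXC directly to force-stop the container, bypassing the pct layer
lxc-stop -n 50011 --kill

# or force-stop through pct, ignoring a stale lock
pct stop 50011 --skiplock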
Hi,
Yes, it does. I suggest keeping guest-fsfreeze in order to get consistent backups.
It's mandatory for SQL databases. I use it to back up a ZFS disk for my SQL Server 2016 (every 2 hours), and it only locks the filesystem for 2 or 3 seconds.
If that doesn't cover your use case, you could create your own scripts...
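If you script it yourself, the freeze can also be driven manually through the guest agent from the host; 101 is just a placeholder VMID:

# freeze the guest filesystems through the QEMU guest agent
qm agent 101 fsfreeze-freeze
# ...take the storage snapshot / run the backup step here...
# thaw again as soon as possible, a frozen guest blocks all writes
qm agent 101 fsfreeze-thaw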