Thanks for sharing this information. I'll keep ext4 with barriers disabled, then. After all, this is supposed to be a cluster: one node can fail without affecting the rest.
I have not tuned stride/stripe-width, as this is a single RAID1 on two disks which are only used for booting proxmox and hosting some service containers (monitoring and such). All the real VMs reside on a separate RAID array.
The RAID1 array should be able to do around 100 IOPS, so 50 fsyncs means two...
Hi, I was investigating a difference I'm seeing in two proxmox servers I work on. They have very similar hardware (SATA disks in soft-raid, proxmox 3.1, similar specs). One of them has ext3 as the filesystem, while the other is ext4 (the beefier machine). The ext3 server had 560 fsyncs/sec. The...
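For anyone who wants to reproduce this kind of number without pveperf, here is a rough sketch of a fsync-per-second probe. It is only an approximation (it uses dd's oflag=dsync, which syncs on every write, rather than calling fsync directly), and TESTDIR/N are placeholders to adjust:

```shell
# Rough fsyncs/sec probe (sketch, not pveperf): write one 512-byte block
# N times with a sync after each write, then divide by elapsed seconds.
TESTDIR=${TESTDIR:-${TMPDIR:-/tmp}}
N=100
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    # oflag=dsync forces each write to hit stable storage before returning
    dd if=/dev/zero of="$TESTDIR/fsync-probe.$$" bs=512 count=1 \
       oflag=dsync conv=notrunc 2>/dev/null
    i=$((i + 1))
done
end=$(date +%s)
rm -f "$TESTDIR/fsync-probe.$$"
elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1   # avoid division by zero on very fast storage
echo "$((N / elapsed)) fsyncs/sec"
```

Run it once per server on the same directory depth and compare the two numbers; absolute values will differ from pveperf, but the ext3/ext4 ratio should show up the same way.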
Thank you for sharing your details. I think that having a 10G network is a good start, but something must be very wrong in my setup:
time touch test{1..1000}
real 0m30.611s
user 0m0.005s
sys 0m0.094s
I'll try tweaking some of the gluster pool parameters.
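For reference, this is the kind of first tuning pass I mean. The volume name "vmstore" is just an example from my layout, and these are stock gluster volume options — the right values depend entirely on the workload, so treat this as a starting point, not a recipe:

```shell
# Example gluster tuning pass (volume name "vmstore" is a placeholder)
gluster volume set vmstore performance.write-behind on
gluster volume set vmstore performance.io-thread-count 16
gluster volume set vmstore performance.cache-size 256MB
gluster volume info vmstore   # confirm the options took effect
```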
The script I'm using for testing...
Yes, I use xattr=sa, and there is roughly a 1:2 penalty for setting it to on.
I made a small program to list the xattrs of files, and mine are barely under the 128-byte limit for zfsonlinux, so I think I'm saving one seek per file creation.
I did a multi-threaded bonnie run on four nodes with 1, 4, 8...
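A sketch of the same kind of check, in case anyone wants to verify their own files against that inline limit. This is not my original program, just an equivalent using getfattr (from the attr package, assumed installed); it counts the bytes of attribute names plus values as a rough proxy for what xattr=sa has to store inline:

```shell
# Print an approximate xattr byte count per file. With zfsonlinux and
# xattr=sa, sets that stay under the inline limit avoid the extra
# spill-directory seek on file creation.
xattr_bytes() {
    for f in "$@"; do
        # -d dumps all matching attributes; drop the "# file:" header and
        # whitespace, then count what remains
        bytes=$(getfattr --absolute-names -d -m - "$f" 2>/dev/null \
                | grep -v '^#' | tr -d ' \n' | wc -c)
        printf '%6d %s\n' "$bytes" "$f"
    done
}
```

Usage: `xattr_bytes /var/lib/vz/private/101/some/file` — anything reporting well over 128 bytes is a candidate for the spill penalty.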
I posted this in another thread, but I think it belongs here:
ploop fits nicely into the new ceph and gluster direction the proxmox project is headed in. Currently containers are not usable on top of ceph and gluster. ploop can fix that: even if it's accessed over gluster, it will still be much...
With the recent ceph and glusterfs additions for Proxmox, I cannot see a reason not to include ploop support in proxmox. In fact I badly need such support, as running containers on top of both ceph and gluster is a pain.
I am sure the php/mysql guys will also love some better performance from...
There are a lot of ports open on proxmox by default:
- The cman
- ntpd
- gluster (in my case, and that cannot be protected easily)
- the proxy on port 8006
I mean... there is a handful for any hacker to play with.
On my current installation, I keep the cluster nodes on a private network...
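For a node that has to sit on a public IP, something along these lines is a minimal starting point. The subnets are placeholders (203.0.113.0/29 standing in for an admin network, 10.0.0.0/24 for the private cluster network), and the port list matches the services above — 8006 (pveproxy), 22 (ssh), 5404-5405/udp (corosync/cman), 24007+ (gluster bricks):

```shell
# Sketch of a minimal host firewall; adjust subnets and ports to your setup
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s 203.0.113.0/29 -p tcp --dport 8006 -j ACCEPT   # web UI, admin net only
iptables -A INPUT -s 203.0.113.0/29 -p tcp --dport 22   -j ACCEPT   # ssh, admin net only
iptables -A INPUT -s 10.0.0.0/24 -p udp --dport 5404:5405 -j ACCEPT # corosync, cluster net
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 24007:24050 -j ACCEPT # gluster
iptables -P INPUT DROP
```

Note the default-drop policy goes last, so you don't lock yourself out mid-way; test from a second session before closing the first.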
I'd like to share some experience with you regarding the use of proxmox with zfsonlinux and glusterfs for hosting KVM and OpenVZ images.
The good thing is that it works, but using zfsonlinux is a somewhat bittersweet experience.
I love zfs and the way it works on solaris kernels, but the...
qcow/raw one level higher than the FS... the problem is I'm using ZFS under gluster, and that does not like AIO + cache=none. I resorted to using writethrough or writeback for KVM guests, and that makes for a good combination.
I would like to secure a proxmox cluster where the nodes have public IP addresses and there is no separate firewall in front. What would be the best way to proceed?

jinjer
Yes, the gluster addresses are in /etc/hosts. I'm using gluster 3.4.1. The gluster cluster works properly; there are no issues there. I traced the problem down to the cache=none default caching parameter that proxmox sets. writeback and writethrough work properly. writethrough is supposed to be safe...
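For anyone hitting the same thing: the cache mode can be set per disk with qm. VM id 101 and the storage/volume names here are just examples from my layout:

```shell
# Override the cache=none default on an existing virtio disk
qm set 101 --virtio0 gluster-store:vm-101-disk-1,cache=writethrough
# or, trading some safety for speed:
qm set 101 --virtio0 gluster-store:vm-101-disk-1,cache=writeback
```

The same `cache=` option can be picked in the GUI under the disk's advanced settings.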
Hi all, I built a cluster of 4 proxmox nodes and shared their storage via glusterfs. I'm hitting a problem when trying to set up a KVM image on top of gluster: proxmox is able to create the images for the disks on top of it (mount etc.) but when KVM starts it cannot write to the disk at all. I...
This is nice to know. Quite possibly the OSDs are talking to each other over different sockets, which helps the switch balance the traffic via normal means (802.3ad).
I'm going with gluster, where this is less of a possibility, as each brick is a whole server and talks to the others over the same...
I see where you're going, so I'll skip the rest. You're correct that iSCSI multipath is preferable to balance-rr, but until we have ceph-multipath or gluster-multipath, that is not an option for a scale-out cluster.
OTOH I don't remember the last time I saw a switch fail or a nic...
I'd like to add a final comment on the above. After restoring the OpenVZ container from a backup, the NEIGHBOUR_DEVS parameter was gone from <veid>.conf (removed by proxmox or the vz library).
With that parameter gone, I have no more issues with ARP. The correct...
Ok, it seems there is a bug in the detection of the proper device on which to answer ARP requests.
In the default <veid>.conf we have:
NEIGHBOUR_DEVS=detect
This is supposed to work, but it does not. When I change it to the correct interface I get proper arp answers:
NEIGHBOUR_DEVS=vmbr1
I'm close to the solution of the "issue".
There is one last problem and it is the ARP packets.
OpenVZ is not answering ARP requests on the second interface, where the other network is connected. I see the ARP request coming in, but there is no ARP reply.
Any idea where to look for this?
EDIT: It...
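In case someone lands here with the same symptom, these are the checks I'd start with. vmbr1 is the bridge carrying the second network in my setup — substitute your own interface names:

```shell
# Watch ARP on the bridge: do requests arrive, and is any reply generated?
tcpdump -eni vmbr1 arp
# Same on the host side of the container's venet device
tcpdump -eni venet0 arp
# venet networking relies on the host proxy-ARPing for the container,
# so this should read 1 on the interface facing the requester
cat /proc/sys/net/ipv4/conf/vmbr1/proxy_arp
```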