LACP requires that each connection go over a single link; this preserves packet ordering and makes the bond behave like a normal Ethernet link.
In LACP, the "sender" decides which link to send packets on using a hash function that takes variables from L2, L3, or L4, or a combination of them (e.g. MAC address, IP...
With 802.3ad each connection is only allowed to travel on one link. This is fine as long as you have multiple concurrent connections to different clients; eventually they even out. Sometimes, though, the hashing algorithm will not distribute the load evenly between the interfaces.
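For reference, a minimal sketch of an 802.3ad bond in /etc/network/interfaces (Debian/ifenslave style, as used by Proxmox); the interface names eth0/eth1 and the addressing are placeholders:

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        slaves eth0 eth1
        bond_mode 802.3ad
        bond_miimon 100
        # hash on L3+L4 headers so different TCP/UDP flows can take
        # different links; any single flow still stays on one link
        bond_xmit_hash_policy layer3+4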
balance-rr is different: it is load...
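To illustrate the difference, a hedged balance-rr sketch (same placeholder names as above): round-robin stripes individual packets, not connections, across the slaves, so a single stream can exceed one link's bandwidth at the risk of out-of-order delivery, and it needs no LACP support on the switch side:

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        slaves eth0 eth1
        # round-robin: packets alternate between links
        bond_mode balance-rr
        bond_miimon 100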
Thank you. Have you considered bonding with balance-rr using multiple switches and no 802.3ad?
Also, did you run any benchmarks from inside a Proxmox KVM guest running on the above?
Thank you for pointing this out. I refer to wheezy because we're running a wheezy-based distro with a Red Hat kernel. I guess recompiling ceph from source will be necessary anyway, and rebuilding the Debian source packages will probably be easier than using the...
I have not tried (yet) to install ceph on proxmox 3.1.
The wiki says that it's not currently possible.
The ceph team has a "howto" for installing ceph on debian wheezy.
Is the wiki not updated, or is it that ceph won't run on 3.1 (perhaps because of the kernel)?
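For reference, the gist of that wheezy howto is just adding the ceph.com apt repository; a rough sketch, where the release name "dumpling" is an assumption (pick whatever is current):

    # import the ceph release key and add the repo for wheezy
    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
    echo deb http://ceph.com/debian-dumpling/ wheezy main > /etc/apt/sources.list.d/ceph.list
    apt-get update && apt-get install ceph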
Yes, that would be the only way to use the disks in the computing nodes.
Ideally ceph would be disk-intensive and proxmox CPU-intensive, making better use of the hardware.
The RAM could be expanded to whatever is necessary for ceph + kvm/openvz.
There are some question marks, however:
1. I...
Hi all,
I've been working on an idea for a Proxmox-based "cluster in a box".
A couple of years ago I installed something similar on an Intel modular server that is still running. The main problem is the very low performance of the disk subsystem, which is a real bottleneck.
What I would like...
Welcome aboard. The thing with ceph is that it seems to be the "promised land of the future". However, none of the benchmarks I see take into account a real cluster with several connected nodes running some real load. All you get is a single OSD running against a single client with some...
symmcom, I see where you're coming from. Many things have changed in 3.4 too, and they put me off until I started using the "new commands way".
I have no experience with ceph. I'm trying to understand how to proceed before I build a test cluster. Normally my "clusters" are made of 3-4 nodes at most. Since...
What was the reason for you settling on ceph? Was it a performance/feature reason, or only the misunderstanding that gluster can only run on two nodes?
I'm wondering about the speed of ceph at resyncing large files (e.g. kvm images) in the event of a failure of one of the nodes.
Say a node in a ceph cluster is rebooted, taking its images away for a few minutes.
In glusterfs it takes a read-back of the whole VM storage to resync the images.
Is...
Yep... I use zilstat and arc_summary on my 'solaris' servers. Unfortunately you can't run them on Linux, so I'm left guessing the correct sizes from "experience"; but this being my first Linux ZFS install, I'm not sure how much of the Solaris stuff applies to Linux.
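For what it's worth, ZFS on Linux does expose the ARC counters through procfs, so a rough substitute for arc_summary is just reading them (field names taken from the arcstats file itself):

    # print ARC hit/miss counts, current size and the configured maximum
    awk '/^(hits|misses|size|c_max) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats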
Oops, I wrote "not common practice" while I meant "it is now common practice".
Sure, RAM is king for ZFS. In the end it all depends on your data and access pattern. Say you need a separate ZIL but are cost-conscious: it's a pain to use a big mirror of SSD disks just to store 1-2 GiB (at best) of...
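As an illustration of the cost-conscious route, a hedged sketch: carve a small partition from each SSD for a mirrored SLOG and leave the rest of the flash free for other uses. Device names and the 8 GiB size are assumptions, and the disks are presumed to already carry a GPT label:

    # create a small slog partition on each SSD
    parted -s /dev/sdc mkpart slog 1MiB 8GiB
    parted -s /dev/sdd mkpart slog 1MiB 8GiB
    # attach them to the pool as a mirrored log device
    zpool add tank log mirror /dev/sdc1 /dev/sdd1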
Thank you for pointing that out. My take is that some of this information is not current.
The endianness issue across different platforms is only theoretical. I don't plan to mount this pool on some Motorola or RISC hardware, and I don't plan on mounting it in Solaris either. I'm quite sure I can mount it...
I ended up making a node with 8 x 1TB 2.5" hard disks spinning at 7200rpm.
proxmox was installed on two of these hard disks by manually partitioning them and using only 64GB per disk. I then manually converted the install to Linux md software RAID1.
The rest of the disks became the first vdev of the...
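For the curious, a hypothetical sketch of that pool creation; the vdev type (raidz2 here) and device names are assumptions since the post is truncated, and in practice /dev/disk/by-id names are preferable to sdX:

    # the six remaining disks become the first vdev of the pool
    zpool create tank raidz2 sdc sdd sde sdf sdg sdh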
Ouch... same issue indeed. I still think it's unfortunate, as a FS can fail to mount for any reason. Initializing an empty directory with default data should not be allowed to happen at any time other than setup (i.e. when adding the storage via the GUI, or when explicitly forcing a storage to...
I've been experimenting with ZFS on Linux in the last few days.
ZFS is running fast and stable and all looks great.
There is however a small issue with proxmox and the shares mounted from zfs.
The preconditions are:
By default, zfs will not mount a zfs dataset in a directory if the directory...
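To make the precondition concrete, a minimal illustration (the pool name "tank" and dataset "tank/vmdata" are assumptions). If something, e.g. Proxmox, creates files under the mountpoint before ZFS gets to mount the dataset, the mount fails; ZFS on Linux offers an overlay mount as an escape hatch:

    zfs mount tank/vmdata
    # cannot mount '/tank/vmdata': directory is not empty
    # -O performs an overlay mount on top of the non-empty directory
    zfs mount -O tank/vmdata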