Seconded, provided it allows aggregating the bandwidth of several links for a single TCP connection...
This would be very nice for link aggregation and failover of DRBD connections over 2+ GbE paths.
I think I had already posted this, so thanks for debunking it ;-)
Would it have needed to be Ethernet over InfiniBand (EoIB) to support VLANs?!
Could MPTCP (http://www.multipath-tcp.org) offer an alternative? And on that subject, could it be used to aggregate the throughput of several 1 Gbps links for a single TCP connection, e.g. for a DRBD resource?
Thanks for your insights.
@jinger about switch failure... I've had two switch deaths in the last 6 months alone (and at the same client!). Sh@#€%& happens man, deal with it or you'll be bitten hard.
About improving network path load balancing and redundancy... couldn't Multipath TCP offer a good alternative to Ethernet LAGs...?
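For what it's worth, here is a minimal sketch of what that could look like with the upstream Linux MPTCP implementation (assuming kernel 5.6+ and an iproute2 with `ip mptcp` support; the interface names and addresses are hypothetical). Note that DRBD's replication sockets live in kernel space, so the userspace `mptcpize` wrapper shown at the end only applies to regular applications:

```
# Allow up to 2 additional subflows per MPTCP connection
ip mptcp limits set subflows 2 add_addr_accepted 2

# Declare the second NIC as an extra path (hypothetical address/interface)
ip mptcp endpoint add 192.168.2.10 dev eth1 subflow

# Force a legacy TCP application onto MPTCP with mptcpize (from mptcpd),
# e.g. for a quick aggregate-throughput test:
mptcpize run iperf3 -c 192.168.1.20
```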
And when you have several sources configured, use `apt-cache policy <pkgname>` to see the available versions of a package.
Then, once you begin cherry-picking from different repos, you may want to configure repository priorities and package pinning to keep your upgrades safe (see the sketch below).
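As a hedged illustration (the release names and priorities here are made up, not a recommendation), pinning goes in an APT preferences file:

```
# Hypothetical example: prefer stable; keep testing available
# but only installed when explicitly requested
cat > /etc/apt/preferences.d/99-pinning <<'EOF'
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 200
EOF
```

`apt-cache policy <pkgname>` will then show these priorities next to each candidate version.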
Very interesting thread!
Point cleared, thanks!
So in any configuration where the storage subsystem provides more than 300 MB/s, a 10 GbE link is _mandatory_ for DRBD if you don't want the network to be the bottleneck!?
Now I'm thinking of distributing my DRBD links across several [2|3]x GbE...
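For the record, the back-of-envelope math behind that (the effective per-link rate is an assumption based on typical TCP/IP overhead):

```
# 1 GbE = 1000 Mbit/s ≈ 125 MB/s raw → roughly 110 MB/s effective payload
# 300 MB/s ÷ ~110 MB/s per link ≈ 2.7 → at least 3 GbE links in parallel,
# or a single 10 GbE link (≈ 1250 MB/s raw) with plenty of headroom
```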
I have a pair of HP ML350 G6 servers with LSI MegaRAID 9266-4i + CacheVault controllers driving Supermicro CSE836E16 enclosures via SAS brackets, filled with WD RE 3TB SATA drives. On these I configured a RAID10 array with one big virtual/logical drive, with write caching and read-ahead enabled...
It is indeed!
Would you tell us how you configured your RAID? A single array on each node? Which level?
I've only done some introductory reading about Ceph, but as I understand it, one of its strengths seems to be providing redundancy itself instead of relying on dedicated HW controllers (but it doesn't replace...
Could you please give us an example or two of the advanced routing and VLAN troubles one could face?
Does this depend directly on the subnet mask (e.g. initialization done up front for 65k devices, since a /16 mask implies 2^16 = 65536 addresses), or on the actual number of devices on the network?
Thank you in advance for your...
OK, this better explains its inclusion in the qemu code base... and furthermore it would be a great contribution to qemu itself, as it would also solve availability on other platforms.
Hope the qemu devs will accept it soon.
Thanks, spirit, for this clarification.
Keep it up, y'all!
LVM snapshots (over DRBD devices) and tar.lzo backups do exactly what I need and want, and they do it well... AFAIC!
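For anyone curious, a minimal sketch of that kind of backup (the VG/LV names, snapshot size, mountpoint and target path are all hypothetical):

```
#!/bin/sh
# Take a consistent point-in-time snapshot of the LV sitting on the DRBD device
lvcreate --snapshot --size 5G --name backup-snap /dev/vg0/data

# Mount it read-only and stream a tar archive through lzop
mkdir -p /mnt/backup-snap
mount -o ro /dev/vg0/backup-snap /mnt/backup-snap
tar -cf - -C /mnt/backup-snap . | lzop > /backup/data-$(date +%F).tar.lzo

# Clean up
umount /mnt/backup-snap
lvremove -f /dev/vg0/backup-snap
```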
I'm afraid my English is even worse than I ever thought... I _never_ said it was not working or buggy, and I never made any quality statement about the code...
I expressed my gratitude right from the start. I have advocated PVE for years to everyone interested I've talked to, and on all the social networks I'm active on.
I wish I were a Proxmox customer, but the decision not to pay is not mine and, in any case, calling your users "baby" is surely not the right...
AND YOU CALL THIS A BACKUP??!!!!
Please believe me, it's hard to say this because I'm so grateful for the work done over the years providing us with this great platform. But this switch to a new backup archive format couldn't have been handled more badly :(
- First, and at the very least, this change...
Thank you for your quick answer.
I guess the 'no' answer to the second question is implied!
As VMA is in a branch of your Qemu tree... are there any plans to submit it upstream? What's upstream's attitude about this? Will the major issue raised by your answer later be solved by some qemu tool...