I would like to run OpenVZ containers using the venet device on different networks.
I normally host containers on a private network, protected with firewalls, but I also need some of these containers to bypass the firewalls and use a different route/gateway.
I managed to achieve a...
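For reference, one way to get a different gateway for selected venet containers is source-based policy routing on the host node, roughly like this (the table name, container IP and gateway below are made-up examples, adjust them to your network):
echo "100 bypass" >> /etc/iproute2/rt_tables
ip rule add from 203.0.113.10 table bypass
ip route add default via 203.0.113.1 dev eth1 table bypass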
I read the whole thread, and here is my suggestion: if CFQ solves the issue, just switch the scheduler to CFQ before the backup and switch it back to noop or deadline for normal operation.
It can't be worse than having a VM remount / read-only because of a journal write timeout.
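Something along these lines, assuming the VM storage sits on /dev/sda (adjust the device name and plug in your actual backup job):
echo cfq > /sys/block/sda/queue/scheduler       # before the backup window
# ... run the backup ...
echo deadline > /sys/block/sda/queue/scheduler  # back to normal operation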
I would like to move some openvz containers from a local directory to another local directory which is on shared storage, for HA purposes.
I think that it is not possible to do so using the web interface, so I'm trying to do it manually.
The VZ root directory is in the...
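Roughly what I had in mind (just a sketch; CTID 101 and the paths are examples, and on Proxmox the container config may live under /etc/pve/openvz/ rather than /etc/vz/conf/):
vzctl stop 101
rsync -a /var/lib/vz/private/101/ /mnt/shared/private/101/
# point VE_PRIVATE in the container config at the new private area
sed -i 's|/var/lib/vz/private|/mnt/shared/private|' /etc/vz/conf/101.conf
vzctl start 101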
udo, thank you for the numbers. Yes, the latency seems huge. Does the server feel slow and sluggish during the test?
The -r parameter is wrong. If you have a 1G VM and 16G of RAM on the node, you should test with -r 16384, otherwise any unused RAM on the host node will act as a cache...
I'm a strong supporter of software RAID. Hardware RAID has little (and expensive) memory, a slow CPU and so on.
However, I don't buy the complexity argument. You're replacing a level of "simple" complexity (the local RAID disk handling) with a complexity which is orders of magnitude higher (i.e. remote...
The following should get you going:
mkdir /mnt/bonnie
chown nobody /mnt/bonnie
bonnie++ -f -n 384 -u nobody -d /mnt/bonnie
Also, don't forget to use the [-r ram-size-in-MiB] option to tell bonnie how much physical RAM your node has (or you could be benchmarking the cache on your host, which is...
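For example, on a node with 16GB of RAM the full command would be:
bonnie++ -f -n 384 -u nobody -d /mnt/bonnie -r 16384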
Good to see some numbers, symcomm. Maybe I should rename the thread.
All you care about is IOPS in the 64-128k range, in addition to the 4k range.
I think that your tests are heavily skewed by the in-memory cache. The write tests instead seem to show the real thing, topping out at around 100 IOPS, which...
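To take the page cache out of the picture you could force direct I/O in the read/write tests, e.g. something like (file path and sizes are just examples):
dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=100000 oflag=direct
dd if=/mnt/test/ddfile of=/dev/null bs=64k iflag=direct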
With interconnected switches, you're exercising the STP algorithm on the switches, as they see the same MAC address both on one of their own ports and on the port facing the other switch.
I would try a few tests between two nodes without a switch in the middle, to make sure that the whole balance-rr thing...
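A quick way to check the aggregate throughput between two nodes is iperf over the bond (the IP is just an example; start the server side first):
iperf -s                  # on node A
iperf -c 10.10.10.1 -P 4  # on node B, 4 parallel streams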
Hi,
Want to hear the weirdest problem I've had so far with Proxmox?
I've been running serial consoles on my servers for longer than I can remember.
What happens is that when the serial console is enabled (linux ... console=tty0 console=ttyS0,115200n8) I have a pause during the boot process. As...
With 4MB blocks you're probably being limited by the network speed (10G Ethernet?).
The KVM tests seem to incur a 2x penalty, but perhaps you're running into a default IOPS limit for KVM (which is good, to prevent a single KVM guest from bringing down the cluster).
How many servers are you using to host that 36...
I did some more investigation into running Ceph+KVM on the same hardware, and according to the Ceph documentation it's a big no to run mon/osd alongside virtualization or other concurrent processes, especially when running on the same disks.
My take is that Ceph will not work properly on limited resources...
I went ahead and tried to upgrade a single 2.3 node to 3.1. The new node seems to be recognized properly by the other cluster members, and (non-live) migration works.
My guess is that it will "kind of work".
:)
To bring this thread back on topic, I'd like to know what happens when a Ceph node resets and comes back online. Does the cluster maintain some sort of "map" of the modified blocks for each node (file), or does it start a full-scale resync of the KVM machine images? I am told GlusterFS...
I would like to upgrade a cluster of 3 nodes running Proxmox 2.x (originally this was a 1.x cluster).
I cannot stop all running VMs (OpenVZ and KVM) for the time necessary to upgrade the cluster.
Can I upgrade one node at a time?
What issues might arise from keeping the VMs running during the upgrade process...
I see no problem in your setup. It's actually an old, little, well-kept secret :)
Separating the links across different, independent switches is precisely what you want, to avoid a single point of failure and also break the 1G barrier. Unless you want to spend big bucks on stackable and...
I am really glad you made it work in the end with the three separate switches (failover and speed together).
I am not sure how much parallelism there is in Ceph during the synchronization, but if you're copying from a few nodes at a time to recover a failed node, it's surely a nice thing to be able...
Off the top of my head, the things to try would be:
1. Use an even number of links (start with 2 links).
2. Remove the switches and try connecting two nodes with crossover cables, to test whether the switches are the problem (a bare-bones bond config for such a test is sketched after this list).
3. Do you have VLANs or bridges stacked on the bond interface? Try running on a flat bond with no...
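For point 2, a bare-bones balance-rr stanza in /etc/network/interfaces for a back-to-back test could look like this (interface names and address are examples; the exact option names may vary between releases):
auto bond0
iface bond0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        slaves eth1 eth2
        bond_miimon 100
        bond_mode balance-rr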