Search results

  1. J

    OpenVZ (venet) containers on different interfaces and networks

    I would like to run OpenVZ containers, using the venet device, on different networks. I normally host containers on a private network, protected with firewalls, but I also need some of these containers to bypass the firewalls and use a different routing/gateway. I managed to achieve a...
  2. J

    High server load during backup creation

    I read the whole thread, and I'd offer a suggestion: if CFQ solves the issue, just switch the scheduler to CFQ before the backup, and switch it back to noop or deadline for normal operation (a minimal sketch follows after this list). It can't be worse than having a VM remount / read-only because of a journal write timeout.
  3. J

    High server load during backup creation

    It's very easy to saturate NFS with the amount of writes that the backup creates. To simulate the same load, try dd if=/dev/zero of=/mnt/nfs/out bs=1M (a fuller sketch follows after this list).
  4. J

    Move openvz containers from local storage to shared storage

    I would like to move some openvz containers from a local directory to another local directory which is on shared storage, for HA purposes. I think that it is not possible to do so using the web interface, so I'm trying to do it manually. The VZ root directory is in the...
  5. J

    Ceph large file (images) resync

    Udo, thank you for the numbers. Yes, the latency seems huge. Does the server feel slow and sluggish during the test? The -r parameter is wrong. If you have a 1G KVM machine and 16G of RAM on the node, you should test with -r 16384, otherwise any unused RAM on the host node will act as cache...
  6. J

    Ceph large file (images) resync

    I'm a strong supporter of software RAID: hardware RAID has little (and expensive) memory, a slow CPU, and so on. However, I don't buy the complexity argument. You're replacing a level of "simple" complexity (the local RAID disk handling) with a complexity that is orders of magnitude higher (i.e. remote...
  7. J

    Ceph large file (images) resync

    The following should get you going: mkdir /mnt/bonnie; chown nobody /mnt/bonnie; bonnie++ -f -n 384 -u nobody -d /mnt/bonnie. Also don't forget to use the [-r ram-size-in-MiB] option to tell bonnie++ how much physical RAM your node has (or you could be benchmarking the cache on your host, which is... a sketch with -r filled in follows after this list).
  8. J

    Ceph large file (images) resync

    Good to see some numbers, symcomm. Maybe I should rename the thread. All you care about is IOPS in the 64-128k range, in addition to the 4k range. I think that your tests are heavily skewed by the in-memory cache. The write tests instead seem to show the real thing, topping out at around 100 IOPS, which...
  9. J

    Ceph large file (images) resync

    With interconnected switches, you're exercising the STP algorithm on the switches, as they see the same MAC address both on one of their own ports and on the port of the other switch. I would try a few tests between two nodes without a switch in the middle (an iperf sketch follows after this list), to make sure that the whole balance-rr thing...
  10. J

    Weirdest problem with serial console

    Hi! Want to hear the weirdest problem I've had so far with Proxmox? I've been running serial consoles on my servers for longer than I can remember. What happens is that when the serial console is enabled (linux ... console=tty0 console=ttyS0,115200n8) I get a pause during the boot process (a GRUB sketch follows after this list). As...
  11. J

    Ceph large file (images) resync

    Don't forget about me re: sharing notes :) Dell M series... sweet.
  12. J

    Version different between 2 nodes (licensed and open source).

    You probably need to add the pve-no-subscription repository to your open-source node to get updated packages (a sketch of the repository entry follows after this list).
  13. J

    Ceph large file (images) resync

    With 4MB blocks you're probably being limited by the network speed (10G ethernet?). The KVM tests seem to incur a 2x penalty, but perhaps you're running into a default IOPS limit for KVM (which is good, to prevent a single KVM guest from bringing down the cluster). How many servers are you using to host that 36...
  14. J

    Cluster in a box

    I did some more investigation on running Ceph+KVM on the same hardware, and according to the Ceph documentation it's a big no to run mon/osd alongside virtualization or other concurrent processes, especially when running on the same disks. My take is that Ceph will not work properly on limited resources...
  15. J

    Mixing proxmox 2.x and 3.x nodes in the same cluster

    I went ahead and upgraded a single 2.3 node to 3.1. The new node seems to be recognized properly by the other cluster members, and migration (not live) works. My guess is that it will "kind of work".
  16. J

    Ceph large file (images) resync

    :) To bring this thread back on topic, I'd like to know what happens when a Ceph node resets and comes back online. Does the cluster maintain some sort of "map" of the modified blocks for each node (file), or does it start a full-scale resync of the KVM machine images? I am told glusterfs...
  17. J

    Mixing proxmox 2.x and 3.x nodes in the same cluster

    I would like to upgrade a cluster of 3 nodes running Proxmox 2.x (originally this was a 1.x cluster). I cannot stop all the running VMs (OpenVZ and KVM) for the time necessary to upgrade the whole cluster. Can I upgrade one node at a time? What issues might arise when running during the upgrade process...
  18. J

    Ceph large file (images) resync

    I see no problem in your setup. It's actually an old, little, well-kept secret :) Separating the links over different and independent switches is precisely what you want in order to avoid a single point of failure and also to break the 1G barrier. Unless you want to spend big bucks on stackable and...
  19. J

    Ceph large file (images) resync

    I am really glad you made it in the end with the three separate switches (failover and speed together). I am not sure how much parallelism there is in Ceph during the synchronization, but if you're copying from a few nodes at a time to recover a failed node, it's surely a nice thing to be able...
  20. J

    Ceph large file (images) resync

    Off the top of my head, the things to try would be: 1. Use an even number of links (start with 2 links). 2. Remove the switches and try connecting two nodes with crossover cables, to test whether the switches are a problem (an iperf sketch follows after this list). 3. Do you have VLANs or bridges stacked on the bond interface? Try running on a flat bond with no...
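
Sketch for item 2, switching the I/O scheduler around a backup window. The device name (sda) and the sysfs path are assumptions about a typical setup, not details from the thread:

    # show the available schedulers; the active one is in brackets
    cat /sys/block/sda/queue/scheduler
    # switch to CFQ before the backup starts
    echo cfq > /sys/block/sda/queue/scheduler
    # ... run the backup ...
    # switch back to deadline (or noop) for normal operation
    echo deadline > /sys/block/sda/queue/scheduler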
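
Sketch for item 3, simulating backup-style sequential writes on the NFS mount. The snippet only shows the command up to bs=1M; the count and oflag=direct parts are assumptions added here to bound the test and bypass the page cache:

    # write ~4 GiB of zeroes to the NFS export
    dd if=/dev/zero of=/mnt/nfs/out bs=1M count=4096 oflag=direct
    # remove the test file afterwards
    rm /mnt/nfs/out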
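
Sketch for items 5 and 7, the bonnie++ run with the -r option filled in, assuming a node with 16G of RAM as in item 5:

    mkdir /mnt/bonnie
    chown nobody /mnt/bonnie
    # -r takes the node's physical RAM in MiB so bonnie++ sizes its test files
    # large enough that the host page cache does not skew the results
    bonnie++ -f -n 384 -u nobody -r 16384 -d /mnt/bonnie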
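
Sketch for item 10, the serial console kernel parameters from the post placed in the usual GRUB 2 configuration; everything except the console= parameters is an assumption about the rest of the setup:

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
    GRUB_TERMINAL="console serial"
    GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
    # then regenerate the config
    update-grub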
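
Sketch for item 12, adding the pve-no-subscription repository, assuming a Proxmox VE 3.x node on Debian wheezy (adjust the release name for other versions):

    # /etc/apt/sources.list.d/pve-no-subscription.list
    deb http://download.proxmox.com/debian wheezy pve-no-subscription

    # then refresh and upgrade
    apt-get update && apt-get dist-upgrade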
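
Sketch for item 20 (and item 9), measuring raw throughput between two directly cabled nodes with iperf; the address and stream count are placeholders:

    # on node A
    iperf -s
    # on node B, replacing 10.0.0.1 with node A's address on the bond
    iperf -c 10.0.0.1 -P 4 -t 30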
